content
stringlengths
86
994k
meta
stringlengths
288
619
A. The vacuum boundary B. The center A. Limits on and B. The local conductivity C. Definition of D. The energy groups E. The diffusion approximation F. Speeding up the simulation
{"url":"http://scitation.aip.org/content/aip/journal/pop/16/6/10.1063/1.3155445","timestamp":"2014-04-17T20:28:38Z","content_type":null,"content_length":"93605","record_id":"<urn:uuid:12937aca-1093-49e4-a71c-69754175c996>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
General Departmental Seminar Series Dealing with Discreteness: Making 'Exact' Confidence Intervals for Proportions, Differences of Proportions, and Odds Ratios More Exact Alan Agresti, Department of Statistics University of Florida Friday, May 4, 2001, 12:00-1:00 pm 3285 Medical Sciences Center 1300 University Avenue `Exact' methods for categorical data are exact in term of using probability distributions that do not depend on unknown parameters. However, they are conservative inferentially, having actual error probabilities for tests and confidence intervals that are bounded above by the nominal level. We examine the conservatism for interval estimation and suggest ways of reducing it. We illustrate for several parameters of interest with contingency tables, including the binomial parameter, the difference between two binomial parameters, the odds ratio and relative risk in a $2\times 2$ table, and the common odds ratio for several such tables. Less conservative behavior results from devices such as (1) inverting tests using statistics that are "less discrete," (2) inverting a single two-sided test rather than two separate one-sided tests of half the nominal level each, (3) using unconditional rather than conditional methods (where appropriate) and (4) inverting tests using alternative P-values. We also summarize simple ways of adjusting some large-sample methods to improve their small-sample performance. Back to General Departmental Seminar Series
{"url":"http://www.biostat.wisc.edu/Seminars/SeminarAbstracts2000-2001/dept050401.htm","timestamp":"2014-04-18T20:45:45Z","content_type":null,"content_length":"17416","record_id":"<urn:uuid:face3e98-ed5c-427a-96b9-416d1a4153a5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Dumbbell shaped domain up vote 0 down vote favorite A m-dumbbell shaped domain is a simply connected domain consisting of m disconnected domain joined by thin tubes. Clearly, because it is simply connected then H^1 is trivial. Moreover is a convex domain but what about the higher order cohomology of this domain? More preciselly, is there an integer d greater than 1 for which H^d is NOT trivial? 3 I don't understand where you get convexity from (or what you mean by it). Where is this happening? In $\mathbb{R}^n$? If so, and if it is convex, then it will be contractible and hence have all $H ^i=0$ for $i>0$. But, aside from convexity, the 2-sphere (which is two disks joined by a tube) is an example with $H^2\neq 0$. – HJRW Feb 25 '13 at 14:36 What do you mean by thin tube? A neighborhood of an arc? The boundary of a neighborhood of an arc? – Jim Conant Feb 25 '13 at 14:42 The question is ill-posed (lacks definitions and motivation) and any interpretation of it I can figure out are not research-level, I therefore vote to close. – Benoît Kloeckner Feb 25 '13 at 20:51 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged at.algebraic-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/122889/dumbbell-shaped-domain?answertab=oldest","timestamp":"2014-04-21T07:39:48Z","content_type":null,"content_length":"47855","record_id":"<urn:uuid:50b78cb3-cd65-4311-8495-13dd75394353>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
"Real-Life Math" A Plethora of Ideas for Real-Life Math by Kathi, Melanie, Beth, Wendy, and Amy (for all ages) Here are just a few suggestions for real-life math. But keep in mind that what you do and how you live will determine a lot of what is "real" for you! ~ My father had me figure out his bowling average for a whole year when I was in high school! Another idea could be a favorite ball player's batting average, or other stats for a sports team. ~ Balancing a checkbook - YOUR checkbook. Not only will this improve math skills, but will also teach some valuable lessons: how to do it, how much an income is realistically, where it all goes, how little is left for luxuries ~smile~, etc. ~ Grocery shopping - Again, valuable life lessons, but also how to find the best bargain, price per portion, unit pricing and so on. Have your child keep a price book so he/she can tell if a sale is a real bargain. (There are lots of webpages devoted to price books, do a search to find several. Another is http://www.organizedhome.com ) Eventually, you might even give your teen the shopping list and let them do the grocery shopping. ~ "Buying" a stock (or even really doing it, just one share now that they are so low in price!) and keeping track of how it does. Figuring out interest on a savings account or CD. How much would interest add to a car loan? ~ A job of some sort... mowing lawns, or shoveling driveways, babysitting or whatever, figuring out 10% for tithe, deciding what percentage goes in the bank, etc. ~ When Christmas is getting close... how much will your teen have to spend on gifts? How many people to give gifts to? Average amount to be spent on each person? (My daughter and I are working on two baby quilts as Christmas gifts, and it's a wonderful exercise in math. Now that we have the patchwork done, we need to figure out how much fabric to use for the borders - we'll figure out how to cut that from a piece of fabric and how much we'll need to buy.) ~ Cooking/baking are also wonderful experiences for real-life math... we've even made half a box of macaroni and cheese, requiring the dividing of all the ingredients, including that cheese powder. Just some simple examples, but some could be stretched over a long period of time to be an ongoing lesson or project. Joyfully in Christ, Kathi ~ joyfullykathi@yahoo.com Barb has a great article called "What Is Real-Life Learning?" that's helped me a TON to see and understand how much learning can take place in our home without a "text." Every time I reread it I am just so encouraged. Basically, any activity listed in Barb's article that includes using numbers would be math. ~ Doubling or tripling a recipe, or cutting one in halves, thirds, or fourths. (These things challenge me.) ~ We did real math with a pillow that is over sized. We needed to make a pattern to fit that particular pillow. We've had to do that for curtains too. Can't find a pattern just like we want, so we have to adjust it. ~ Many people use coupons when shopping. How much did you save with your coupons? Did your coupon encourage you to buy thing you don't normally buy? Did it encourage you to buy something you don't need? Could you have bought generic brand for less? Sometimes the generic brand really doesn't taste as good, is the price difference worth the taste difference? ~ Keep track of sales in stores to decide what would be the best buy. How much does it cost to run from store to store to hit all those sales, would you save money in gas and wear and tear on your car if you just shop at one store? ~ Figure out what the cost of that shirt will be when it is 30% off regular price. Or adding sales tax. ~ Shopping on a budget, setting up the budget and deciding how much meals will cost for the week. then stay within that budget. see how realistic were you in your original estimate. ~ How much fertilizer will you need to cover your lawn? How many square feet is your lawn? What will the price per sq. ft be? ~ What are your car's monthly costs? Gas, insurance, maintenance? How many miles per gallon does your car get? ~ Put together a lemonade stand, how much do you need to charge per cup and still pay for the lemonade and supplies you used, plus make a little profit? ~ How much lumber will it take to make a deck, doghouse, or extra room on the house? Or how much fencing will we ned to fence in the dog, or the garden... ~ Draw up plans for a dream house, and furnish it. How much will each cost? ~ For real little ones, you can figure out how to cut up your 2 apples to feed all 4 people you have for lunch. or how should we cut the pizza so every one gets an even number of pieces? Or if I cut the sandwich in 4 pieces, 1 piece = ¼, 2 pieces = ½, 3 pieces = ¾. (To the above ideas from Beth, Lauri responded: Beth your examples were WONDERFUL!! These are the very things many high school graduates can't do. Not only can't they do these things, they can't necessarily figure out *how* to do them! There is a desperate need for this type of education in schools, Praise God that my children receive this education by "osmosis" here at home!) ~ How about making change!? I can't tell you how many people I have come across who would not be able to count out your change to you if their computer went down! ~ Wendy Something we just started with our 11-year-old daughter is we got her a bunch of chickens and she is keeping track of the expenses vs. (future) income She is thrilled whereas some children would not be very excited about chickens. It's about finding the delight of your child's heart and connecting it to the things they need to learn. ~ Heidi Amy shared: ~ Estimating could easily top the list of "important everyday math skills." About how much money do you have to shop with? About how much will three of these cost? Will I have enough left over for that pie, or should I plan on making one myself from what I have at home? About how many apples will that take? We need new bookshelves...how long should they be? How many supports will we need? What about mollies or screws for hanging them? How many feet of lumber will we need, and how can we plan so that we waste the least amount of wood when we cut it? How big is that garden area, and how much compost will we need to make to cover it over the winter? And don't forget miles per gallon... how do you figure that, and why does it matter? Maybe you can come up with a house project such as painting your daughter's room, or putting up a wallpaper border, or making a comforter. That would give lots of real-life practice with estimating, multiplying, dividing, etc. And it has a real-life motive, not just "do it for math class." ~ The example of unit cost (cost per ounce, pound, or whatever) is another excellent example. I've taught my sons and daughter to check the unit price at the store, not just to grab the biggest box (or the smallest box) or the bulk foods. However, the boys still need more work on estimating the unit cost for themselves, for those lovely times when one size product is marked "per ounce" and another size of the same product is marked "per pound" or "per quart" or even "per EACH" (now, how helpful is THAT, I ask you?! LOL). How many loads of laundry can I do with this box of soap? So what's the cost per load? ~ Budgeting is another important math (and critical thinking) area, too! With your daughter, write a list of bills, offerings, savings, etc. due each month (and for us, we need to know WHEN they are due, as well, since we don't have regular pay-days). A balance of $500 in the checking account does not necessarily mean we can go out for pizza... Is that money already spoken for/accounted for? Of course, this will lead into good discussions of choices, stewardship, blessing of others, savings both long-term and short-term, even interest and investments and loans and the evils of abused credit....oh, sorry, talking about myself here! ;-) But I'd start with the bare bones of daily/monthly money planning, sharing WHY you make the choices you make, and (in my case again) the consequences of poor past choices (gulp). (Click here to see the "Finance Record" I made up for Carlianne. She started using it a year ago (at age 15) and still faithfully keeps it up. After every shopping outing, she can be found diligently entering all her figures ~ and balancing!) And all of this "life learning" will "count" under Consumer Math, or General Mathematics, or... ~Amy in WA This weekend my daughter made a cake for church and cupcakes. She had to measure everything and double the recipe as it was a double batch. I taught her about fractions. Yippee!!!!!!!! ~ Cindy O. And one last thought from Amy to close this discussion of "real-life" learning with... ┃ "The "real world" is full of adults who cannot make change without a machine telling them the exact amount to give back, who cannot balance their checkbook, who think that "the more they spend, ┃ ┃ the more they save," (as the ads tell us) and so on. I wouldn't worry too much about what the so-called "real world" does! LOL ┃ Here is a little bonus... ┃ How to ┃ ┃ ┃ ┃ Count Back Change ┃ ┃ ┃ ┃ to a Customer ┃ ┃ ┃ ┃ One mom said: "I can't count how many times I have gone in a store during a storm or other "computer down time" and the checkers can't even count out the change without their computers or ┃ ┃ calculators!" Exactly!! And that's yet another great "math function" to know! In fact, you don't even have to know to subtract in your head, you just need to know the operation! ~ which really is ┃ ┃ only counting UP! ~ by one's, five's, ten's and then quarters. (In case you don't know, all you do is start with the amount of purchase... Let's say the thing cost $2.32 cents, and they gave you ┃ ┃ $10. OK, you just say "32" and then count pennies saying "33, 34, 35" at which point you've hit a five, so then you skip-count, pulling out a nickel saying "40," then skip count by 10, pulling ┃ ┃ out a dime, to 50, then two more quarters take you up to three dollars. Then pull out one dollar ~ $4 ~ another dollar ~ $5. And then finally a $5 bill takes you up to $10! So you just start with ┃ ┃ the amount of the purchase and count your way UP until you get to the amount paid. I enjoy this SO much that when people pay us in cash at our book table, I actually INSIST on doing this, saying ┃ ┃ "This is just SO cool! Let me give you your change properly!" I'm sure they think I'm weird, but so what else is new?!?!?! ;-D ┃ ┃ ┃ ┃ ;-) Barb ┃
{"url":"http://www.homeschooloasis.com/art_real-life_math.htm","timestamp":"2014-04-16T04:11:24Z","content_type":null,"content_length":"26195","record_id":"<urn:uuid:a3a4f521-2b8a-4fba-b7d4-7a9e68c5c35e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole. Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages. Do not use for reproduction, copying, pasting, or reading; exclusively for search engines. OCR for page 19 19 Tire Pressure ( p) Width (W ) Tire Length (L) Figure 13. Tire load applied to pavement surface. Determination of the Effect of Cumulative Axle Load Distribution on Tire Length Because of the difficulty of employing each tire length for axle load intervals to evaluate traffic load effects on propaga- tion of reflection cracking, the effect of the axle load distribu- tion on the tire patch length for each category was used for the evaluation of traffic load. The axle load distribution inter- vals can be converted into tire length intervals using the char- acteristics of each axle type presented in Table 11. Figure 12. Comparison of SIF with ANN model The tire patch lengths of corresponding axle load intervals predictions for HMA overlays over a cracked for each category can be calculated using Equation 3 and the asphalt pavement surface layer. characteristics of axle types. Table 11 lists the calculated axle load intervals for all traffic categories, and Table 12 lists the tire patch length increments. in developing the models. These models apply only to the vari- Using the tire patch length and collected traffic data, the able ranges used as input to these models; extrapolation out- cumulative axle load distribution can be determined for each side the range of inference may not produce accurate results. category. Figure 14 illustrates the procedure for determining tire length and the cumulative axle load distribution (CALD) of Traffic Loads and Tire Footprints each category. Such distribution should be produced for all eight traffic categories to account for all types of vehicles and axles. Tire footprints are closer to rectangles than to the com- Figure 15 shows the cumulative axle load distribution of tire monly assumed circular footprints (25). In this project, rec- load for Category 1 of LTPP section 180901 in 2004, which was tangular tire footprints with known tire widths were used; determined using data in Table 12. tire footprint length was calculated from the tire load and the inflation pressure. The length of tire patch was used to eval- uate bending and shearing SIF in asphalt overlays. Also, Modeling of Cumulative Axle because the tire length is proportional to the load, a cumu- Load Distribution lative axle load distribution on tire length for each category Since the frequency distribution of each tire length of a load may be determined, based on collected traffic data such as category is used to evaluate load effects for reflection cracking WIM or AADTT. propagation, the cumulative axle load distribution (CALD) of pavement sections and traffic categories should be developed Tire Patch Length along with the tire length. The CALD of traffic loads or tire lengths follows a sigmoidal curve having a lower asymptote of The tire-load model that assumes a rectangular tire contact zero and a finite upper asymptote as shown in Figure 16. (Details area (as shown in Figure 13) was used to evaluate the effect of of the modeling process are provided in Appendix D). tire load on reflection cracking. After reviewing potential models that describe the statisti- Tire width is assumed to be constant within each traffic cal properties of the cumulative axle load distribution versus category (vehicle class and axle type) even under different tire tire length, the Gompertz model presented in Equation 4, was pressures. Thus, the tire length can be calculated as follows: chosen. Tire Length (in.) = tire load (lb) (3) y = exp [ - exp ( - x )] (4) lb tire pressure 2 × tire width (in.) in. where , , and are model parameters.
{"url":"http://www.nap.edu/openbook.php?record_id=14410&page=19","timestamp":"2014-04-18T16:07:04Z","content_type":null,"content_length":"46453","record_id":"<urn:uuid:a2d9db70-4430-46c0-bcb0-b64c7565deb0>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Size-Dependent Materials Properties Toward a Universal Equation • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information Nanoscale Res Lett. 2010; 5(7): 1132–1136. Size-Dependent Materials Properties Toward a Universal Equation Due to the lack of experimental values concerning some material properties at the nanoscale, it is interesting to evaluate this theoretically. Through a “top–down” approach, a universal equation is developed here which is particularly helpful when experiments are difficult to lead on a specific material property. It only requires the knowledge of the surface area to volume ratio of the nanomaterial, its size as well as the statistic (Fermi–Dirac or Bose–Einstein) followed by the particles involved in the considered material property. Comparison between different existing theoretical models and the proposed equation is done. Keywords: Nanomaterials, Size effect, Shape effect, Theory, Top–down Understanding how materials behave at tiny length scales is crucial for developing future nanotechnologies. The advances in nanomaterials modeling coupled with new characterization tools are the key to study new properties and capabilities and then to design devices with improved performance [1]. This study of size and shape effects on material properties has attracted enormous attention due to their scientific and industrial importance [2-4]. Nanomaterials have different properties from the bulk due to their high surface area over volume ratio and possible appearance of quantum effects at the nanoscale [5-7]. The determination of nanomaterials properties is still in its infancy and many materials properties are unknown or ill-characterized at the nanoscale [8,9]. Therefore, modeling different phenomena by only one general equation could be particularly helpful at the nanoscale when experimental data is lacking. When modeling nanomaterials, there exist two main approaches. In the “top–down” approach, one looks at the variation of the properties of systems that change when going from the macro to the nano dimensions. At the opposite, in the “bottom-up” approach, one starts from atoms and one adds more and more atoms, in order to see how the properties are modified. The first makes use of classical thermodynamics, whereas the second relies on computational methods like molecular dynamics. Molecular dynamics generally considers less than one million atoms [10] in order to keep calculation time within reasonable values. This factor limits the nanostructure size modeled until values around 100 nm [11]. By using classical thermodynamics, the “top–down” approach ceases to be valid when thermal energy kT becomes smaller than the energetic gap between two successive levels, δ. Generally for metals, according to Halperin [12], when δ/k ~ 1 K, the band energy splitting appears for diameter values between ~4–20 nm depending on the material considered. When δ/k ~ 100 K, this diameter is between ~1 and 4 nm in agreement with the value announced by Wautelet et al. [13]. The size limit considered in this manuscript will be 4 nm. Therefore, the “top–down” approach emerges as a simple complementary method which can give useful insights into nanosciences and nanotechnology. Adopting a “top–down” approach, the following equation has been proposed in a previous paper [14] to describe size and shape effects on characteristic temperatures at the nanoscale. This equation predicts the melting temperature, Debye temperature, Curie temperature and superconducting temperature of nanomaterials according to the spin of the particles involved in the considered material property. The ratio of the size/shape-dependent characteristic temperature, T[X], over the characteristic bulk temperature, T[X,∞], is given by: where X represents melting, Debye, Curie or superconducting. α[shape] is the parameter quantifying the size effect on the material property and depending on the nanostructure’s shape. α[shape] is defined as α[shape] = [D(γ[s] − γ[l])/ΔH[m,∞]](A/V) where A/V is the surface area over volume ratio, ΔH[m,∞] is the bulk melting enthalpy and γ[s(l)] the surface energy in the solid (liquid) phase. D is the size of the nanostructure. S equals to one half or one if the particles involved in the considered phenomena follow a statistic of Fermi–Dirac or Bose–Einstein. For melting and ferromagnetism (Curie), S equals to one-half, whereas for superconducting and vibration (Debye) S equals to one. One of the most important property from which we can derive almost all the thermodynamic properties of materials is the cohesive energy [15]. Indeed, the cohesive energy is responsible for the atomic structure, thermal stability, atomic diffusion, crystal growth and many other properties [6,16]. It is related to the melting temperature, activation energy of diffusion and vacancy formation energy by the following relation [15,17,18]: The cohesive energy is the energy required to break the atoms of a solid into isolated atomic species. The activation energy of diffusion is the energy required to activate the diffusion of one atom. The vacancy formation energy is the energy required to produce one vacancy i.e. a Schottky defect. All the particles involved in the cohesive energy, activation energy of diffusion and vacancy formation energy are electrons, characterized by a half integer spin, and obey then to a Fermi–Dirac statistic (Table (Table11). Distinction between “fermionic” and “bosonic” material properties By combining Eqs. 1 and 2, this suggests an extension of the universal relation developed for characteristic temperatures to other properties as the cohesive energy which is one of the most important material properties. where ξ represents the size/shape-dependent material property and ξ[∞] represents the bulk material property. The material properties considered here are the melting temperature, Curie temperature, Debye temperature, superconductive temperature, cohesive energy, activation energy of diffusion, vacancy formation energy. From Eq. 3, it is clear that for a given material (i.e. a given α[shape] parameter) and a given size (D), the size effect on materials properties described by a Fermi–Dirac statistic (“fermionic properties”) is stronger than the size effect on materials properties described by a Bose–Einstein one (“bosonic properties”). For a given material property, the size effect increases when the α [shape] parameter increases or the size of the nanostructure D decreases or both. In Fig. Fig.1,1, we have illustrated the materials properties behavior (Eq. 3) whatever the size, the shape and the nature of the material. Figure Figure1a,1a, ,1b1b illustrates the “fermionic” and “bosonic” material properties, respectively. Figure Figure22 illustrates both properties into one graph versus the reciprocal size of nanomaterials for different α[shape] values. ξ/ξ[∞] ratio versus the α[shape] parameter for different sizes in both cases: a when materials properties are described by a Fermi–Dirac statistic and b when they are described by a Bose–Einstein one. When α ... ξ/ξ[∞] ratio versus the reciprocal size for different values of α[shape] parameter. When D^−1 is equal to 0 (vertical red line) or when ξ/ξ[∞] is equal to 1 (horizontal red line) then there is no ... Results and Discussion To validate Eq. 3, we have compared the theoretical prediction with experimental data of cohesive energy for Mo and W nanoparticles (Fig. (Fig.2.)2.) and of activation energy of diffusion for Fe and Cu nanoparticles (Fig. (Fig.3.).3.). We observe in Fig. Fig.2,2, a decreasing behavior of the cohesive energy by reducing size. From Fig. Fig.3,3, we note that diffusion is more easily activated and faster [19] at the nanoscale which is then particularly interesting for industrial applications because it lowers the process temperature. Moreover, the theoretical predictions from Eq. 3 are in good agreement with experimental data. The small discrepancies with Mo data may come from the shape, here we used with Eq. 3 the α[shape] for a sphere and experimentally the shape may deviate a little bit from this ideal case. Different from complex and time-consuming computer simulation process, the universal relation (Eq. 3) can predict the mentioned materials properties from the bulk to sizes of nanostructures higher than ~4 nm. For a given material, the α[shape] parameter can be calculated and then used to explore the size effect on all the mentioned material properties (Fig. Cohesive energy versus the size of the nanostructure for molybdenum (Mo) and tungsten (W). The solid lines indicate the theoretical prediction with Eq. 3. for Mo and W nanoparticles. The symbols are the experimental values of Mo [28] and W [28] nanoparticles. ... Activation energy of diffusion versus the size of the nanostructure for iron (Fe) and copper (Cu). The solid lines indicate the theoretical prediction with Eq. 3. for Fe and Cu nanoparticles. The symbols are the experimental values of Fe [15] and Cu [ ... Vacancies play an important role in the kinetic and thermodynamic properties of materials. Therefore, the vacancy formation energy is the key to understand the processes occurring in nano and bulk materials during heat treatment and mechanical deformation. To the best of our knowledge, only bulk vacancy formation energy is known [20-22] and there is not yet experimental data concerning the vacancy formation energy at the nanoscale. As it is difficult to determine it experimentally, researchers refer to theoretical predictions. Therefore, we compared our results obtained from Eq. 3 with different models predicting the size-dependent behavior of the vacancy formation energy. Due to the linear proportionality between the cohesive energy and the vacancy formation energy [23], the surface-area-difference model from Qi et al. [24,25] which consider the difference between the surface area of a whole particle and the overall surface area of all the constituent atoms in isolated state could write the vacancy formation energy as given by Eq. 4. where p is the ratio between the interface surface energy per unit area at 0K over the surface energy per unit energy at 0K. d[hkl] is the interplanar distance of hkl. β equals to 3κ/D, 2/w or 1/t for a nanoparticle, nanowire or nanofilm, respectively. Dw and t are the size of the nanoparticle, width of nanowire and thickness of the nanofilm, respectively. κ is the shape factor of the nanoparticle defined as the surface area ratio between non-spherical and spherical nanoparticles in an identical volume. The thermodynamic model from Yang et al. [15] expresses the vacancy formation energy of nanostructures from the size-dependent cohesive energy model of Jiang et al. [26] as: where d is the atomic diameter, R is the ideal gas constant. S[b] is the bulk evaporation entropy. The effective coordination number model from Shandiz [16] is based on the low coordination number of surface atoms and it expresses the vacancy formation energy as: where Z[SB] is the ratio of the surface coordination number over the bulk coordination number. D[0] is the size of the nanoparticle for which all the atoms are located on the surface. D[0] = (2/3)(3 − λ)(P[S]/P[L])d. λ is a parameter representing the dimension of the nanostructure: λ = 0 for nanoparticles, λ = 1 for nanowires and λ = 2 for nanofilms. P[S] is the packing fraction of the surface crystalline plane. P[L] is the lattice packing fraction. d is the atomic diameter. The bond-order-length-strength (BOLS) model from Sun [6] is based on the atomic coordination number imperfection due to the termination of the lattice periodicity. The BOLS formalism expresses the size-dependent vacancy formation energy as: where i is counted up to 3 from the outermost atomic layer to the center of the solid because no coordination imperfection is expected for i > 3. γ[i] = τc[i]d/D is the portion of the atoms in the i th layer from the surface compared to the total number of atoms in the entire solid. τ is a parameter representing the dimension of the nanostructure (τ = 1 for a film, τ = 2 for a wire and τ = 3 for a particle). d is the bond length or the atomic diameter (without coordination number imperfection). Z[iB] is the ratio of the coordination number of the ith layer (Z[i]) over the bulk coordination number (Z[B]). m is a parameter representing the nature of the bond. The liquid-drop model from Nanda et al. [17,27] expresses the size-dependent vacancy formation energy as: where E[s] = πd^2γ is the cohesive energy of an atom at the surface and γ is the surface energy of the material. d is the atomic diameter. Figure Figure55 illustrates the comparison between the mentioned models and all the models indicate a decreasing behavior of the vacancy formation energy of free-standing nanostructures with the size. Let us note that the Guisbiers and Nanda’s models give in this particular case the same results. The consequence of this decreasing behavior with size means an increasing of the vacancies concentration in nanostructures compared to bulk. Indeed, by considering the size effect on the vacancy formation energy in the vacancies concentration of bulk materials c[v,∞] = C exp (−E[v,∞]/kT) (C being a constant considered size independent), we get Eq. 9which is similar to the one obtained earlier by Qi et al. [25], validating then the reasoning based on Eq. 3. Vacancy formation energy versus the size of a spherical gold (Au) nanoparticle. The bulk vacancy formation energy is 0.95 eV [21]. The models from Guisbiers, Qi, Yang, Shandiz, Sun and Nanda are compared together. The parameters used with the Guisbiers’ ... where c[v] is the size/shape-dependent vacancies concentration and c[v,∞] is the bulk vacancies concentration. k is the Boltzmann constant and T is the temperature. In summary, it is shown that there exists a universal relation between many materials properties, the inverse of the particle size and the spin of the particles involved in the considered material property. Whatever the nature of the material, Figs. Figs.11 and and22 are general maps summarizing the size and shape effects on the mentioned materials properties from the bulk to the nanoscale. The prediction from the universal relation (Eq. 3) has been validated by comparison with available experimental results and existing theoretical models. Describing different phenomena with only one equation is the “Holy Grail” for all physicists and maybe a more sophisticated equation may exist by considering other material properties. Nevertheless, the great advantage of the present equation is that it is free of any adjustable parameters! The author thanks the Belgian Federal Science Policy Office (BELSPO) for financial support through the “Mandats de retour” action. Dr. Steve Arscott is greatly acknowledged for proof reading this Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. Articles from Nanoscale Research Letters are provided here courtesy of Springer • PubMed PubMed citations for these articles Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2894066/?tool=pubmed","timestamp":"2014-04-16T07:30:31Z","content_type":null,"content_length":"80291","record_id":"<urn:uuid:d2c9e2d1-1a6b-4aa3-bac8-92625160c7b5>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: CONVERGENCE ANALYSIS OF A QUADRATURE FINITE Abstract. A quadrature finite element Galerkin scheme for a Dirichlet boundary value problem for the biharmonic equation is analyzed for a solution existence, uniqueness, and convergence. Con- forming finite element space of Bogner-Fox-Schmit rectangles and an integration rule based on the two-point Gaussian quadrature are used to formulate the discrete problem. An H2-norm error esti- mate is obtained for the solution of the original finite element problem consistent with the solution regularity. A standard quadrature error analysis gives a suboptimal order error estimate. Optimal order error estimates under sufficient regularity assumptions are obtained using an alternative ap- proach based on the equivalence of the quadrature problem with an orthogonal spline collocation Key words: biharmonic problem, finite elements, Galerkin method, Gaussian quadrature, orthogonal spline collocation AMS subject classification. 65N12, 65N15, 65N30, 65N35 1. Introduction. In this article, we analyze existence, uniqueness, and conver- gence of a quadrature finite element Galerkin approximation of a Dirichlet boundary value problem (BVP) with the biharmonic equation on a rectangular polygonal region. Problems with the biharmonic equation arise in many areas of applied mathematics, for example, plate problems in plane elasticity and problems for the stream function
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/384/1305761.html","timestamp":"2014-04-20T18:58:00Z","content_type":null,"content_length":"8617","record_id":"<urn:uuid:4732fb13-7e5f-4345-850f-2af0e202fbcf>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
No data available. Please log in to see this content. You have no subscription access to this content. No metrics data to plot. The attempt to load metrics for this article has failed. The attempt to plot a graph for these metrics has failed. Rare-event sampling: Occupation-based performance measures for parallel tempering and infinite swapping Monte Carlo methods FIG. 1. Occupation traces for three-temperature PT (top row) and PINS (bottom row) simulations for Ar[13]. T[1] = 30 K and T[3] = 40 K for all simulations, while T[2] = 35 K for simulations in left column and 39 K for those in right column. FIG. 2. Plots of the average number of moves required for a round-trip transit of the computational ensemble, ⟨n[rt]⟩ as function of T[2], for extended versions of the three-temperature Ar[13] simulations of the type in Fig. 1 . FIG. 3. Approach of S[f](n[move]) (c.f., Eq. (2.3) ) to its uniform limiting value for the three-temperature PINS and PT simulations of Ar[13] used in Table I and described in the text. T[1] = 30 K, T[3] = 40 K, T[2] = 35 K or 39 K. FIG. 4. Plot of ln(S[max] − S[f]) for the three-temperature Ar[13] results of Fig. 3 . FIG. 5. Plots of C(s) (c.f., Eq. (2.4) ) for the three-temperature Ar[13] simulations of Fig. 3 . FIG. 6. A portion of occupation traces for 66-temperature Ar[38] PINS (black) and PT (red) simulations discussed in the text. The vertical axis denotes the temperature index (1–66) as a function of the number of moves in the simulation. FIG. 7. A histogram of the PINS occupation trace shown in Fig. 6 showing the number of times the various temperature indices are visited, M(n), as a function of n. FIG. 8. A plot of S[f](n[move]) obtained for Ar[13] using PINS and PT methods for the various 24-temperature ensembles described in the text. The apparent “break” in the PINS-24a results occurs at an S[f] value of roughly 2.5, a value that corresponds to an active number of temperatures (N[a]) of ∼12. FIG. 9. Plot of ln(S[max] − S[f]) for results in Fig. 8 . FIG. 10. C(s) for the PINS Ar[13] results obtained using the three, 24-temperature ensembles described in the text. PT results for ensemble-c are shown for comparison (dashed line near top of plot). FIG. 11. Brief portions of occupation traces for PINS (top panel) and PT (bottom panel) simulations for Ar[13] obtained using 24-temperature ensemble-c (see text for details). FIG. 12. Plots of the occupation entropy, S[f](nmove), for 66-temperature PINS and PT simulations of the Ar[38] system. The two PINS results correspond to simulations that are initiated in the global minimum geometry (black curve) or lowest-lying icosahedral minimum (red curve). The limiting S[f] value of ln(66) is shown for reference. FIG. 13. Plot of ln(S[max] − S[f]) for results in Fig. 12 . FIG. 14. A plot of ⟨Q[4](T)⟩ for the Ar[38] cluster obtained by the PINS simulations described in the text. Results in black (red) are obtained using a simulation initialized using the fcc global minimum (icosahedral local minimum) structure. For clarity and as an aid in comparing the two simulations ⟨Q[4](T)⟩ only values for every other (every fourth) temperature are shown for the fcc (icosahedral) FIG. 15. Q[4] values for an extended portion of the global minimum initiated occupation trace of Fig. 6 . FIG. 16. A history of the number of configurations (out of 66) in the two PINS Ar[38] simulations described in the text for which Q[4] ≤ 0.09. FIG. 17. Block averages of Q[4] for Ar[38] for T[1] = 10 K, T[2] = 14.9350 K for the PINS simulations described in the text. FIG. 18. Shown are the Q[4] values for T = 22.727 K for a short, post warm up portion of the icosahedral minimum initiated Ar[38] PINS simulation. Compare with Fig. 11 of Ref. ^ 29 . Table I. Observed fractions of total moves (f[n]) spent at each of the ensemble temperatures by two three-temperature Ar[13] parallel tempering (PT) and partial swapping (PINS) simulations. In both ensembles, T[1] = 30 K and T[3] = 40 K, while in one ensemble T[2] = 35 K and in the other T[2] = 39 K. Table II. The temperatures used in the PINS Ar[38] computational ensemble. Table III. Average of S[ρ] values, ⟨S[ρ]⟩ , for Ar[38] PINS simulation obtained using the computational ensemble shown in Table II . For reference, the maximum values for S[ ρ] correspond to ln(3!) = 1.792 and ln(6!) = 6.579. Article metrics loading...
{"url":"http://scitation.aip.org/content/aip/journal/jcp/137/20/10.1063/1.4765060","timestamp":"2014-04-16T05:42:12Z","content_type":null,"content_length":"93856","record_id":"<urn:uuid:32c7fb00-8446-4214-807d-d35c997fff7f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
solving 2d diffusion equation with cuda up vote 1 down vote favorite I am learning cuda with trying to solve some standard problems. As a example I am solving diffusion eqaution in two dimension with following code. But my results are different than the standard results and I am not able to figure that out. //kernel definition __global__ void diffusionSolver(double* A, double * old,int n_x,int n_y) int i = blockIdx.x * blockDim.x + threadIdx.x; int j = blockIdx.y * blockDim.y + threadIdx.y; A[i+n_y*j] = A[i+n_y*j] + (old[i-1+n_y*j]+old[i+1+n_y*j]+ old[i+(j-1)*n_y]+old[i+(j+1)*n_y] -4*old[i+n_y*j])/40; int main() int i,j ,M; M = n_y ; phi = (double *) malloc( n_x*n_y* sizeof(double)); phi_old = (double *) malloc( n_x*n_y* sizeof(double)); dummy = (double *) malloc( n_x*n_y* sizeof(double)); int iterationMax =10; //phase initialization for(j=0;j<n_y ;j++) phi[i+M*j] = -1; phi[i+M*j] = 1; phi_old[i+M*j] = phi[i+M*j]; double *dev_phi; cudaMalloc((void **) &dev_phi, n_x*n_y*sizeof(double)); dim3 threadsPerBlock(100,10); dim3 numBlocks(n_x*n_y / threadsPerBlock.x, n_x*n_y / threadsPerBlock.y); //start iterating for(int z=0; z<iterationMax; z++) //copy array on host to device cudaMemcpy(dev_phi, phi, n_x*n_y*sizeof(double), //call kernel diffusionSolver<<<numBlocks, threadsPerBlock>>>(dev_phi, phi_old,n_x,n_y); //get updated array back on host cudaMemcpy(phi, dev_phi,n_x*n_y*sizeof(double), cudaMemcpyDeviceToHost); //old values will be assigned new values for(j=0;j<n_y ;j++) phi_old[i+n_y*j] = phi[i+n_y*j]; return 0; Can someone tell me if there is anything wrong in this process? Any hep will be greatly appreciated. cuda nvidia differential-equations 1 The "old values will be assigned new values" section is pointless. You can either perform a device-to-device memcpy and eliminate the transfers and host side loop, or better still, just swap the pointer values. – talonmies Aug 16 '12 at 20:24 @talonmies, thanks for suggestion. I will swap pointer values(that seems easy) – chatur Aug 16 '12 at 20:47 You haven't said anywhere in that code what the values of n_x and n_y are, and you are performing any error checking at all in your code. Every CUDA API call returns a status. You should be checking them all to make sure the kernel is actually running and code executing correctly. – talonmies Aug 17 '12 at 5:21 add comment 2 Answers active oldest votes One big mistake you have is that phi_old is passed to the kernel and used by the kernel but this is a host pointer. up vote 2 down vote Malloc a dev_phi_old using cudaMalloc. Set it to default value and copy it to the GPU first time before entering the z loop. In addition to mistake you pointed out I had to put extra condition in kernel call (if(i<n_x && j <n_y) do something ...). Thanks a lot for your answer. Now I am wondering why it is necessary to condition if(i<n_x && j <n_y)..!! – chatur Aug 17 '12 at 8:06 1 The reason is that you need to filter out the threads which will lead to indexing out of bounds. Sometimes you are not able to launch exactly the amount of threads in X and Y for a given dimension n_x and n_y. – brano Aug 17 '12 at 9:01 add comment A[i+n_y*j] = A[i+n_y*j] + (old[i-1+n_y*j]+old[i+1+n_y*j]+old[i+(j-1)*n_y]+old[i+(j+1)*n_y] -4*old[i+n_y*j])/40; You are dividing by 40(integer) which can result in wrong diffusing rate. Actually can result in none-diffusing. But A is an array of doubles. Divide the diffuse rate by 40.0 and see if it works. If this is from Jos-Stam's solver, it should be 4.0 not 40 Theres also another thing: up vote 1 down vote -4*old[i+n_y*j])/40; Here you are multiplying with 4(integer). This can cause a integral-casting too! decreases some errors. Have a nice day. thanks for reply tugrul, I tried with 40.0 but results are still different(percentage error ~40). I am using 40.0 instead of 4.0 to ensure it is numerically stable. Can you please take a look at procedure of copying array to host(and vice versa) and see if is the correct way specially this is supposed to be done multiple times. – chatur Aug 16 '12 at 20:20 As you mentioned my array values are not changing at all..!! I have tried your solution but it not working either. can you suggest any method so that I can ensure that kernel is actually called? – chatur Aug 16 '12 at 20:46 ok, im working on it – huseyin tugrul buyukisik Aug 16 '12 at 20:53 is size of double same in gpu? Maybe it is 128 bit in gpu and 64 bit in cpu – huseyin tugrul buyukisik Aug 16 '12 at 20:59 can you tell what are the differencies between what you get and what you want? – huseyin tugrul buyukisik Aug 16 '12 at 21:04 show 3 more comments Not the answer you're looking for? Browse other questions tagged cuda nvidia differential-equations or ask your own question.
{"url":"http://stackoverflow.com/questions/11994679/solving-2d-diffusion-equation-with-cuda/12001356","timestamp":"2014-04-19T20:07:17Z","content_type":null,"content_length":"82933","record_id":"<urn:uuid:82d65553-9bac-4502-89fb-a50a7aa9c16b>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
proof by induction on rationals January 22nd 2009, 09:04 AM proof by induction on rationals ok.. I have a function f such that f(x + y) = f(x) + f(y) I have proves f(nx) = nf(x) for all x and every natural number n I now need to show that this is true for f(rx) where f is a rational number n/m I've tried fixing m and doing induction on n but I cant do it by fixing n and doing induction on m, is this the right way of going about it? many thanks January 22nd 2009, 09:18 AM ok.. I have a function f such that f(x + y) = f(x) + f(y) I have proves f(nx) = nf(x) for all x and every natural number n I now need to show that this is true for f(rx) where f is a rational number n/m I've tried fixing m and doing induction on n but I cant do it by fixing n and doing induction on m, is this the right way of going about it? many thanks When you have $nx$ you can write $\underbrace{x+...+x}_{n \text{ times} }$. Therefore, $f(nx) = f(x+...+x) = f(x)+...+f(x) = nf(x)$. January 22nd 2009, 10:19 AM Yes, I understand that. I am trying to prove this result for rational numbers.. many thanks January 22nd 2009, 11:00 AM Write, $1 = \tfrac{1}{n}+...+\tfrac{1}{n}$. Therefore, $f(1) = f(\tfrac{1}{n}+...+\tfrac{1}{n}) = nf(\tfrac{1}{n}) \implies f(\tfrac{1}{n}) = \tfrac{1}{n}f(1)$. Now you can prove it for positive rational numbers.
{"url":"http://mathhelpforum.com/calculus/69410-proof-induction-rationals-print.html","timestamp":"2014-04-21T05:13:11Z","content_type":null,"content_length":"7046","record_id":"<urn:uuid:58f3613f-b1bd-42e0-990d-5ad95461f6e5>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Kinematics of a particle moving November 25th 2009, 06:25 PM Kinematics of a particle moving Could you please help me on the part (c) and (d) of the problem given A particle of mass 0.5kg is at rest on a horizontal table. It receives a blow of impulse 2.5Ns. (a) Calculate the speed with which P is moving immediately after the blow The height of the table is 0.9m and the floor is horizontal. In an initial model of the situation the table is assumed to be smooth. (b) Calculate the horizontal distance from the egde of the table to the point where P hits the ground. In a refinement of the model the table is assumed rough. The coefficient of friction between the table and P is 0.2. (c) Calculate the deacceleration of P. Given that P travels 0.4m to the edge of the table, (d) calculate the time which elapses between P receiving the blow to P hitting the floor. November 26th 2009, 06:59 AM Could you please help me on the part (c) and (d) of the problem given A particle of mass 0.5kg is at rest on a horizontal table. It receives a blow of impulse 2.5Ns. (a) Calculate the speed with which P is moving immediately after the blow The height of the table is 0.9m and the floor is horizontal. In an initial model of the situation the table is assumed to be smooth. (b) Calculate the horizontal distance from the egde of the table to the point where P hits the ground. In a refinement of the model the table is assumed rough. The coefficient of friction between the table and P is 0.2. (c) Calculate the deacceleration of P. Given that P travels 0.4m to the edge of the table, (d) calculate the time which elapses between P receiving the blow to P hitting the floor. (c) $F_{net} = ma = f_k = \mu mg$ magnitude of the acceleration is $a = \mu g$ (d) for the time the P slides on the ruff table ... $\Delta x = v_0 t - \frac{1}{2}at^2$ , solve for $t$ for the time it takes for P to fall to the floor ... $\Delta y = -\frac{1}{2}gt^2$ , solve for $t$ sum the two times. November 27th 2009, 08:06 AM Thanks a lot Skeeter.
{"url":"http://mathhelpforum.com/math-topics/116771-kinematics-particle-moving-print.html","timestamp":"2014-04-21T10:48:54Z","content_type":null,"content_length":"7961","record_id":"<urn:uuid:a577fcb1-ddef-442c-a466-007791ecd949>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
Tanzania: Have There Been Changes in Mathematics? Mathematics is one of the subjects taught before and after independence. It has been taught in primary schools and progressively in other levels of education. At independence the aim of teaching mathematics in primary school was to provide numerical skills such as counting, the four operations of multiplication, addition, subtraction and division, money, measurements and time. In secondary and higher levels they were given algebra, geometry and other mathematical skills needed for vocation. After independence Tanganyika continued with the same mathematics curricula. Popular books used in primary schools were those written by Carey Francis called 'Hesabu za Kikwetu'. The books were easily identified by their covers which had a picture of a giraffe.The giraffe was the identity symbol (emblem) for Tanganyika. Mathematics teaching from standard one up to standard four was done in The formal education system in Tanganyika was 4 years of primary, 4 years of middle school 2 years of lower secondary (territorial college) and 2 years of senior secondary school. Mathematics books for standard five to eight were written in English and were used in East Africa. For example, there was a book called Highway Mathematics Book 6 authored by E. Carey Francis, E.A.William and H. P. Bradley published by Longman (Arusha, Kampala and Nairobi). Its cover had a picture of lion, giraffe, peacock and a boat to represent the East African countries. From 1964, following the first 5-year economic plan, Tanzania's education system was transformed. The middle schools were renamed 'upper primary School'. Some primary schools started offering standard five education.. In 1965 both standard seven and standard eight sat for form one entry examinations.Standard Eight was phased out after 1966. The 1966 form one pupils were a mixture of pupils who did mathematics for seven years and those who did it for eight years. No research has been conducted to date to find if there was significant difference between the performance or the learning of mathematics for the two groups. They all did the same course and examinations there after. The mathematics changes in other countries instigated by Russian's success in putting sputnik 1 in space way back in 1957, influenced the teaching of mathematics in Africa. A project called Entebbe Mathematics conducted some experiments in Tanzania initiated by the United States of America and supported by the African Education Programme. Some primary schools started the experiment in 1964 (three years after independence). Another experiment was started by the British through a project called School Mathematics Project of East Africa (SMPEA), renamed later as School Mathematics of East Africa ( SMEA). These projects for secondary schools were started in 1966 when the mathematical Association of Tanzania was formed after 60 mathematics teachers from all over Tanzania met at the University College. Dar es Salaam to deliberate on the new mathematics programmes. At the end of the meeting 11 schools were elected to start the Entebbe Mathematics and 6 schools started the SMEA. The two programmes were called 'Moderu Mathematics'. Some schools decided to continue with the old programme which was then called 'Traditional Mathematics'. Some of the teachers' colleges inclined towards SMEA and others towards Entebbe. Traditional mathematics in secondary schools used the University of Cambridge Syndicate examination syllabuses which were examined by alternatives A and B mathematics papers. Some schools used books by H.E.Parr who had a series called 'School Mathematics' while other schools used books by Clement V. Durrell who had separate series for Algebra, Arithmetic and Geometry. The examination paper was called 'Elementary Mathematics'. A more demanding paper called Additional Mathematics was offered for the more able students who opted for the course at Form 3. Those who did Advanced Mathematics in Form Five and Six did the Advanced Mathematics paper eather as a single subject or as two separate subjects of Pure Mathematics (PM) and Applied Mathematics (AM). Pupils in Forms 5 and 6 who did Physics, Biology and Chemistry (PCB) sat for the subsidiary mathematics paper. The Swahili version of the original Entebbe Primary Mathematics Books called 'VitabuvyaMajaribio'were used to try the programmein primary schools. These books were later revised to form the primary school series called 'Hesabu za Tanzania'. Teachers Training colleges used a book called 'Basic concepts in Mathematics' which was inclined towards the Entebbe Mathematics. The strong features of both programmes together with their complementary nature, made it difficult to select one as being the most suitable programme for Tanzania. The more intuitive nature of the SMEA contrasted with the stronger emphasis on the step-by -step deductive process in the Entebbe. There was Cross-fertilization between the two programmes, but this was limited to the central mathematics Institutes in Dar es Salaam together with lectures and meetings in various parts of Tanzania. Such meetings were mainly organized within the framework of the Mathematical Association of When the experiment was accomplished and evaluated the two programmes (Entebbe and SMEA) were fused into one and termed 'Modern Mathematics'. Syllabuses were therefore written for ordinary level Modern Mathematics, Advanced level Modern Mathematics, Additional Mathematics (modern) and subsidiary mathematics (modern). These courses were examined by the East African Examinations and were later taken up by the National Examinations Council of Tanzania (NECTA) upon its inception in 1971. The first O-level modern mathematics was conducted in November 1969 and the first A-level modern mathematics in 1971. The Entebbe mathematics books were revised and produced as secondary mathematics books to be used for the modern mathematicsprogramme. The advanced Mathematics Entebbe books were adapted and produced as Advanced Mathematics books in 1974. Similarly the Additional Mathematics Entebbe books were adapted and produced as Additional Mathematics books. The Modern Mathematics and traditional mathematics syllabuses were used to write a new syllabus for ordinary level called Basic Mathematics Forms one to Four. The Advanced Modern and Traditional Mathematics syllabuses gave rise to a syllabus called Advanced mathematics Forms Five and Six. The modern and traditional subsidiary maths gave rise to Basic Applied Mathematics Forms Five and Six. The Modern and Traditional Additional Mathematics became Additional Mathematics Form Three and Four. The new syllabuses became effective since 1974. The two programmes (modern and traditional) were phased out gradually and by 1977 all the pupils in form one to four in Mainland Tanzania were doing Basic Mathematics. An evaluation of the teaching of mathematics in the primary schools was conducted in 1979. Results of the evaluation recommended changes in both the syllabus and textbooks. Among the notable changes was the exclusion of the set language which was regarded by parents and the general public as responsible for the deterioration of mathematics performance in primary schools. The books were revised and given the new series title 'HisabatishuleyaMsingi'. The teaching of Basic Mathematics forms one to four was evaluated in 1981. The results showed a great need of revising the syllabus and splitting it according to forms. The evaluation also recommended that the books be written afresh. To date the syllabus has been revised and books for forms one to fourhave been produced. The Advanced Mathematics and Basic Applied mathematics courses were evaluated in 1984. The evaluation revealed that the syllabus was too heavily loaded and could not be covered effectively in two years. The evaluation recommended a revision of the syllabus by reducing content and specifying depth of coverage. It also called for instructional materials to be written coupled with inservice courses to improve teaching and learning. Preliminary work of revising the Advanced Mathematics syllabus has now completed. Mathematics teachers for secondary schools have mainly been trained at the University of Dar es Salaam and teachers' training colleges which offer diploma courses in Education. At the university, undergraduate preservice mathematics teachers take Mathematics and Education. Within the Education course, they do mathematics teaching methods which is meant to train them on how to teach mathematics effectively. Teachers' training tutors for mathematics are also groomed at the University of Dar es Salaam. Mathematics teaching in Tanzania has been facing the following problems: There is acute shortage of teachers and teaching materials at all levels. The syllabuses are very long and a number of concepts are rather difficult for the levels specified. Many teachers are inadequately trained to teach mathematics. This is a result of allowing very little time in methods of teaching and teaching practice during teachers training. The Ministry of Education and Vocational Training has been concerned with the deterioration of mathematics performance in schools. In trying to solve the problem of shortage of mathematics teachers, the Ministry converted Mkwawa Secondary school into a mathematics and science teachers' college. At Mkwawa Teachers College aspiring mathematics and science teachers studied their A- level subjects in the first two years coupled with some courses on Education. In the third year they studied education which included methods of teaching and teaching practice. The successful candidates were awarded diploma in education. The other diploma students who were not in special colleges had to stay in college for two years after completing their A level studies. It meant that students at Mkwawastarted to train as teachersright at form five. This was later abandoned and the institution has been converted to a constituent college of the UDSM known as Mkwawa University College. The Ministry also provided financial assistance to Educational institutions to conduct in- service seminars in collaboration with the Institute of Curriculum Development. The purpose of the seminars was to orientate teacher on changes in the syllabus. The Ministry also offered funds to the conduct in-service courses for A-level mathematics and science teachers. Other organisations that have conducted seminars for mathematics teachers are the Professional Teachers Association of Tanzania or 'Chama Cha Kitaalaam cha Walimu Tanzania (CHAKIWATA)'. Chama Cha Walimu Tanzania (CWT).The Mathematical Association of Tanzania (MAT) and the International Village of Science and Technology (IVST). The Mathematical Association of Tanzania (MAT/ CHAHITA), in particular, has been supplementing efforts being taken by educational institutions in raising competence among mathematics teachers. It has been conducting annual seminars for its members as well as interested teachers. The lecturers offered have mainly been on topics which teachers find difficult to teach. Moreover, seminars of similar nature have been conducted in active MAT zones and they have proved to be very effective. In 1990, the Harold Macmillan Trust, (HMT) of London sponsored a research of Problems of Teaching and Learning Mathematics in Tanzania. This research gave rise to the project entitled MAT 3- year Integrated Training and Publications Programme. The objectives of the project were to publish and supply supplementary and teaching aids for primary, secondary and teachers classrooms and expand their programme of in-serves training. The project also intended to assist in the production of the Tanzanian Mathematical Bulletin which published mathematical articles and MAT seminar proceedings. The project was funded jointly by MAT, HMT and the European Commission (E.C). It must be emphasized that education and particularly mathematical education, is fundamental to our future economy. Without mathematics there can be no modern technology, no manufacture, no commerce, no modern economy, in our daily lives. Practically everything we use, everything we depend upon needed some people working mathematically in its origination, design and development. Let us consolidate our efforts to raise its standard. Let us motivate both the pupil and the teacher. Let us give them a good teaching environment. Archimedes once said "Give me a place to stand, and I will move Earth".Let us give them what they require and they will do wonders. Another action initiated as intervention to the deterioration of mathematics was the Primary Mathematics Project (PMUP) based at Korogwe Teachers' College. The project developed simple and friendly teaching methods and tried them in Korogwe District Schools. The project included development of teaching and learning materials. Some publishers have published mathematics friendly books which motivate the pupil. For example, Mture Education Publishers, has come up with 'Hisabati kwa Vitendo'(Practical Mathematics) which motivate pupils to like mathematics. Recently, the introduction of Pi day has provided a platform for pupils and teachers to discuss problem of mathematics teaching learning and do activities to alleviate them. Activities include singing mathematics songs, demonstration of mathematics teaching aids,telling mathematics stories and playing mathematical games. The 51 years of independence have surely done something in mathematics.. How many of these changes are you aware of? Have you or your children kept them abreast?
{"url":"http://allafrica.com/stories/201212130094.html","timestamp":"2014-04-21T10:27:40Z","content_type":null,"content_length":"58005","record_id":"<urn:uuid:d45f3be3-535d-4e4e-8515-9a8bed5f49f0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Floating in Platonic heaven In the comments section of my last post, Jack in Danville writes: I may have misunderstood [an offhand comment about the "irrelevance" of the Continuum Hypothesis] … Intuitively I’ve thought the Continuum Hypothesis describes an aspect of the real world. I know we’ve touched on similar topics before, but something tells me many of you are hungerin’ for a metamathematical foodfight, and Jack’s perplexity seemed as good a pretext as any for starting a new thread. So, Jack: this is a Deep Question, but let me try to summarize my view in a few paragraphs. It’s easy to imagine a “physical process” whose outcome could depend on whether Goldbach’s Conjecture is true or false. (For example, a computer program that tests even numbers successively and halts if it finds one that’s not a sum of two primes.) Likewise for P versus NP, the Riemann Hypothesis, and even considerably more abstract questions. But can you imagine a “physical process” whose outcome could depend on whether there’s a set larger than the set of integers but smaller than the set of real numbers? If so, what would it look like? I submit that the key distinction is between 1. questions that are ultimately about Turing machines and finite sets of integers (even if they’re not phrased that way), and 2. questions that aren’t. We need to assume that we have a “direct intuition” about integers and finite processes, which precedes formal reasoning — since without such an intuition, we couldn’t even do formal reasoning in the first place. By contrast, for me the great lesson of Gödel and Cohen’s independence results is that we don’t have a similar intuition about transfinite sets, even if we sometimes fool ourselves into thinking we do. Sure, we might say we’re talking about arbitrary subsets of real numbers, but on closer inspection, it turns out we’re just talking about consequences of the ZFC axioms, and those axioms will happily admit models with intermediate cardinalities and other models without them, the same way the axioms of group theory admit both abelian and non-abelian groups. (Incidentally, Gödel’s models of ZFC+CH and Cohen’s models of ZFC+not(CH) both involve only countably many elements, which makes the notion that they’re telling us about some external reality even harder to Of course, everything I’ve said is consistent with the possibility that there’s a “truth” about CH floating in Platonic heaven, or even that a plausible axiom system other than ZFC could prove or disprove CH (which was Gödel’s hope). But the “truth” of CH is not going to have consequences for human beings or the physical universe independent of its provability, in the same way that the truth of P=NP could conceivably have consequences for us even if we weren’t able to prove or disprove it. For mathematicians, this distinction between “CH-like questions” and “Goldbach/Riemann/Pvs.NP-like questions” is a cringingly obvious one, probably even too obvious to point out. But I’ve seen so many people argue about Platonism versus formalism as if this distinction didn’t exist — as if one can’t be a Platonist about integers but a formalist about transfinite sets — that I think it’s worth hammering home. To summarize, Kronecker had it backwards. Man and Woman deal with the integers; all else is the province of God. Luca Says: Comment #1 May 15th, 2008 at 12:44 am Moodworves Says: Comment #2 May 15th, 2008 at 1:54 am I’m having trouble imagining a process who’s outcome is dependent on the P vs. NP question. Certainly if P=NP, and the proof/algorithm is easy enough to find, that could influence a process, but is there a process that gives the answer to the P vs. NP question? In other words, is the P vs. NP question reducible a halting problem? komponisto Says: Comment #3 May 15th, 2008 at 2:05 am Let Man and Woman deal with the integers; all else is the province of God. This sounds dangerously close to a quote of Errett Bishop: “Classical mathematics concerns itself with operations that can be carried out by God.. Mathematics belongs to man, not to God… When a man proves a positive integer to exist, he should show how to find it. If God has mathematics of his own that needs to be done, let him do it himself”. Such a view is of course Wrong with a capital W, for several reasons: 1. God doesn’t exist, so if we don’t do it, no one will. 2. It implicitly assumes Platonism: that when mathematicians talk about transfinite sets, they aren’t just talking about consequences of the ZFC axioms in the first place. 3. We don’t know enough about physics to know whether transfinite sets “correspond to reality” even in the naive sense that everyone always assumes in these discussions. 4. Assuming we did have this kind of knowledge, we would need to express it in the form of a formal physical theory, which would then necessarily make use of these mathematical concepts. Scott Says: Comment #4 May 15th, 2008 at 2:11 am Moodworves, P=NP is reducible to the halting problem with an oracle for the halting problem. To put it differently, P=NP means that there exists a Turing machine M and integers c,k such that for all SAT instances φ of size n, M decides φ after at most cn^k steps. Ignoring the details, this is tantamount to saying that there exists an integer x such that for integers y, some computable predicate A(x,y) is true. Which is still a statement about integers (just like Goldbach’s Conjecture), the only difference being that now there are two quantifiers instead of one. Scott Says: Comment #5 May 15th, 2008 at 2:31 am komponisto, two quick responses: 1. My view is very different from the view of Bishop you quoted. Unlike him, I have no problem at all accepting nonconstructive existence proofs. (Of course constructive proofs often yield deeper insights, better algorithms, etc., but those are added bonuses.) More generally, I’d never, ever advocate throwing away interesting math to uphold a philosophical principle. I’m not talking about discarding anything we can prove; I’m talking about how to deal with statements we know we can’t prove. 2. If you think a saying like “all else is the province of God” presupposes God’s existence, you’re reading way too much into it! Think of the legal concept “act of God” (meaning “not an act of anyone you can sue”). Moodworves Says: Comment #6 May 15th, 2008 at 3:18 am Ah, thanks Scott! Correct me if I’m wrong, but it seems that the second quantifier makes the P vs. NP question fundamentally different from Goldbach’s Conjecture, in that it is possible that it doesn’t affect the behavior of any computable process. Since our universe (as far as we understand it) doesn’t have any halting oracles, it could some day be relegated to Plato’s Math Heaven (where it can sit around with the Continuum Hypothesis). …That’s a strange thought considering the importance a positive or negative result would have. Liron Says: Comment #7 May 15th, 2008 at 3:28 am Thanks Scott, I was really wondering about what’s real and what isn’t, but now it’s obvious in retrospect. csrster Says: Comment #8 May 15th, 2008 at 3:33 am Scott, shh! You were on your way to a Templeton Prize nomination there for a couple of hours. asdf Says: Comment #9 May 15th, 2008 at 3:45 am Scott, I saw your paper about whether P=NP might be independent of ZFC. ZFC seems pretty artificial and bogus to me (powerset function, ok; iterating it through the entire transfinite list of ordinals: wtf?). And maybe that independence is unlikely. But I wonder if anyone has considered whether P=NP might be independent of first order Peano arithmetic. I’m thinking of the graph minor theorem (a generalization of Kruskal’s tree theorem) as an example of that sort of thing. It doesn’t seem so outrageous (at least to a neophyte like me) that a theorem about an infinite set of graphs is independent of PA, since the obvious well-orderings on these graphs have order type larger than epsilon-0 which is the largest ordinal that “fits” in PA, so some inductions over those graphs may not fit in PA. And the graph minor theorem proves nonconstructively(!!) that certain problems are solvable in polynomial time. Anyway in an earlier comment thread someone suggested it would be amusing if P=NP but the fastest algorithm for SAT was O(n^10000) or something like that. If P vs NP is independent of PA, it might be natural for the situation to be a lot worse, e.g. P=NP (provable in 2nd order arithmetic) but the fastest algorithm for SAT is O(n^B(B(k+7))) where k is the number of states of the smallest TM that solves an np-complete problem (in exponential time) and B is the (uncomputably fast-growing) busy beaver function. Anyway the above is just raving bogosity, no real mathematical thought behind it, but if it turns out to be true, there’s probably someone out there in Platonic heaven laughing at us…. asdf Says: Comment #10 May 15th, 2008 at 3:51 am Also I wonder if you know of Woodin’s argument that CH is false, based on infinitary logic. And Paul Cohen was not a platonist but nonetheless once said he thought CH was obviously false, because the powerset operation was so much more powerful than diagonalization. I’ve been trying to make up a joke but all I have is the beginning, so sorry if that gets you ready to hear something funny, but the funny part doesn’t come. It begins: Kurt Godel died in 1978 and went immediately to heaven, where like all new arrivals, he was given wings and a harp and asked if he had any questions. The first thing he wanted to know was whether the continuum hypothesis is true. … I’m not sure where to go from there. I’ve thought of a couple directions. Maybe someone can think of a suitable conclusion. I realized after a while that it’s sort of similar to a very famous joke about Wolfgang Pauli and the fine structure constant, that you all probably know. asdf Says: Comment #11 May 15th, 2008 at 3:57 am Darn, I wish there was a way to edit these comments. Anyway I had forgotten to ask in the first one, whether anyone had thought of what natural well-orderings there might be on P-time Turing machines and that sort of thing, and what the corresponding ordinals would be. Is that a completely bogus thing to want to know? Pascal Koiran Says: Comment #12 May 15th, 2008 at 7:35 am Maybe one reason why no “natural” example has been found yet is that one would have to reason outside of ZFC to obtain such an example. Consider for instance diophantine equations. Since their solvability is Turing-undecidable (Matiyasevich) there must exist specific equations whose solvability is undecidable in ZFC (assuming of course that ZFC is consistent, or you can prove anything and its negation). But the moment you exhibit such an equation (call it E) you know that E is, in fact, unsolvable! Indeed, simple proofs of solvability (namely, an integral solution) exist for all solvable equations. We therefore have a proof (in ZFC) of non-solvability for E. So the original proof that E is ZFC-undecidable must have been obtained outside of ZFC. QED Possibly, this argument breaks down if one looks for a “natural example” which is higher up in the arithmetic hierarchy. Job Says: Comment #13 May 15th, 2008 at 8:10 am If P=NP, then i find it unintuitive that NPC problems would require n^c where c is very large. Why would it be n^10000 for example? All the relevant data can be scanned in n^2, so it would either be of the form 2^f(n) due to some requirement that a portion of the 2^n subsets be analyzed, or of the form n^c where c Job Says: Comment #14 May 15th, 2008 at 8:14 am …where c is small (~3). asdf Says: Comment #15 May 15th, 2008 at 8:47 am Job, if c were small it would be computable and someone would have solved the problem by now ;-). That’s what makes me chuckle over the idea that it’s enormous, uncomputably large in some parameter having to do with the problem class. Pascal, where would something like the twin primes conjecture come in? It might be independent but the independence doesn’t imply either its truth or its falsity. Andy Says: Comment #16 May 15th, 2008 at 9:05 am I agree with Scott’s general message, although I think as we go higher in the arithmetic hierarchy we should be correspondingly less confident about our intuitions. The one point I would emphasize is that in asking how ‘concrete’ or ‘physically meaningful’ a statement S is, we should look, not at its syntactic, ‘apparent’ content (cardinalities of the sets it concerns, level of existential-universal alternation, etc.), but at its ‘essential’ content: the syntactic content of the least-complex statement S’ provably equivalent to S under ZFC (though we have latitude to decide what syntactic resources count as ‘complex’), or at least the least-complex known equivalent. Scott already affirmed this, when he described P vs NP as being reducible to a halting problem relative to a HALT oracle. A naive translation would’ve placed P vs NP one level higher in the arithmetic hierarchy (corresponding to an attempt to ‘guess’ which NP machine accepted a hard language), but Cook’s theorem allows us to remove that quantifier–we know a SAT checker is the best Similarly, research in set theory has shown that CH is provably equivalent to lower-complexity statements (i.e. ones with fewer alternations). Bill G. discussed one example: CH is equivalent to ‘there exists a coloring of the reals with countably many colors, without a monochromatic solution to x + y = w + z in distinct w, x, y, z.’ Can we even rule out that CH might be provably equivalent to a statement involving only natural numbers, albeit high in the arithmetic hierarchy? Andy Says: Comment #17 May 15th, 2008 at 9:33 am Bill’s discussion: Scott Says: Comment #18 May 15th, 2008 at 9:38 am asdf, a few responses: 1. If you go further into my survey, you’ll see that there’s a whole subfield (which Razborov, Krajicek, Pudlak, and others have been involved in) which tries to prove P=NP independent of weak fragments of PA. For example, it’s now known that Resolution and various other proof systems too weak to prove things like the Pigeonhole Principle can’t prove circuit lower bounds. Alas, these techniques are nowhere near being able to handle anything as rich as PA — and the great irony is that if they were, then we’d probably understand enough about proof complexity to be able to prove NP≠coNP (which implies P≠NP)! 2. I don’t understand Woodin’s argument for 2^Aleph0=Aleph[2], though I know he has such an argument. My own favorite argument for not(CH) is Freiling’s: in ZFC+CH, it’s possible to assign a countable subset S(x)⊂[0,1] to every real number x∈[0,1], so that for every (x,y) pair, either y∈S(x) or x∈S(y). That seems incredibly counterintuitive: how could a set that’s countable for every x possibly be “dense” enough that the union of it with its flipped version would cover the entire unit square? Whereas in ZFC+not(CH) this isn’t possible. (See Lecture 2 of the Democritus series.) 3. Like ZFC, your joke about Gödel has multiple consistent extensions. Maybe he ends up starving himself because he thinks God is going to poison him? david Says: Comment #19 May 15th, 2008 at 9:39 am Pascal, how do you draw the conclusion that “there must exist specific equations whose solvability is undecidable in ZFC” from “solving diophantine equations is undecidable”? This only means that there is no general algorithm to solve the question, but any specific equation may be proven not to have any solutions in ZFC, only that the same proof method won’t work for all of them. david Says: Comment #20 May 15th, 2008 at 9:45 am Ok I get it, we can enumarate all statements provable in ZFC and see if one of these says that equation has no solution. Scott Says: Comment #21 May 15th, 2008 at 9:54 am I removed the word “Let” from the sentence “Let Man and Woman deal with the integers…”, just to eliminate a whiff of prescriptive dogmatism. Pascal Koiran Says: Comment #22 May 15th, 2008 at 10:16 am >Ok I get it, we can enumarate all statements provable >in ZFC and see if one of these says that equation > has no solution. That’s correct. david Says: Comment #23 May 15th, 2008 at 11:04 am Anyway it is true that we can exhibit specific equations whose solvability is unprovable in ZFC, once we fix encodings. We can write a program P which, given a diophantine equation E, enumerates all proofs in ZFC and checks if one of them gives a solution of E, or proves there is none, and if so, outputs the answer. We want to find an E such that P(E) never halts (which implies E is unsolvable). But the proof of non-computability of the halting problem tells us just how to do this. Given an integer i, we can write a program Q(i) that does the following: 1. Write a diophantine equation E(i) such that E(i) has a zero if and only if the I-th Turing machine halts on input i. 2. Compute P(E(i)); if it halts, we know i does not halt on input i (this is where we use that ZFC is consistent). Now if we let i = code of program Q, it follows that P(E(Q)) does not halt, so e(Q) is unprovable in ZFC. So, it seems there is an algorithm to explicitly find an expression (namely E(Q)) such that E is undecidable in ZFC, and hence unsolvable. We can also prove in ZFC that if E is solvable, there is a proof for this (compute the value of E on the purported solution). I guess (I’m not entirely sure) inside ZFC we can carry out all these steps, but we have to assume ZFC is consistent in order to draw the conclusion that P(E(i)) halts implies Mi(i) doesn’t. Am I Pascal Koiran Says: Comment #24 May 15th, 2008 at 11:54 am David, at first glance I am happy with your argument; the only thing that worries me is that the conclusion that you reach is in contradiction with mine… I am afraid that I don’t know enough set theory to be sure who’s right. We need a wise, fair and knowledgable referee in this case: Scott, please weigh in! Scott Says: Comment #25 May 15th, 2008 at 12:35 pm Pascal: Yes, I believe it’s possible to write down an explicit Diophantine equation that has a solution iff ZFC is inconsistent (and hence, whose solvability is independent of ZFC). This is non-obvious but should follow from the Matiyasevich/MRDP Theorem. I don’t know if that Diophantine equation will need variables in the exponents or not. Does anyone who actually knows want to enlighten us? Pascal Koiran Says: Comment #26 May 15th, 2008 at 3:36 pm Scott, I am beginning to believe it too. The equation E would “simulate” a Turing machine that enumerates all proofs in ZFC and halts when it finds a contradiction. So indeed, Solvable(E) is independent of ZFC (assuming that ZFC is consistent). However, the statement E’: Solvable(E) “ZFC inconsistent” would presumably be provable in ZFC (because the Matiyasevich-based argument can presumably be formalized within ZFC). So the next challenge would be to exhibit a diophantine equation E such that the associated statement E’ is independent of ZFC! zzz Says: Comment #27 May 15th, 2008 at 3:41 pm CH just says sets are either computably enumerable |Z|, or uncomputable |R|. Never really understood what the fuss is. Pascal Koiran Says: Comment #28 May 15th, 2008 at 3:44 pm Also I think you would not need variables in the exponents of E: Matiyasevich showed that those don’t buy you any additional power (and the conversion from these “exponential diophantine equations” to ordinary diophantine equations is effective!) Pascal Koiran Says: Comment #29 May 15th, 2008 at 3:48 pm One last comment for tonight: in my definition of E’ an equivalence sign got eaten by wordpress. It should read: Solvable(E) equivalent to “ZFC inconsistent”. Sam Nead Says: Comment #30 May 15th, 2008 at 11:08 pm I have a question which I hope belongs in this thread: Suppose that T is a Turing machine and N is an input. It is possible that there is a proof that T(N) halts and perhaps there is a proof that T (N) does not halt. Of course, a formal proof assumes some collection of axioms. So, can there be a pair (T, N) which provably halts if we assume ZFC and provably does not halt if we assume ZF+notC? Can the workings of a Turing machine depend on the axioms we assume? And if so, what could this possibly mean? Joseph Hertzlinger Says: Comment #31 May 16th, 2008 at 12:15 am I’ve been trying to make up a joke but all I have is the beginning, so sorry if that gets you ready to hear something funny, but the funny part doesn’t come. It begins: Kurt Godel died in 1978 and went immediately to heaven, where like all new arrivals, he was given wings and a harp and asked if he had any questions. The first thing he wanted to know was whether the continuum hypothesis is true. … By analogy with the Pauli joke, God would hand Godel a paper with an elegant philosophical argument answering the Continuum Question. Godel would leaf through it, point to page epsilon 0, and say “There’s a mistake right over here…” asdf Says: Comment #32 May 16th, 2008 at 12:36 am Sam Nead, I think there is a TM like you are asking for. Someone more knowledgeable should confirm/unconfirm this, but I believe that the MRDP theorem implies that one can construct a diophantine equation system that has a solution (set of integers) iff AC is true. So the TM would just start enumerating sets of integers and checking whether they were a solution to that diophantine system, halting if a solution is fonud. Scott Says: Comment #33 May 16th, 2008 at 2:01 am Sam and asdf: No, a given Turing machine either halts or doesn’t halt; it makes no difference what axioms you assume! This is an absolutely crucial point, and is a huge part of what I was trying to get across in this post. You might ask, what makes me so sure? Well, suppose you want to believe that a given Turing machine M has “indeterminate” behavior — i.e., that there’s no objective fact about whether M halts or not, separate from what can be proved about the question in various formal systems like ZFC. Then why on earth would you suppose there’s an objective fact about whether ZFC proves M halts? After all, the existence of a proof just corresponds to the halting of another Turing machine. So you see that there’s an infinite regress: if the question “does M halt?” is meaningless in the absence of a proof one way or the other, then question “is there a proof?” is equally meaningless. Let’s consider two concrete examples: 1. A Turing machine that searches for inconsistencies in ZFC will run forever, assuming ZFC is consistent. Of course ZFC can’t prove it runs forever, but that’s just because ZFC is consistent (i.e., because it does run forever)! 2. On the other hand, I now claim that assuming ZF+Con(ZF) is consistent, there’s no Turing machine that can be proved in ZF to halt iff AC is true. Proof: Suppose such a machine M existed. Then ZF |= “If M halts, then AC is true.” This implies that if M halts, then ZF proves AC. But we know from Cohen that if ZF is consistent then ZF doesn’t prove AC. Hence M doesn’t halt. Now, the whole argument above can be formalized in the system ZF+Con(ZF). Hence ZF+Con(ZF) proves that M doesn’t halt. This means that ZF+Con(ZF) proves not(AC). But we know that ZF+Con(ZF) doesn’t prove not(AC), assuming ZF+Con(ZF) is consistent. (I can’t remember who showed this, but it follows from the fact that large cardinal axioms don’t decide AC.) So M can’t exist, QED. (Question for experts: can one weaken the assumption in the above result, from ZF+Con(ZF) is consistent to ZF is consistent?) So asdf: no, the MRDP theorem can’t possibly yield a diophantine system that has a solution iff AC is true. What you might have been thinking is this: the MRDP theorem does yield a diophantine system that has a solution iff ZF proves AC. But that’s a different question, and assuming ZF is consistent we know the answer to it (no) — hence the diophantine system in question simply won’t have a asdf Says: Comment #34 May 16th, 2008 at 8:13 am Scott, doh, thanks, that was a good explanation of my silly error. What I was actually thinking may have been somewhat different: Is there a Turing machine T such that 1) T halts on every input 2) That T halts on every input is provable in ZFC, but it is not provable in ZF in the absence of AC. I believe the answer to the above is yes. wolfgang Says: Comment #35 May 16th, 2008 at 9:02 am > So, can there be a pair (T, N) which provably halts if we assume ZFC and provably does not halt if we assume ZF+notC? Now this is a really stupid question: What if N (the input) is simply “we assume ZFC” or “we assume ZF+notC” in whatever encoding and T (the Turing machine) simply checks if it is one or the other? wolfgang Says: Comment #36 May 16th, 2008 at 9:31 am Sorry, please disregard my previous comment, it makes no sense. Jack in Danville Says: Comment #37 May 16th, 2008 at 11:21 am I get it! There are only enumerably transfinite quantities in the physical world. That’s a profound statement. It follows there are ultimate Planckian units of time and volume, the points of the physical world; and geodesics have a dimension orthogonal to the dimension of direction. Gil Kalai Says: Comment #38 May 16th, 2008 at 12:26 pm “For mathematicians, this distinction between “CH-like questions” and “Goldbach/Riemann/Pvs.NP-like questions” is a cringingly obvious one, probably even too obvious to point out. But I’ve seen so many people argue about Platonism versus formalism as if this distinction didn’t exist — as if one can’t be a Platonist about integers but a formalist about transfinite sets — that I think it’s worth hammering home.” I must admit that I do not see this distinction as obvious or clear, in fact, I do not see it. I am not sure it is meaningful for practicing mathematics. For both kinds of questions some intuition emerges and occasionally the intuition is incorrect. Scott Says: Comment #39 May 16th, 2008 at 4:48 pm It follows there are ultimate Planckian units of time and volume, the points of the physical world; and geodesics have a dimension orthogonal to the dimension of direction. Huh? I don’t understand what you’re talking about (what’s the “dimension of direction”?). I was only arguing for an implication in the other direction: it follows from the fact that we’re finite creatures who can only ever engage in finite chains of reasoning (and in particular, from the Church-Turing Thesis), that CH can have no effect on us separate from its provability. Scott Says: Comment #40 May 16th, 2008 at 4:54 pm Is there a Turing machine T such that 1) T halts on every input 2) That T halts on every input is provable in ZFC, but it is not provable in ZF in the absence of AC. Interesting question! I’m pretty sure a generalization of the argument from my previous comment would rule this out. I just landed in Seattle and am way too tired to think, but let me get back to you about it later (unless someone wants to beat me to the punch). asdf Says: Comment #41 May 17th, 2008 at 12:34 am Joseph Hetzinger, I like your conclusion of the joke. Maybe Woodin’s argument could even be incorporated into it somehow. Gil and Scott, I thought the difference between a Goldbach-type question and a CH type question was the number of alternating quantifiers. That would make the twin prime conjecture (that there are infinitely many p such that p and p+2 are both prime) a CH-type question: it doesn’t involve any uncountable sets, but it is not necessarily decidable by a Turing machine as either true or false. Re being a platonist about the integers but a formalist about transfinite sets: well, what are the integers anyway? They are described (up to isomorphism) by the Peano axioms in second-order logic, but believing those axioms would impute some kind of existence to every subset of the integers, and there are uncountably many such subsets… Scott Says: Comment #42 May 17th, 2008 at 1:14 am asdf: No, the difference between “Goldbach-type questions” and “CH-type questions” has nothing to do with the number of quantifiers. The difference is that in the one case the quantifiers range over integers, while in the other they range over transfinite sets. And no, I don’t agree that the Peano axioms “impute some kind of existence” to every subset of integers. After all, the key point about PA is that it only involves quantification over integers, and not over sets of integers. asdf Says: Comment #43 May 17th, 2008 at 1:16 am Arggh, I misspelled Joseph Hertzlinger’s name above, I was thinking of someone else after scrolling down. My apologies. Scott, I don’t see how to convert your argument about ZF vs ZFC into one where we’re only discussing provability of whether the machine halts. But, I’m not very good at this subject, as you can surely tell. BTW, here is an interesting article by Doron Zeilberger, who doesn’t believe in any infinite sets of any kind, i.e. he thinks there are really only finitely many integers and there is a largest one, and shows how to develop calculus from there: pdf link. asdf Says: Comment #44 May 17th, 2008 at 1:25 am PA (first order Peano arithmetic) quantifies only over integers, but because of that, it has nonstandard models. Goodstein’s theorem is a fairly straightforward theorem about integers that is unprovable in PA (but provable in PA+CON(PA) if I understand it right). The classical Peano axioms contain an induction axiom which says (from Wikipedia “Peano axioms”): If K is a set such that: * 0 is in K, and * for every natural number n, if n is in K, then S(n) is in K, then K contains every natural number. This is where second-order logic comes in: that axiom quantifies over all sets of integers. The result is that the Peano axioms have only one model. However, since it’s second-order logic, the completeness and compactness theorems of first-order logic don’t hold, so there are sentences about the unique model of the Peano axioms that are true but are not theorems. I don’t think we can say first-order PA describes the platonic integers, since PA has (as you put it) multiple consistent extensions. cody Says: Comment #45 May 17th, 2008 at 10:31 am asdf, havent you read Scott’s biggest number essay? …Doren must have discovered that 83 is the largest integer. cody Says: Comment #46 May 17th, 2008 at 11:14 am i am not intimate enough with ZFC to know why we all have such confidence (faith?) in its consistency. so, as a physicist, id like to admit that lack of simultaneity is still a very non-intuitive result for me to cope with… and so my question is, why are Gödel’s incompleteness theorems so well accepted when ZFC is not provably consistent? also, in regards to the last post, mathematicians seem to be (on average), the most demanding, least accepting of conjecture, group of individuals so far established, (if not possible, thanks Cauchy), so its hard to imagine you guys biting bullets at all. which is intended as a compliment, not criticism. John Sidles Says: Comment #47 May 17th, 2008 at 11:32 am I’d like to thank asdf for providing the link to Doron Zeilberger’s ultrafinitist manifesto … this essay was a lot of fun to read! Zeilberger’s observation that “There are many ways to divide mathematics into two-culture dichotomies” was for me an especially enjoyable starting point. Zeilberger’s essay divides mathematics into an (old-fashioned) culture of the continuous versus a (new-fangled) culture of the discrete. The essay then argues that the discrete side of the dichotomy contains all of the truth of the continuous side, with none of the bedeviling transfinite conundrums. But is this really the case? Zeilberger’s essay notes the ubiquity of interval arithmetic in theorem-proving on the discrete side. He asserts that this arithmetic is governed by “obvious rules” … leaving the reader to assume that these rules have no philosophical depth. But are all the implications of interval arithmetic’s seeming simple rules really obvious? Definitely not! Until quite recently, for example, it definitely was *not* obvious that computing the cube of an interval matrix is NP-hard. Zeilberger’s essay thus confronts us with an unexpectedly difficult dichotomous choice between two mathematical paradises: Cantor’s transfinite paradise—which has the flaw that seeming truths in set-theory are undecidable—versus Zeilberger’s “ultrafinite paradise”—which the flaw that even the simplest mathematical questions have answers that are NP-hard to compute. Since my own mathematical philosophy is unitarian, it is for me an article of faith that all mathematical paradises are fundamentally the same paradise. It follows, therefore, that the transfinite obstruction to uniquely choosing set-theory axioms must be identical to the ultrafinite obstruction of proving P≠NP. Having demonstrated philosophically that this transfinite / ultrafinite equivalence must exist, I will leave the details of actually proving it to mathematicians who are wiser than myself! Jack in Danville Says: Comment #48 May 17th, 2008 at 1:15 pm Doh! Well I thought (and bought) you were arguing there are physically no transfinite sets of cardinality greater than Aleph-null. That would apply to points in physical space. If the collection of points in a line segment, or any finite path, cannot have a higher cardinality, I cannot see how the set can have a cardinality of merely Aleph-null, so the points in a finite path, or a finite volume of space, must be finite. (If I haven’t already gotten into trouble, surely this is where I do.) Finite points in finite space requires a smallest unit of space (a Planck volume?). Any path in spacetime, for instance a geodesic, would consist of a series of these teeny-tiny volumes strung together. Hence as well as having length the path would have a circumference (an additional dimension perpendicular to the dimension of length). Pascal Koiran Says: Comment #49 May 17th, 2008 at 2:38 pm Even if there is a smallest unit of length, quantum mechanics could provide the continuous with a victory over the discrete: amplitudes are complex numbers, and I’ve never heard of a “smallest Have the physicists ever proposed such a thing ? Bram Cohen Says: Comment #50 May 17th, 2008 at 5:04 pm Scott, given that ZFC is consistent, doesn’t that mean that every diophantine equations with no solutions qualifies as one having solutions iff ZFC is inconsistent? Scott Says: Comment #51 May 17th, 2008 at 6:34 pm Bram: What you want, and what Matiyasevich/MDRP gives you, is a Diophantine equation that can be proved in ZFC to have solutions iff ZFC is inconsistent. Scott Says: Comment #52 May 17th, 2008 at 6:41 pm Pascal: Plenty of people have speculated about “QM with discrete amplitudes,” but no one has proposed such a theory that makes any sense. The fundamental problem is that the discrete subgroups of the unitary group all seem to be “trivial” (e.g., they don’t allow entanglement) or “unphysical” (e.g. the Clifford group). Scott Says: Comment #53 May 17th, 2008 at 6:54 pm Well I thought (and bought) you were arguing there are physically no transfinite sets of cardinality greater than Aleph-null. Jack: Sets are mathematical objects; I’m not even sure what it would mean for them to “physically exist.” For me the question is not what exists; it what we ever need to invoke to explain our experiences. Because we’re finite beings, who live for finite amounts of time and discriminate between observations with finite precision, all our knowledge and reasoning can be expressed as finite strings of bits. Goldbach’s Conjecture and the Riemann Hypothesis both make predictions about what the outcomes of certain operations on finite strings of bits are going to be, whereas CH makes no such prediction. That’s the key difference between them as I see it. Scott Says: Comment #54 May 17th, 2008 at 6:56 pm my question is, why are Gödel’s incompleteness theorems so well accepted when ZFC is not provably consistent? Cody: I wouldn’t say Gödel’s theorems require us to “assume” ZFC is consistent. They say either there are such-and-such limits on what ZFC can prove, or else ZFC is inconsistent — in which case it can prove anything, but who cares? John Sidles Says: Comment #55 May 17th, 2008 at 7:54 pm Pascal: lattice gauge theory has all the ingredients you require — space is discrete, the values of the gauge fields are (or can be chosen to be) discrete too, and the resulting theory is well-posed mathematically, efficient algorithmically, and can be directly linked to experiment. This article by Kenneth Wilson is a wonderful account of how all these ideas were worked-out. From a fundamental physics point of view, however, this discretizing leads nowhere — by design! — because the whole point is to devise a lattice theory such that the discreteness parameter disappears from the final predictions. This is yet another example of the ubiquity of “duality” in physics and mathematics … in which the main point of Discipline “A” commonly appears as a small parameter or unwanted side-effect of Discipline “B”. My own interest in the Continuum Hypothesis chiefly resides in trying to guess what other problems it might be dual to. I am a little bit surprised that no one else is posting about this point of komponisto Says: Comment #56 May 18th, 2008 at 1:22 am Goldbach’s Conjecture and the Riemann Hypothesis both make predictions about what the outcomes of certain operations on finite strings of bits are going to be, whereas CH makes no such prediction. That’s the key difference between them as I see it. But if you’re a formalist, to ask about CH is just to ask whether there is a proof of CH in ZFC — and then we’re right back in Turing Machine Land. So my question for you, Scott, is: why aren’t you a formalist? asdf Says: Comment #57 May 18th, 2008 at 4:31 am A formalist might believe that ZFC is not the best formalization of set theory for doing math in. They might prefer some other axioms instead. And then the question of whether CH is a theorem is back in play. Scott Says: Comment #58 May 18th, 2008 at 9:27 am komponisto: I’m reluctant to buy into any sort of -ism without being sure of what I’m getting. So for example, could a formalist believe P≠NP, even supposing the question were proved independent of ZFC? If not, then I am not a formalist. komponisto Says: Comment #59 May 18th, 2008 at 1:57 pm Along the lines of asdf’s comment, I suspect that if the P vs. NP question were to be proved independent of ZFC, there would be a movement to revisit ZFC’s status as the “official” axiom system of mathematics. (Indeed, there was/is such a movement with CH, but it hasn’t really taken off, I suppose because CH is not seen as a particularly urgent question by the larger mathematical community.) Like Platonists, formalists can have preferences among axiom systems; the difference is that formalists don’t attribute “incorrectness” to the systems they’re less interested in. John Sidles Says: Comment #60 May 18th, 2008 at 2:53 pm Komponisto, please correct me if I’m wrong, but if the P vs. NP question were proved to be independent of ZFC, wouldn’t that immediately imply P≠NP? On the following grounds. One proof that P = NP would be a concrete algorithm in P that solved NP-complete problems. So if P≠NP is independent of ZFC, then no such proof exists, and hence, no such algorithm exists. Probably this point is already clear to most people … or else my own understanding of this implication is simply wrong. komponisto Says: Comment #61 May 18th, 2008 at 3:42 pm John: My understanding from reading Scott’s paper on this topic is that there might be such an algorithm, but it might be impossible to prove that it works. By the way, I thought your earlier comment was right on the mark. komponisto Says: Comment #62 May 18th, 2008 at 3:43 pm Let’s try that second link again. asdf Says: Comment #63 May 18th, 2008 at 5:25 pm CH used to be an urgent issue. It stopped being urgent when Cohen proved its independence. I don’t think it’s possible to prove P!=NP is independent of ZFC. I.e. it might be independent, but (within ZFC) there can be no proof of this. The reason is that P!=NP is a statement about the standard integers, and these are the same in every model of ZFC, unlike the situation with CH. Maybe Scott’s article says more about this. I should re-read it now that I know a little bit more logic than I did the last time. asdf Says: Comment #64 May 18th, 2008 at 5:28 pm No wait, what I said above makes no sense. If the standard integers are the same in every model of ZFC, then obviously a first-order statement about them can’t be independent. Can somebody straighten me out? Scott Says: Comment #65 May 18th, 2008 at 6:26 pm Live from STOC 2008: asdf: It’s perfectly conceivable (even if astronomically unlikely) that P≠NP could be proved independent of ZFC, despite being about standard integers. Consis(ZFC) is also about standard integers, but we know it’s independent of ZFC. John: If Goldbach’s Conjecture were proved independent of ZFC, that would immediately imply Goldbach’s Conjecture. However, the same is not true for P≠NP. The difference is that if Goldbach is false, then there’s necessarily a proof it’s false; but if P=NP, then there’s not necessarily a proof (as komponisto says, there could be a polytime algorithm for SAT, but no way to prove its efficiency or Job Says: Comment #66 May 18th, 2008 at 7:50 pm If “is P!=NP?” is a particular instance of a problem L and “is P=NP?” is an instance of a problem L’, then would L be in NP? What about L’? Formally i don’t know what L’s input is, but it seems plausible that given a proof that P!=NP, it can be quickly verified. Is that probably the case? Job Says: Comment #67 May 18th, 2008 at 7:58 pm In more detail, to avoid asking a blurry question, suppose L is the problem: Given two complexity classes A and B, are they different? And L’ would be the complement. Scott Says: Comment #68 May 18th, 2008 at 8:22 pm Job, “complexity class” is itself a blurry notion. John Sidles Says: Comment #69 May 18th, 2008 at 9:17 pm Konponisto and Scott, thank you both for your clarifying replies, which helped my understanding a lot … Scott’s survey article was a very great help too. Tricky stuff, this set theory! mitchell porter Says: Comment #70 May 18th, 2008 at 9:35 pm Scott: can you imagine a “physical process” whose outcome could depend on whether there’s a set larger than the set of integers but smaller than the set of real numbers? Naively it seems possible that the subset structure of the continuum might have ‘detectable’ implications for real analysis, and hence for continuum-based physics. I’d ask the gurus of FOM, such as Harvey Friedman, some of whom have worked on the practical implications of large cardinal axioms. Walt Says: Comment #71 May 18th, 2008 at 11:25 pm As far as I understand your argument, Scott, it advances two basic claims. The first claim is that there exist statements whose truth has no impact on what theorems are true about integers and Turing machines. It’s not obvious that this is true. Fortunately, it really is true, and the Continuum Hypothesis is an example. Forcing cannot change the truth of any statement about the integers, so any statement proven independent by forcing cannot have any consequences for the integers. But this is only one of the two main ways to prove statements independent of ZFC. The other main way is to prove that the statement implies the consistency of ZFC, which means such a statement makes predictions about integers as well as the reals. The types of things Woodin talks about are of this type. The other claim is that there are no physical processes that can’t be modelled as Turing machines. I have to admit everything I know about this claim I learned from your blog, but my sense is that has the statement of a plausible conjecture, rather than an established fact. JerboaKolinowski Says: Comment #72 May 19th, 2008 at 6:35 am Hi Scott, I think I can imagine a physical process whose outcome depended on the existence of a set larger than the integers but smaller than the reals. However, my admitted lack of mathematical sophistication may make this easier for me than for some! In my limited understanding, the independence of CH from ZFC means that in speaking of the “existence” such a set we must be using the word “existence” as a gloss for “existence under some set of axioms which is not ZFC”, and so my imagined physical process depends on there being some interesting set of axioms under which we are prepared to say that there either is or is not a set greater than the integers and less than the reals. The physical process I imagine, then, is just some mathematician writing down a proof under this as-yet-undreamed-of axiom set, where the (non)existence of the set in question is a result or dependency of the proof (or, if you like, a machine check of this proof). Because I am a mathematical unsophisticate, I don’t have to try very hard to imagine this. In particular, I don’t feel the need to specify the axiom set – I just imagine the mathematician in the act of writing and leave the details to her In this respect, the “existence” of the set seems in a position no different from the “existence” of other mathematical objects: it “exists” in the same way we would say that the solution to a problem “exists”. Naturally some problems (or axiom sets) seem more important or fundamental to us than others, and in those cases we’re more tempted towards a platonic viewpoint, perhaps. david Says: Comment #73 May 19th, 2008 at 9:54 am Sidles: Even if a proof that P=NP gives a concrete algorithm in P to solve SAT, this doesn’t mean that it is provable in ZFC that it runs in polynomial time. So the question may be independent of ZFC and still this need not imply that P=NP. John Sidles Says: Comment #74 May 19th, 2008 at 11:14 am Thank you David .. and more generally, many thanks to *everyone* who is contributing to this very enjoyable topic … I’ve learned a lot, and I’m sure many other folks have too. Again, I especially commend Scott’s survey Is P Versus NP Formally Independent? as a starting point for further reading. John Sidles Says: Comment #75 May 19th, 2008 at 2:35 pm Folks following this thread might enjoy reading the numerous good research ideas on the wiki Vision Nuggets for Theoretical Computer Science. Included on the wiki are Scott’s nugget entitled Efficient computation in the physical world (?) and Avi Wigderson’s nugget entitled P != NP as a law of nature. Raoul Ohio Says: Comment #76 May 19th, 2008 at 3:54 pm There are about 37k Google hits for “joke about Wolfgang Pauli and the fine structure constant” (Turns out I remember it). Is this a great universe, or what? RM Says: Comment #77 May 20th, 2008 at 8:17 pm But can you imagine a “physical process” whose outcome could depend on whether there’s a set larger than the set of integers but smaller than the set of real numbers? If so, what would it look like? Well, venturing into wild speculation here, such a process might look something like the Casimir Effect. Imagine a similar phenomenon (which I’ll call the Rimisac Effect just to make it clear that I am not claiming that the Casimir Effect forces us to believe the continuum exists, and to allow me to bend to physics a bit) in which two plates in empty space are drawn together by something akin to vacuum modes. There are an infinite number of such modes both between the plates and outside them, but the infinity within is countable while the infinity without is uncountable. Now suppose we know from experiments with exciting modes that these modes are energetically degenerate: the energy density of a cavity doesn’t depend on which modes are excitied, only on the number of excitations, and our theory says that this fact should carry over to the vacuum limit (no excitations). Thus we find that the energy density in any countably infinite set of vacuum modes is the same, but less than it would be for an uncountable set. Add in some theorem that no discrete model of the universe (satisfying some reasonable constraints) can give rise to uncountably infite modes outside the plates, and the Rimisac experiment may well give an empirical measurement of whether the continuum “exists” in this universe. As for the Continuum Hypothesis, let us imagine that we can modify the Rimisac experiment with some sort of metacavity structure that sets the countable and uncountable vacua in a carefully balanced tension such that theorists predict will either result in erratic jumps between the two types or else equilibrate to an intermediate infinity if such a thing “exists”. Thus in this hypothetical scenario the Continuum Hypothesis could be equivalent to the claim that there can exist cavities with Rimisic energy densities greater than that of countable-mode cavities and less than that of the continuuous vacuum. I make no claims about the relevance to actual physics, only that this seems to be a conceptual cartoon of what it might “look like” for the CH to be relevant to the physical Job Says: Comment #78 May 21st, 2008 at 1:47 am I suppose complexity classes like P, NP, etc are undecidable languages. We can’t have a TM decide the elements of P or NP. In addition P_k and NP_k also seem to be undecidable. P_k being the class containing languages whose TMs complete in n^k time, similarly for NP_k. If P and NP are “probably” different, where would they start diverging at the beginning? Or maybe would P_1 = NP_1, P_2 = NP_2 but then P_3 != NP_3 and so on? The complexity classes P_(ksb) and NP_(ksb) would be decidable. These being the classes containing languages whose TMs complete in n^k time on inputs less than s in length and where the TM definition takes no more than b bits. If P_(ksb) is not equal to NP_(ksb) for some given values of k, s and b, then does this imply that P != NP? If P != NP, then must there be values of k, s and b such that P_(ksb) and NP_(ksb) are different? In other words, can a brute force attempt at settling P vs NP succeed? Job Says: Comment #79 May 21st, 2008 at 1:55 am I apologize in advance if the above makes no sense, sometimes i write things without thinking them through and usually regret it. Jonathan Vos Post Says: Comment #80 May 21st, 2008 at 10:05 am RM’s “Rimisac Effect” is clever but does not, I think, answer the question. Because of particle-wave duality and Bohr’s principle of complementarity, the wave description of Casimir effect is dual to a photon description. I have not seen a good theoretical nor experimental description of photons with infinitesimal energy, for either of several definitions of infinitesimal. Note that Cantor did believe that his hierarchy of infinity applied to an atomistic model of physical reality, and he said that the mind (or soul) was made of infinitesimal particles of a higher order of infinity than the particles of matter. But he never said how this could be tested, and nobody believed him. Walt Says: Comment #81 May 21st, 2008 at 9:42 pm I just read Scott’s survey, and it is good. It also makes me realize 80 percent of my comment was superfluous… Yury Says: Comment #82 May 23rd, 2008 at 4:32 pm I’d like to answer the question Scott asked in comment 33: can one weaken the assumption in the above result, from ZF+Con(ZF) is consistent to ZF is consistent? Yes, it suffices to assume that ZF is consistent. The truth value of the formula “M halts” depends only on the set of natural numbers, \omega, in the model. If two models have the same natural numbers then either M halts in both models, or M halts in neither of them (that is, “M halts” is an absolute formula). In particular, “M halts” relativized to the constructible universe L is true if and only if “M halts” is true in V. Now we choose a model T of ZF in which AC is false, we get that T |= AC is false, T |= AC^L is true If ZF implied that “AC holds iff M halts”, then we would get that T |= M doesn’t halt T |= (M halts)^L we would get a contradiction. In general, when we prove independence results by forcing or by considering L, we don’t change truth values of arithmetic formulas. Darran Says: Comment #83 May 23rd, 2008 at 4:55 pm This is the first time I’ve heard somebody else say this, but it’s kind of what I’ve always thought as well. That is to say I think questions like P=NP and the Riemann hypothesis have real answers “out there in the heavens above”. However I’m a formalist about stuff like the Continuum hypothesis. Things like the Continuum hypothesis have always struck me as artefacts of the language of set theory. Sets are such a basic concept that they’re approaching the point of concepts left undefined and can easily lead to self-contradiction if they aren’t limited in their scope. However if you limited them too much they don’t really correspond to our intuitive understanding of a set. Hence we end up with axioms like ZFC which are purposefully vague about exactly when something is a set. Which leads to a question like CH having no answer since it asks how many subsets of the Naturals there are. Definitely something you can’t answer if you haven’t said exactly what a set is. Hence CH seems to be a language thing. Constrast this with P=NP or Cauchy’s integral theorem, very definite statements about well defined objects. Job Says: Comment #84 May 25th, 2008 at 11:32 pm Scott, if i’m getting annoying let me know and i’ll stop spamming but i have a question on a variant of P vs NP, namely: Is P “exactly” equal to NP? In other words, given that a solution to problem P can be verified in exactly n^k time and g(n) space, can it also be solved in exactly n^k time and g(n) space? Do we know the answer to this already? I was thinking about that question and the following problem doesn’t seem to be verifiable as quickly as it is solvable: Given an array of n numbers, identify a sequence of n or less steps that can sort the array. It requires at least nlogn operations to solve but can be verified in linear time, isn’t that right? But this isn’t a yes/no problem anyway. Do we know that P isn’t “exactly” equal to NP? Scott Says: Comment #85 May 26th, 2008 at 12:07 am Job, that’s actually an excellent question, and it turns out that we do know the answer to it. Paul, Pippenger, Szemeredi and Trotter showed in 1983 that DTIME(n)≠NTIME(n); that is, there are problems solvable in nondeterministic linear time but not deterministic linear time, on reasonable models such as multi-tape Turing machines. (Their lower bound on the deterministic time needed to solve these problems is n times an extremely slow-growing function of n, basically iterated log.) However, with all such results you need to be careful in defining the model. Regarding your sorting example, there are several issues: (1) Even verifying a sort will require n*log(n) time, if we measure time by the number of bit operations. This is because, if each number in the array has at least n possible values (as is needed for the n*log(n) lower bound to hold), then the numbers will take log(n) bits each to specify, hence even reading them all will take n*log(n) time. (2) If we instead adopt the comparison model (where the only allowed operation is to compare two numbers, but each comparison takes unit time), then if we’re going to be consistent and apply the same rules to the witness, it again will take n*log(n) time to verify a sort (by a generalization of the standard proof that sorting takes n*log(n) time in the comparison model). (3) If we consider more powerful models — which involve “unit-cost comparisons” but also bit operations — then the n*log(n) lower bound for sorting breaks down (in many such models it’s actually known to be false). So, I don’t know whether there’s any reasonable model for which sorting yields a separation between DTIME(n) and NTIME(n). Job Says: Comment #86 May 26th, 2008 at 1:04 am How cool, that’s really interesting. sirix Says: Comment #87 May 26th, 2008 at 6:35 pm Scott, I generally agree with your article. However, today I’ve learned about Whitehead Problem (see wikipedia). I am stunned that it is undecidable. I’m not saying that it contradicts your logic or anything, but still, I’m stunned, and it does change my thinking about set theory and stuff a bit. Joe Shipman Says: Comment #88 May 26th, 2008 at 11:11 pm No arithmetical statement can depend on CH, but the ontology of fundamental physical theories involves sets that are not only uncountable, but several levels of infinity up from the integers. Itamar Pitowsky showed that the EPR paradox could be resolved if a certain kind of nonmeasurable set exists, which allows the physical weirdness to be explained by a Banach-Tarski-like mathematical weirdness; his model (later elaborated by Stanley Gudder) involved iterated integrals of (necessarily nonmeasurable) functions giving different answers with the order of integration corresponding to the order noncommuting observables were measured. Pitowsky and Gudder USED CH to build their models. In my thesis (“Cardinal Conditions for Strong Fubini Theorems”, October 1990 Transactions of the AMS) I showed that some such trans-ZFC assumption was necessary, because it is consistent with ZFC that iterated integrals (for non-negative functions to avoid trivial counterexamples) always match when they exist. My favorite candidate for an axiom that settles CH is the existence of a countably additive measure on ALL (not just the measurable) subsets of the reals. This axiom (known as RVM for “real-valued measurable cardinal”) is equiconsistent with a measurable cardinal, entails the continuum being very large (having many “weakly inaccessible cardinals” below it), and has a certain intuitive plausibility. (It also implies the strong Fubini theorems I alluded to above — iterated integrals must agree when they exist.) The Banach-Tarski phenomenon shows that such a measure could not be rotationally invariant. But a possible alternative history for physics could have had Riemann not die young and discover General Relativity quite early, before the quantum theory destroyed the intuition of continuous space. In this alternate timeline, Banach-Tarski might been discovered shortly thereafter and the assumption that was sacrificed as unphysical could have been the isotropy of space rather than its continuity, since General Relativity already requires anisotropy and assumes continuity, and so RVM could have come to be accepted as an axiom. By the time quantum mechanics had been discovered, RVM would have been shown so fruitful (it proves Con(ZFC) for example!) that it would be retained as a mathematical axiom and CH would be considered settled. Scott Says: Comment #89 May 27th, 2008 at 12:24 am Yury #82: Thanks very much for sharing! If I’m not mistaken, your argument should also answer in the negative the question from comment #40 (which I just realized I never answered). John Sidles Says: Comment #90 May 27th, 2008 at 12:35 am Joe Shipman sez: “The Banach-Tarski phenomenon shows …” Joe, have you ever read White Light? It’s one of the very few quasi-comedic novels ever written about the CF … which is what earned it a place in our UW QSE Group’s library of subversive literature. Perhaps I’ll transcribe some excerpts from White Light in the next few days … it includes quite a few quotations from Cantor’s own writings on the physical and meta-physical implications of CH. Yury Says: Comment #91 May 28th, 2008 at 11:29 am Scott, yes, the argument also gives a negative answer to the question in post #40.
{"url":"http://www.scottaaronson.com/blog/?p=327","timestamp":"2014-04-20T18:23:08Z","content_type":null,"content_length":"90986","record_id":"<urn:uuid:d5b2f385-64c7-461b-86a9-e2e4085e6372>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
August 15th, 2010 Recall that a unitarily-invariant matrix norm is a norm on matrices X ∈ M[n] such that One nice way to think about unitarily-invariant norms is that they are the matrix norms that depend only on the matrix’s singular values. Some unitarily-invariant norms that are particularly well-known are the operator (spectral) norm, trace norm, Frobenius (Hilbert-Schmidt) norm, Ky Fan norms, and Schatten p-norms (in fact, I would say that the induced p-norms for p ≠ 2 are the only really common matrix norms that aren’t unitarily-invariant – I will consider these norms in the future). The core question that I am going to consider today is what linear maps preserve singular values and unitarily-invariant matrix norms. Clearly multiplication on the left and right by unitary matrices preserve such norms (by definition). However, the matrix transpose also preserves singular values and all unitarily-invariant norms – are there other linear maps on complex matrices that preserve these norms? For a more thorough treatment of this question, the interested reader is directed to [1,2]. Linear Maps That Preserve Singular Values We first consider the simplest of the above questions: what linear maps Φ : M[n] → M[n] are such that the singular values of Φ(X) are the same as the singular values of X for all X ∈ M[n]? In order to answer this question, recall Theorem 1 from my previous post, which states [3] that if Φ is an invertible map such that Φ(X) is nonsingular whenever X is nonsingular, then there exist M, N ∈ M[n] with det(MN) ≠ 0 such that In order to make use of this result, we will first have to show that any singular-value-preserving map is invertible and sends nonsingular matrices to nonsingular matrices. To this end, notice (recall?) that the operator norm of a matrix is equal to its largest singular value. Thus, any map that preserves singular values must be an isometry of the operator norm, and thus must be invertible (since all isometries are easily seen to be invertible). Furthermore, if we use the singular value decomposition to write X = USV for some unitaries U, V ∈ M[n] and a diagonal matrix of singular values S ∈ M[n], then det(X) = det(USV) = det(U)det(S)det(V) = det(UV)det(S). Because UV is unitary, we know that |det(UV)| = 1, so we have |det(X)| = |det(S)| = det(S); that is, the product of the singular values of X equals the absolute value of its determinant. So any map that preserves singular values also preserves the absolute value of the matrix determinant. But any map that preserves the absolute value of determinants must preserve the set of nonsingular matrices because X is nonsingular if and only if det(X) ≠ 0. It follows from the above result about invertibility-preserving maps that if Φ preserves singular values then there exist M, N ∈ M[n] with det(MN) ≠ 0 such that either Φ(X) = MXN or Φ(X) = MX^TN. We will now prove that M and N must each in fact be unitary. To this end, pick any unit vector x ∈ C^n and let c denote the Euclidean length of Mx: By the fact that Φ must preserve singular values (and hence the operator norm) we have that if y ∈ C^n is any other unit vector, then Because y was an arbitrary unit vector, we have that N^* = (1/c)U, where U ∈ M[n] is some unitary matrix. It can now be similarly argued that M = cV for some unitary matrix V ∈ M[n]. By simply adjusting constants, we have proved the following: Theorem 1. Let Φ : M[n] → M[n] be a linear map. Then the singular values of Φ(X) equal the singular values of X for all X ∈ M[n] if and only if there exist unitary matrices U, V ∈ M[n] such that Isometries of the Frobenius Norm We now consider the problem of characterizing isometries of the Frobenius norm defined for X ∈ M[n] by That is, we want to describe the maps Φ that preserve the Frobenius norm. It is clear that the Frobenius norm of X is just the Euclidean norm of vec(X), the vectorization of X. Thus we know immediately from the standard isomorphism that sends operators to bipartite vectors and super operators to bipartite operators that Φ preserves the Frobenius norm if and only if there exist families of operators {A[i]}, {B[i]} such that Σ[i] A[i] ⊗ B[i] is a unitary matrix and It is clear that any map of the form described by Theorem 1 above can be written in this form, but there are also many other maps of this type that are not of the form described by Theorem 1. In the next section we will see that the Frobenius norm is essentially the only unitarily-invariant complex matrix norm containing isometries that are not of the form described by Theorem 1. Isometries of Other Unitarily-Invariant Norms One way of thinking about Theorem 1 is as providing a canonical form for any map Φ that preserves all unitarily-invariant norms. However, in many cases it is enough that Φ preserves a single unitarily-invariant norm for it to be of that form. For example, it was shown by Schur in 1925 [4] that if Φ preserves the operator norm then it must be of the form described by Theorem 1. The same result was proved for the trace norm by Russo in 1969 [5]. Li and Tsing extended the same result to the remaining Schatten p-norms, Ky Fan norms, and (p,k)-norms in 1988 [6]. In fact, the following result, which completely characterizes isometries of all unitarily-invariant complex matrix norms other than the Frobenius norm, was obtained in [7]: Theorem 2. Let Φ : M[n] → M[n] be a linear map. Then Φ preserves a given unitarily-invariant norm that is not a multiple of the Frobenius norm if and only if there exist unitary matrices U, V ∈ M[n] such that 1. C.-K. Li and S. Pierce, Linear preserver problems. The American Mathematical Monthly 108, 591–605 (2001). 2. C.-K. Li, Some aspects of the theory of norms. Linear Algebra and its Applications 212–213, 71–100 (1994). 3. J. Dieudonne, Sur une generalisation du groupe orthogonal a quatre variables. Arch. Math. 1, 282–287 (1949). 4. I. Schur, Einige bemerkungen zur determinanten theorie. Sitzungsber. Preuss. Akad. Wiss. Berlin 25, 454–463 (1925). 5. B. Russo, Trace preserving mappings of matrix algebra. Duke Math. J. 36, 297–300 (1969). 6. C.-K. Li and N.-K. Tsing, Some isometries of rectangular complex matrices. Linear and Multilinear Algebra 23, 47–53 (1988). 7. C.-K. Li and N.-K. Tsing, Linear operators preserving unitarily invariant norms of matrices. Linear and Multilinear Algebra 26, 119–132 (1990). An Introduction to Linear Preserver Problems August 5th, 2010 The theory of linear preserver problems deals with characterizing linear (complex) matrix-valued maps that preserve certain properties of the matrices they act on. For example, some of the most famous linear preserver problems ask what a map must look like if it preserves invertibility or the determinant of matrices. Today I will focus on introducing some of the basic linear preserver problems that got the field off the ground – in the near future I will explore linear preserver problems dealing with various families of norms and linear preserver problems that are actively used today in quantum information theory. In the meantime, the interested reader can find a more thorough introduction to common linear preserver problems in [1,2]. Suppose Φ : M[n] → M[n] (where M[n] is the set of n×n complex matrices) is a linear map. It is well-known that any such map can be written in the form where {A[i]}, {B[i]} ⊂ M[n] are families of matrices (sometimes referred to as the left and right generalized Choi-Kraus operators of Φ (phew!)). But what if we make the additional restrictions that Φ is an invertible map and Φ(X) is nonsingular whenever X ∈ M[n] is nonsingular? The problem of characterizing maps of this type (which are sometimes called invertibility-preserving maps) is one of the first linear preserver problems that was solved, and it turns out that if Φ is invertibility-preserving then either Φ or T ○ Φ (where T represents the matrix transpose map) can be written with just a single pair of Choi-Kraus operators: Theorem 1. [3] Let Φ : M[n] → M[n] be an invertible linear map. Then Φ(X) is nonsingular whenever X ∈ M[n] is nonsingular if and only if there exist M, N ∈ M[n] with det(MN) ≠ 0 such that In addition to being interesting in its own right, Theorem 1 serves as a starting point that allows for the simple derivation of several related results. Determinant-Preserving Maps For example, suppose Φ is a linear map such that det(Φ(X)) = det(X) for all X ∈ M[n]. We will now find the form that maps of this type (called determinant-preserving maps) have using Theorem 1. In order to use Theorem 1 though, we must first show that Φ is invertible. We prove that Φ is invertible by contradiction. Suppose there exists X ≠ 0 such that Φ(X) = 0. Then because Φ preserves determinants, it must be the case that X is singular. Then there exists a singular Y ∈ M[n] such that X + Y is nonsingular. It follows that 0 ≠ det(X + Y) = det(Φ(X + Y)) = det(0 + Φ(Y)) = det(Y) = 0, a contradiction. Thus it must be the case that X = 0 and so Φ is Furthermore, any map that preserves determinants must preserve the set of nonsingular matrices because X is nonsingular if and only if det(X) ≠ 0. It follows from Theorem 1 that for any determinant-preserving map Φ there must exist M, N ∈ M[n] with det(MN) ≠ 0 such that either Φ(X) = MXN or Φ(X) = MX^TN. However, in this case we have det(X) = det(Φ(X)) = det(MXN) = det(MN)det(X) for all X ∈ M[n], so det(MN) = 1. Conversely, it is not difficult (an exercise left to the interested reader) to show that any map of this form with det(MN) = 1 must be determinant-preserving. What we have proved is the following result, originally due to Frobenius [4]: Theorem 2. Let Φ : M[n] → M[n] be a linear map. Then det(Φ(X)) = det(X) for all X ∈ M[n] if and only if there exist M, N ∈ M[n] with det(MN) = 1 such that Spectrum-Preserving Maps The final linear preserver problem that we will consider right now is the problem of characterizing linear maps Φ such that the eigenvalues (counting multiplicities) of Φ(X) are the same as the eigenvalues of X for all X ∈ M[n] (such maps are sometimes called spectrum-preserving maps). Certainly any map that is spectrum-preserving must also be determinant-preserving (since the determinant of a matrix is just the product of its eigenvalues), so by Theorem 2 there exist M, N ∈ M[n] with det(MN) = 1 such that either Φ(X) = MXN or Φ(X) = MX^TN. Now note that any map that preserves eigenvalues must also preserve trace (since the trace is just the sum of the matrix’s eigenvalues) and so we have Tr(X) = Tr(Φ(X)) = Tr(MXN) = Tr(NMX) for all X ∈ M[n]. This implies that Tr((I – NM)X) = 0 for all X ∈ M[n], so we have NM = I (i.e., M = N^-1). Conversely, it is simple (another exercise left for the interested reader) to show that any map of this form with M = N^-1 must be spectrum-preserving. What we have proved is the following characterization of maps that preserve eigenvalues: Theorem 3. Let Φ : M[n] → M[n] be a linear map. Then Φ is spectrum-preserving if and only if det(Φ(X)) = det(X) and Tr(Φ(X)) = Tr(X) for all X ∈ M[n] if and only if there exists a nonsingular N ∈ M [n] such that 1. C. K. Li, S. Pierce, Linear preserver problems. The American Mathematical Monthly 108, 591–605 (2001). 2. C. K. Li, N. K. Tsing, Linear preserver problems: A brief introduction and some special techniques. Linear Algebra and its Applications 162–164, 217–235 (1992). 3. J. Dieudonne, Sur une generalisation du groupe orthogonal a quatre variables. Arch. Math. 1, 282–287 (1949). 4. G. Frobenius, Uber die Darstellung der endlichen Gruppen durch Linear Substitutionen. Sitzungsber Deutsch. Akad. Wiss. Berlin 994–1015 (1897).
{"url":"http://www.njohnston.ca/2010/08/","timestamp":"2014-04-21T09:36:25Z","content_type":null,"content_length":"37518","record_id":"<urn:uuid:2f55ea62-a3bb-4ccc-a427-eb2ef353d1c3>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
Stochastic Population Systems - Don Dawson Historically, the modelling of biological populations has been an important stimulus for the development of stochastic processes. The revolutionary changes in the biological sciences over the past 50 years have created many new challenges and open problems. At the same time probabilists have developed new classes of stochastic processes such as interacting particle systems and measure-valued processes and made advances in stochastic analysis that make possible the modelling and analysis of populations having complex structures and dynamics. This course will focus on these developments. In particular stochastic processes that model populations distributed in space as well as their genealogies and interactions will be considered. This will include branching particle systems, interacting Wright-Fisher diffusions, Fleming-Viot processes and superprocesses. Basic methodologies including martingale problems, diffusion approximations, dual representations, coupling methods, random measures and particle representations will be involved. A principal objective is to describe the dynamics and structure of populations in large and small space and time scales using dual processes asymptotics, mean-feld methods and multiscale analysis. Some recent developments based on the use of these methods and models to approach some challenging problems in evolutionary biology, genetics, ecology and epidemiology will be described. Finally, we will discuss some open problems in stochastic population processes and their applications to modelling biological populations. Statistical Mechanics and the Renormalisation Group - David Brydges Course Outline * Some canonical models in equilibrium statistical mechanics and connections between them o ideal gas = Poisson field o lattice Gaussian field o hard core gas o Ising model o mean field models o self-avoiding walk and random walk * Gibbs measures, correlations o program to classify scaling limits o relation to CLT and the Newman-Wright theorem * Central role of the lattice Gaussian field o graphical expansions o Hermite polynomials * Generalisations of Gaussian field o Grassmann variables versus differential forms o supersymmetry o self-avoiding walk as a Gaussian integral o matrix tree theorems * Symmetry breaking and phase transitions o the basic phenomenon at the lattice Gaussian field level o proof of symmetry breaking by infra-red bounds o role of the transfer matrix and Osterwalder-Schrader positivity * Hierarchical lattice o Renormalisation Group (RG) for models on the hierarchical lattice o Relevant, Irrelevant interactions o critical models and tuning the initial mass o Why four dimensions is special * RG for models on the Euclidean lattice o space of interactions defined in analogy to hierarchical case o theorems on local existence of RG flow o global existence for critical models o When scaling limits are Gaussian References (incomplete) * Supersymmetry/differential forms: o Differential Forms with Applications to the Physical Sciences, Harley Flanders o Advanced Calculus: A Differential Forms Approach, Harold M. Edwards
{"url":"http://www.math.ubc.ca/~db5d/SummerSchool09/courses.html","timestamp":"2014-04-21T02:00:50Z","content_type":null,"content_length":"12642","record_id":"<urn:uuid:a2086979-b96a-460a-86e8-3f5b56ae1180>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
Ondelettes et Opérateurs I: Ondelettes”, Hermann Éditeurs Results 1 - 10 of 22 - Biometrika , 1994 "... With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline, or variable bandwidth kernel, to the unknown function. Estimation with the aid of an oracle o ers dramatic ad ..." Cited by 838 (4 self) Add to MetaCart With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline, or variable bandwidth kernel, to the unknown function. Estimation with the aid of an oracle o ers dramatic advantages over traditional linear estimation by nonadaptive kernels � however, it is a priori unclear whether such performance can be obtained by a procedure relying on the data alone. We describe a new principle for spatially-adaptive estimation: selective wavelet reconstruction. Weshowthatvariableknot spline ts and piecewise-polynomial ts, when equipped with an oracle to select the knots, are not dramatically more powerful than selective wavelet reconstruction with an oracle. We develop a practical spatially adaptive method, RiskShrink, which works by shrinkage of empirical wavelet coe cients. RiskShrink mimics the performance of an oracle for selective wavelet reconstruction as well as it is possible to do so. A new inequality inmultivariate normal decision theory which wecallthe oracle inequality shows that attained performance di ers from ideal performance by at most a factor 2logn, where n is the sample size. Moreover no estimator can give a better guarantee than this. Within the class of spatially adaptive procedures, RiskShrink is essentially optimal. Relying only on the data, it comes within a factor log 2 n of the performance of piecewise polynomial and variable-knot spline methods equipped with an oracle. In contrast, it is unknown how or if piecewise polynomial methods could be made to function this well when denied access to an oracle and forced to rely on data alone. , 1992 "... We describe the Wavelet-Vaguelette Decomposition (WVD) of a linear inverse problem. It is a substitute for the singular value decomposition (SVD) of an inverse problem, and it exists for a class of special inverse problems of homogeneous type { such asnumerical di erentiation, inversion of Abel-type ..." Cited by 182 (12 self) Add to MetaCart We describe the Wavelet-Vaguelette Decomposition (WVD) of a linear inverse problem. It is a substitute for the singular value decomposition (SVD) of an inverse problem, and it exists for a class of special inverse problems of homogeneous type { such asnumerical di erentiation, inversion of Abel-type transforms, certain convolution transforms, and the Radon Transform. We propose to solve ill-posed linear inverse problems by nonlinearly \shrinking" the WVD coe cients of the noisy, indirect data. Our approach o ers signi cant advantages over traditional SVD inversion in the case of recovering spatially inhomogeneous objects. We suppose that observations are contaminated by white noise and that the object is an unknown element of a Besov space. We prove that nonlinear WVD shrinkage can be tuned to attain the minimax rate of convergence, for L 2 loss, over the entire Besov scale. The important case of Besov spaces Bp;q, p <2, which model spatial inhomogeneity, is included. In comparison, linear procedures { SVD included { cannot attain optimal rates of convergence over such classes in the case p<2. For example, our methods achieve faster rates of convergence, for objects known to lie in the Bump Algebra or in Bounded Variation, than any linear procedure. , 2001 "... We discuss wavelet frames constructed via multiresolution analysis (MRA), with emphasis on tight wavelet frames. In particular, we establish general principles and specific algorithms for constructing framelets and tight framelets, and we show how they can be used for systematic constructions of spl ..." Cited by 129 (50 self) Add to MetaCart We discuss wavelet frames constructed via multiresolution analysis (MRA), with emphasis on tight wavelet frames. In particular, we establish general principles and specific algorithms for constructing framelets and tight framelets, and we show how they can be used for systematic constructions of spline, pseudo-spline tight frames and symmetric biframes with short supports and high approximation orders. Several explicit examples are discussed. The connection of these frames with multiresolution analysis guarantees the existence of fast implementation algorithms, which we discuss briefly as well. , 1992 "... : A new approach for the construction of wavelets and prewavelets on IR d from multiresolution is presented. The method uses only properties of shift-invariant spaces and orthogonal projectors from L 2 (IR d ) onto these spaces, and requires neither decay nor stability of the scaling function. F ..." Cited by 78 (11 self) Add to MetaCart : A new approach for the construction of wavelets and prewavelets on IR d from multiresolution is presented. The method uses only properties of shift-invariant spaces and orthogonal projectors from L 2 (IR d ) onto these spaces, and requires neither decay nor stability of the scaling function. Furthermore, this approach allows a simple derivation of previous, as well as new, constructions of wavelets, and leads to a complete resolution of questions concerning the nature of the intersection and the union of a scale of spaces to be used in a multiresolution. AMS (MOS) Subject Classifications: primary: 41A63, 46C99; secondary: 41A30, 41A15, 42B99, 46E20. Key Words and phrases: wavelets, multiresolution, shift-invariant spaces, box splines. Authors' affiliation and address: 1 Center for Mathematical Sciences University of Wisconsin-Madison 610 Walnut St. Madison WI 53705 and 2 Department of Mathematics University of South Carolina Columbia SC 29208 This work was carried out while t... - Proc. Edinburgh Math. Soc , 1994 "... Multiresolution is investigated on the basis of shift-invariant spaces. Given a finitely generated shift-invariant subspace S of L2(IR d), let Sk be the 2 k-dilate of S (k ∈ Z). A necessary and sufficient condition is given for the sequence {Sk}k ∈ Z to form a multiresolu-tion of L2(IR d). A general ..." Cited by 48 (24 self) Add to MetaCart Multiresolution is investigated on the basis of shift-invariant spaces. Given a finitely generated shift-invariant subspace S of L2(IR d), let Sk be the 2 k-dilate of S (k ∈ Z). A necessary and sufficient condition is given for the sequence {Sk}k ∈ Z to form a multiresolu-tion of L2(IR d). A general construction of orthogonal wavelets is given, but such wavelets might not have certain desirable properties. With the aid of the general theory of vector fields on spheres, it is demonstrated that the intrinsic properties of the scaling function must be used in constructing orthogonal wavelets with a certain decay rate. When the scaling function is skew-symmetric about some point, orthogonal wavelets and prewavelets are constructed in such a way that they possess certain attractive properties. Several examples are provided to illustrate the general theory. , 1998 "... Standard wavelet shrinkage procedures for nonparametric regression are restricted to equispaced samples. There, data are transformed into empirical wavelet coefficients and threshold rules are applied to the coefficients. The estimators are obtained via the inverse transform of the denoised wavelet ..." Cited by 39 (3 self) Add to MetaCart Standard wavelet shrinkage procedures for nonparametric regression are restricted to equispaced samples. There, data are transformed into empirical wavelet coefficients and threshold rules are applied to the coefficients. The estimators are obtained via the inverse transform of the denoised wavelet coefficients. In many applications, however, the samples are nonequispaced. It can be shown that these procedures would produce suboptimal estimators if they were applied directly to nonequispaced samples. We propose a wavelet shrinkage procedure for nonequispaced samples. We show that the estimate is adaptive and near optimal. For global estimation, the estimate is within a logarithmic factor of the minimax risk over a wide range of piecewise Hölder classes, indeed with a number of discontinuities that grows polynomially fast with the sample size. For estimating a target function at a point, the estimate is optimally adaptive to unknown degree of smoothness within a constant. In addition, the estimate enjoys a smoothness property: if the target function is the zero function, then with probability tending to 1 the estimate is also the zero function. (1.1) 1. Introduction. - J. Functional Anal , 1996 "... Discrete affine systems are obtained by applying dilations to a given shift-invariant system. The complicated structure of the affine system is due, first and foremost, to the fact that it is not invariant under shifts. Affine frames carry the additional difficulty that they are "global" in nature: ..." Cited by 25 (5 self) Add to MetaCart Discrete affine systems are obtained by applying dilations to a given shift-invariant system. The complicated structure of the affine system is due, first and foremost, to the fact that it is not invariant under shifts. Affine frames carry the additional difficulty that they are "global" in nature: it is the entire interaction between the various dilation levels that determines whether the system is a frame, and not the behaviour of the system within one dilation level. We completely unravel the structure of the affine system with the aid of two new notions: the affine product, and a quasi-affine system. This leads to a characterization of affine frames; the induced characterization of tight affine frames is in terms of exact orthogonality relations that the wavelets should satisfy on the Fourier domain. Several results, such as a general oversampling theorem follow from these characterizations. Most importantly, the affine product can be factored during a multiresolution analysis con... , 1997 "... This paper deals with constructions of compactly supported biorthogonal wavelets from a pair of dual refinable functions in L 2 (R s ). In particular, an algorithmic method to construct wavelet systems and the corresponding dual systems from a given pair of dual refinable functions is given. Keyw ..." Cited by 24 (8 self) Add to MetaCart This paper deals with constructions of compactly supported biorthogonal wavelets from a pair of dual refinable functions in L 2 (R s ). In particular, an algorithmic method to construct wavelet systems and the corresponding dual systems from a given pair of dual refinable functions is given. Keywords: multivariate biorthogonal wavelets, multivariate wavelets, box splines, matrix extension 1. INTRODUCTION This paper deals with constructions of compactly supported biorthogonal wavelets, whose dilations and shifts form a Riesz basis for L 2 (R s ) and the dual basis is an affine system generated by compactly supported functions with required order of the smoothness, from a given pair of dual refinable functions. Constructions of compactly supported refinable dual pairs can be found in Ref. 6 and Ref. 3. With a pair of compactly supported refinable functions constructed, the key step to construct biorthogonal wavelets from a given pair of multiresolutions can be reduced to the - CMS-TSR # 96–02, University of Wisconsin , 1995 "... Discrete affine systems are obtained by applying dilations to a given shift-invariant system. The complicated structure of the affine system is due, first and foremost, to the fact that it is not invariant under shifts. Affine frames carry the additional difficulty that they are “global ” in nature: ..." Cited by 17 (13 self) Add to MetaCart Discrete affine systems are obtained by applying dilations to a given shift-invariant system. The complicated structure of the affine system is due, first and foremost, to the fact that it is not invariant under shifts. Affine frames carry the additional difficulty that they are “global ” in nature: it is the entire interaction between the various dilation levels that determines whether the system is a frame, and not the behaviour of the system within one dilation level. We completely unravel the structure of the affine system with the aid of two new notions: the affine product, and a quasi-affine system. This leads to a characterization of affine frames; the induced characterization of tight affine frames is in terms of exact orthogonality relations that the wavelets should satisfy on the Fourier domain. Several results, such as a general oversampling theorem follow from these characterizations. Most importantly, the affine product can be factored during a multiresolution analysis construction, and this leads to a complete characterization of all tight frames that can be constructed by such methods. Moreover, this characterization suggests very simple sufficient conditions for constructing tight frames from multiresolution. Of particular importance are the facts that the underlying scaling function does not need to satisfy any a priori conditions, and that the freedom offered by redundancy can be fully exploited in these constructions. - Statist. Probab. Lett "... We show that for nonparametric regression if the samples have random uniform design, the wavelet method with universal thresholding can be applied directly to the samples as if they were equispaced. The resulting estimator achieves within a logarithmic factor from the minimax rate of convergence ove ..." Cited by 14 (0 self) Add to MetaCart We show that for nonparametric regression if the samples have random uniform design, the wavelet method with universal thresholding can be applied directly to the samples as if they were equispaced. The resulting estimator achieves within a logarithmic factor from the minimax rate of convergence over a family of Holder classes. Simulation result is also discussed. Keywords: wavelets, nonparametric regression, minimax, adaptivity, Holder class. AMS 1991 Subject Classification: Primary 62G07, Secondary 62G20. 1 Introduction Wavelet shrinkage methods have been very successful in nonparametric regression. But so far most of the wavelet regression methods have been focused on equispaced samples. There, data are transformed into empirical wavelet coefficients and threshold rules are applied to the coefficients. The estimators are obtained via the inverse transform of the denoised wavelet coefficients. The most widely used wavelet shrinkage method for equispaced samples is the Donoho-Joh...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1512817","timestamp":"2014-04-18T22:42:47Z","content_type":null,"content_length":"41098","record_id":"<urn:uuid:5586331d-2c3c-409f-a616-912315a516e3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
synthesizer-0.0.3: Audio signal processing coded in Haskell Source code Contents Index Portability requires multi-parameter type classes Synthesizer.Plain.Oscillator Stability provisional Maintainer synthesizer@henning-thielemann.de Tone generators Frequencies are always specified in ratios of the sample rate, e.g. the frequency 0.01 for the sample rate 44100 Hz means a physical frequency of 441 Hz. Oscillators with arbitrary but constant waveforms static :: C a => T a b -> Phase a -> a -> T b Source oscillator with constant frequency freqMod :: C a => T a b -> Phase a -> T a -> T b Source oscillator with modulated frequency phaseMod :: C a => T a b -> a -> T (Phase a) -> T b Source oscillator with modulated phase shapeMod :: C a => (c -> T a b) -> Phase a -> a -> T c -> T b Source oscillator with modulated shape phaseFreqMod :: C a => T a b -> T (Phase a) -> T a -> T b Source oscillator with both phase and frequency modulation shapeFreqMod :: C a => (c -> T a b) -> Phase a -> T c -> T a -> T b Source oscillator with both shape and frequency modulation staticSample :: C a => T a b -> [b] -> Phase a -> a -> T b Source oscillator with a sampled waveform with constant frequency This is essentially an interpolation with cyclic padding. freqModSample :: C a => T a b -> [b] -> Phase a -> T a -> T b Source oscillator with a sampled waveform with modulated frequency Should behave homogenously for different types of interpolation. shapeFreqModSample :: (C c, C b) => T c (T b a) -> [T b a] -> c -> Phase b -> T c -> T b -> T a Source Shape control is a list of relative changes, each of which must be non-negative in order to allow lazy processing. '1' advances by one wave. Frequency control can be negative. If you want to use sampled waveforms as well then use sample in the list of waveforms. With sampled waves this function is identical to HunkTranspose in Assampler. Example: interpolate different versions of Wave.oddCosine and Wave.oddTriangle. You could also chop a tone into single waves and use the waves as input for this function but you certainly want to use sampledTone or shapeFreqModFromSampledTone instead, because in the wave information for shapeFreqModSample shape and phase are strictly separated. shapeFreqModFromSampledTone :: C t => T t y -> T t y -> t -> T y -> t -> t -> T t -> T t -> T y Source Time stretching and frequency modulation of a pure tone. We consider a tone as the result of a shape modulated oscillator, and virtually reconstruct the waveform function (a function of time and phase) by interpolation and resample it. This way we can alter frequency and time progress of the tone independently. This function is identical to using shapeFreqMod with a wave function constructed by sampledTone but it consumes the sampled source tone lazily and thus allows only relative shape control with non-negative control steps. The function is similar to shapeFreqModSample but respects that in a sampled tone, phase and shape control advance synchronously. Actually we could re-use shapeFreqModSample with modified phase values. But we would have to cope with negative shape control jumps, and waves would be padded locally cyclically. The latter one is not wanted since we want padding according to the adjacencies in the source tone. Although the shape difference values must be non-negative I hesitate to give them the type Number.NonNegative.T t because then you cannot call this function with other types of non-negative numbers like Number.NonNegativeChunky.T. The prototype tone signal is reproduced if freqs == repeat (1/period) and shapes == repeat 1. Oscillators with specific waveforms staticSine :: (C a, C a) => a -> a -> T a Source sine oscillator with static frequency freqModSine :: (C a, C a) => a -> T a -> T a Source sine oscillator with modulated frequency phaseModSine :: (C a, C a) => a -> T a -> T a Source sine oscillator with modulated phase, useful for FM synthesis staticSaw :: C a => a -> a -> T a Source saw tooth oscillator with modulated frequency freqModSaw :: C a => a -> T a -> T a Source saw tooth oscillator with modulated frequency Produced by Haddock version 2.3.0
{"url":"http://hackage.haskell.org/package/synthesizer-0.0.3/docs/Synthesizer-Plain-Oscillator.html","timestamp":"2014-04-18T06:28:12Z","content_type":null,"content_length":"25728","record_id":"<urn:uuid:9e88385c-d7d7-419d-b827-c146441011f8>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Previous question papers for M.Sc Mathematics entrance exam of Anna University. 1. 31st March 2012 09:39 PM #1 Previous question papers for M.Sc Mathematics entrance exam of Anna University. Pls tell where i can get previous year question papers for msc maths entrance exam of anna university 3. Re: Previous question papers for M.Sc Mathematics entrance exam of Anna University. Dear Sir, Please provoide MSC maths (central University) entrance exam previous papers 4. 29th June 2012 06:53 PM #3 Re: Previous question papers for M.Sc Mathematics entrance exam of Anna University. Yes off course, you can get the question papers of M.Sc(Masters of Science) Mathematics entrance examination of Anna university or any other university on the various concerned websites on the internet.But you are required to search over the internet for the same. Actually,there are many such websites on the internet that can provide you all these kind of papers, these are websites where students share the question papers that they have and are able to download the papers that they want. Some examples can be - entranceexam.net, educationcareer.in, etc.
{"url":"http://educationcareer.in/previous-question-papers-m-sc-mathematics-entrance-exam-anna-university-111650.html","timestamp":"2014-04-16T07:20:41Z","content_type":null,"content_length":"38574","record_id":"<urn:uuid:01030f9a-1433-47fd-a509-5a11abbcca76>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Characteristics of Centrifugal Pumps Working in Direct or Reverse Mode: Focus on the Unsteady Radial Thrust International Journal of Rotating Machinery Volume 2013 (2013), Article ID 279049, 11 pages Research Article Characteristics of Centrifugal Pumps Working in Direct or Reverse Mode: Focus on the Unsteady Radial Thrust Cetim, 74 route de la Jonelière, 44326 Nantes, France Received 25 February 2013; Revised 20 June 2013; Accepted 20 June 2013 Academic Editor: J-C. Han Copyright © 2013 Anthony Couzinet et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Experimental and numerical investigations have been carried out to study the behaviour of a centrifugal pump operating in direct mode or turbine mode. First of all, the complete characteristics (head, power, and efficiency) were measured experimentally using a specific test loop. The numerical data obtained from a CFD study performed with the ANSYS CFX software and based on steady state and unsteady approaches were compared to the experimental results. The representation in the 4 operating quadrants shows the various operating zones where the head is always positive. Then, the unsteady radial forces were analysed from transient computations. The results obtained for the pump operation are consistent with the literature and extended to the nonnormal operating conditions, namely, for very high flowrate values. The evolution of the radial load during turbine operation is presented for various partial flow operating points. 1. Introduction The hydraulic performances of centrifugal pumps were widely studied experimentally over the last century, for normal operating modes close to the nominal operating point. For specific states of flow, the unsteady behaviour of the flow due to the rotor/stator interactions formed the subject of many studies. The experimental study of these unsteady phenomena requires complex and/or expensive experimental methods [1–3] such as the use of dynamic pressure sensors or strain gauges installed on the pump shaft, and, as a result, the numerical approach becomes a real alternative. As a matter of fact, CFD computations have been commonly used for approximately twenty years to predict the hydraulic performance of rotating machines. First of all, CFD computations made it possible to study and improve the design of blades. For that purpose, the use of periodical conditions became a means to reduce the size of the computational domain as well as the CPU time. Furthermore, it was possible to use a steady state approach to study operating points located close to the best efficiency point. Then, to study the rotor/stator interactions, the complete geometry of the pump needs to be integrated into the numerical model; it is also necessary to simulate these flow configurations in a transient manner [1, 4] in order to correctly predict all potential hydrodynamic instabilities. Nevertheless, although many projects focus on rotor/stator interactions, very few of them propose comparisons between numerical and experimental results. The only existing results concern the fluctuating pressure field at impeller outlet [5]. Even if a few studies focused on the abnormal operating conditions of centrifugal pumps, this subject has been much less studied, but it has become again a point of interest over the past 5 years. On the one hand, accident scenarios can be integrated into the test specifications, and therefore the performances of the pump operating in abnormal conditions have to be known. On the other hand, a growing number of small hydroelectric power stations (5 to 100kW [6]) are being developed due to their extremely attractive operating costs. However, the initial investment for the equipment is rather high. This is why the use of standard range centrifugal pumps operating in turbine mode has become a credible alternative to hydraulic turbines since their much lower cost and the wide variety of machines (in terms of operating points and dimensions) make it possible to significantly reduce equipment costs. Consequently, centrifugal pumps as turbines (“PAT”) have become the main subject of an increasing number of studies, and the first work started approximately twenty years ago. The main objective of these studies concerns the analytical development of correlations whose purpose is to assess the hydraulic performance of the machine running in turbine mode, based on its pump operation characteristics. The use of CFD computations is rather new [7–11] for simulating the flow generated in the pump and for optimising the design of PATs. However, numerical results become all the more credible when they are accompanied by experimental results. During this study, suitable experimental means and a sophisticated numerical model made it possible to study the complete characteristics of a centrifugal pump with a specific speed equal to 70. It is possible to represent these characteristics on a speed versus flow diagram which materialises a representation in 4 quadrants (Knapp diagram [12]) as shown in Figure 1. These four quadrants correspond to four different operating modes. As a result, operation in pump mode is located in quadrant , operation in reverse pump mode is located in quadrant , operation in turbine mode is located in quadrant , and operation in reverse turbine mode is located in quadrant . We will particularly focus on the operating zones where is positive, which means the points located below the asymptotes in Figure 1. Experimental data have been compared to the numerical results obtained from steady state and transient simulations. These results are very rich in terms of information: on the one hand, the overall characteristics in the 4 operating quadrants can be used to predict the behaviour of the machine during transient operations; on the other hand, local fluctuations in the flow can be predicted using the transient numerical simulations, as long as the numerical model used (in particular the turbulence model) is able to reproduce the development of the turbulence structures induced by the flow configurations. The choice of turbulence model will then be discussed through an analysis of the turbulence structures and the representative turbulence scales. Furthermore, based on unsteady computations, the fluctuations of the radial forces which act on the impeller have been studied in normal operating mode and in turbine operating mode. In pump operation mode, the “radial force versus flow” curve has a conventional “V” shape. This curve is extended to the abnormal operating points (viz., when the head and torque become negative). This force has a privileged direction which is dependent on the operating point, while the fluctuations found for each pump operating point remain moderate. In turbine operation, the radial force is not stable, and its direction changes periodically over time depending on the operating point. This effect is demonstrated with the transient numerical simulations performed. 2. Experimental Approach 2.1. Description of the Test Loop and Design of the Centrifugal Pump The centrifugal pump used in this study is comprised of a volute and a closed impeller with inlet and outlet diameters equal to 300mm and 250mm, respectively. The characteristics at the best efficiency point are as follows: rpm, m^3/h, and m. The specific speed of the pump is (3,595 in US units). The pump was tested in a test loop dedicated to the measurements of the characteristics in the 4 quadrants. Following, the test rig is described. (i)It is possible to set accurately the operating points thanks to supplying pumps which can ensure 5000m^3/h.(ii)A 90kW direct current motor powered by a variable frequency controller.(iii)Motorized discharge valves of the pumping station for the control of the flow rate.(iv)An electromagnetic flow meter (accuracy 0.5%) set as far as possible from the pump exit.(v)Pressure transmitters (accuracy 0.3%) located at the inlet and outlet sections and giving the average tip pressure on four pressure tapping.(vi)A torque meter set between the pump and the motor. 3. Results We mainly focused on quadrants and which correspond to operation in pump mode or turbine mode for normal operating conditions. The overall characteristics of the centrifugal pump in these two operating modes are illustrated in Figures 2 and 3. The head and the power are represented by the dimensionless variables and versus the dimensionless flow rate . The curves are consistent with the experimental results presented by Derakhshan et al. [9, 13, 14] for volute centrifugal pumps with a specific speed varying from 15 to 60. All these results are compared in the same figures. The positive flowrates correspond to the operating points in pump mode whereas the negative flowrates correspond to the operating points in turbine mode. This convention was adopted for all representations. The efficiency curves are plotted in the same way in Figure 4. Whatever the specific speed considered, the efficiency values obtained in pump or turbine operation are similar. In fact, the PAT’s efficiencies are quite good because the hydraulic design of this centrifugal pump is similar to the Francis turbines and the head losses for pump or turbine mode are similar. Based on their test campaign, Derakhshan and Nourbakhsh [14] proposed several correlations which depended on the specific speed. The purpose of these correlations was to assess the hydraulic performance of the pump operating in turbine mode, based on the characteristics at the pump best efficiency point. So, the dimensionless BEP characteristics of the tested PAT are defined as following: where , , , and are the head, flow rate, power, and efficiency. These correlations are illustrated in Figure 5 and compared with Derakshan’s experimental results. The characteristics predicted from the correlations proposed for the centrifugal pump of this study are illustrated in this figure. This prediction method is based on the experimental results obtained on four centrifugal pumps with similar geometry in which only the hydraulics of the impeller was modified to change the specific speed from 14.6 to 55.6; it seems to be unsuitable for centrifugal pumps with high specific speeds. 4. Numerical Approach All numerical simulations were carried out with ANSYS CFX 11, and the computations were performed on workstations with the following technical characteristics: HP xw8600—8 Xeon processors (3.16GHz), RAM 16Gb, Windows XP. The average number of processors is equal to 10. 4.1. Computational Domain The numerical model of the centrifugal pump is comprised of the volute, a six-blade impeller with balancing holes, and a leakage gap at the back of the impeller. So as to obtain the best possible comparison with the test campaign, all the geometrical details of the pump are taken into account as described in Figure 6. However, the surfaces of the various parts of the computational domain need to be cleaned to ensure optimum meshing quality. The computational domain is broken down into three parts: the inlet duct, the rotating domain, and the volute. The positions of the interfaces between the rotating part and the stationary parts are defined in Figure 6. Each colour represents one of the three parts of the computational domain. The inlet and outlet of the domain are positioned, respectively, at 2D upstream and 3D downstream. 4.2. Hexahedral Structured Meshing The meshing for the computational domain was constructed from a structured multiblock approach based on hexahedral elements. In comparison with nonstructured or hybrid approaches, this type of computation grid gives better meshing quality in the impeller while it simultaneously keeps a reasonable number of cells. Furthermore, considering the control-volume finite-element method (CVFEM) formulation implemented in the ANSYS CFX computation code, the use of structured grids remains the best guarantee to obtain accurate numerical solutions. The meshing is generated with the ICEM CFD software. The inlet duct, the impeller, and the volute are fully meshed, and the size of the generated grids reaches 450,000, 4,900,000, and 2,300,000 elements, respectively. A view of the meshing of the impeller is illustrated in Figure 7. The meshing is refined in the vicinity of the walls in order to correctly obtain the peripheral velocity gradients and the friction effects: the average value of is equal to 50; specific cell thickness progression laws in the meridian, hub-to-shroud, and blade-to-blade directions are applied to ensure good grid quality: near wall orthogonality is enforced, and the minimum angle observed in the domain corresponds to 23° (less of 1% of the cells are smaller than 30°). 4.3. Operating and Boundary Conditions The physical characteristics of the fluid correspond to the characteristics of water: density is set to 1,000kg/m^3, and dynamic viscosity is set to 0.001kg·m^−1·s^−1. An even flowrate is imposed as an input for the computational field, while a constant static pressure is set for the output of the computational domain. We do not use any condition of periodicity to simulate the flow in the pump, as the flow in the volute is completely simulated. For the steady state simulations, the interface between the stationary parts and the rotating part requires the use of a model which allows information to be transferred between both parts while integrating the Coriolis effects: we tested “stage” type approaches (azimuthal average of the velocity field) and “frozen rotor” type approaches (fixed rotor position). The computations performed in transient operation integrate the displacement of the meshing of the rotating part at each time iteration. (The time increment used is thus related to the angular displacement of the rotating part.) In actual practice, an angular increment of 1° to 2° is enough to accurately simulate this type of configuration. The accuracy of the time and space discretisation schemes is of the second order. More precisely, the convective flows are assessed using the Barth and Jespersen method, which is similar to a TVD_MUSCL scheme [3, 15–17]. The turbulence is modelled from a two-equation approach based on the turbulent viscosity concept. The arguments related to the choice of the turbulence model (in particular SST or -eps) are discussed in a previous document [18]. Considering the work previously carried out, only the results obtained in the -eps model are presented in this paper. 5. Overall Characteristics of the Centrifugal Pump 5.1. Definition of the Reduced Variables Based on the definitions proposed in Knapp’s article and based on the laws of similarity of centrifugal pumps, it is possible to determine a set of reduced variables for the flowrate and the speed velocity as a function of the head of the pump: In order to extend these reduced variables for the abnormal operating conditions, the head and torque values can be substituted by their absolute value. Moreover the values at the nominal operating point in pump mode allow this set of reduced variables to be made dimensionless which allows us to write the following relations: All the numerical and experimental results will be represented considering these two variables, by means of the curve . 5.2. Steady State Computations As a first approach, steady state computations were carried out in the 4 operating quadrants in order to simulate the behaviour of the centrifugal pump in all operating conditions, thus making it possible to determine the position of the asymptotes. As regards quadrant , the position of the asymptote is correctly predicted, and the crossing of the head/flowrate curve at the zero head point is located at approximately 1.8. Anyway, the prediction of the asymptote in quadrant , namely, in reverse pump operation, is unrealistic. The head value is still negative, even for very low flowrates. This malfunction is visible in Figure 9, where the grey curve represents the evolution of the reduced variables. 5.3. Unsteady Approach-Transient Computations Although the steady state approach seems to be sufficient to correctly describe the behaviour of the flow in the entire quadrants and , a few transient computations were performed. As a matter of fact, five points were simulated for these operating modes in order to demonstrate the capability of the steady state approach to provide reliable solutions in terms of head, torque, and force. On the contrary, it was demonstrated that the steady state approach gives unrealistic solutions for most of the operating points of quadrants and . This is why a specific effort was implemented to simulate a wide number of operating points for these states of flow, which represented in total 6 and 7 operating points in quadrants and . The choice of these various operating points allowed us to correctly describe the characteristic curve and determine the position of the asymptote in quadrant . All transient computations were performed with a time step corresponding to an impeller angular increment of 2°. This value has been optimised so as to obtain an acceptable compromise between computation time and result accuracy. The gap between the impeller and the volute is rather significant, which does not require high-frequency sampling. However, several impeller rotations (up to eight rotations) may be necessary to correctly acquire instabilities such as “rotating stalls” which occur in certain operating conditions. In the end, the levels of RMS residuals of the time resolution are excellent for all the cases, and they reach a value close to . The time evolutions of the overall characteristics are followed-up for all computational cases. If the head and power fluctuations are not significant for quadrants and , thus substantiating that the steady state is sufficient for these cases, these fluctuations are much more significant for quadrants and . This occurs in particular for the operating points where the flowrate is moderate, as shown in Figure 8, where the time evolutions of head and power are represented for the operating point −0.6/−rpm. The minimum and maximum values of the fluctuations are then used to compute the minimum and maximum values of the corresponding reduced variables. The interval between these two extreme values is then represented on the curve of Figure 9 for each operating point (black circle symbols) in comparison to the results obtained with the steady state approach. As we supposed, the results obtained in transient operation in quadrants and are similar to the steady state results, while the values corresponding to the other two quadrants give more realistic results showing an asymptote in quadrant with very significant fluctuations along this asymptote. Based on the fluctuating values of the reduced variables, an average curve is plotted to describe the numerical behaviour of the curve in the 4 operating quadrants, and it is compared to the experimental results in Figure 10. We can see a very good correlation between the experimental measurements and the numerical results. The maximum difference occurs for the points located in the vicinity of the axis: whether in the quadrant or that is to say on either side of the axis, these operating points correspond to high flowrate values. It is worth reminding that the development of cavitation within the pump is not taken into account in the numerical model. For these operating points, the head drop generated by the appearance of cavitation may explain these differences. However, it seems rather unlikely that the use of Rayliegh-Plesset models to simulate the cavitating flow will improve the prediction of the overall variables because the pressure drop due to the appearance of cavitation is significantly underestimated with this type of model [19, 20]. 6. Radial Thrust and Forces Fluctuations on the Impeller At the nominal operating point, the centrifugal pump impellers are (in practice) correctly balanced, and the radial thrust becomes negligible. Further from the nominal operating point, the flow becomes unsuitable, and local stalls may appear which is likely to generate a radial thrust whose amplitude and direction mainly depend on the operating conditions. This static force has been widely studied by a great number of authors [1, 2, 4, 21]. The “radial thrust versus flowrate curve” has a typical “V” form. This conventional form is obtained numerically: the minimum radial thrust is obtained for the best efficiency point, and it increases for the points with partial flowrate or with excess flow (Figure 13). The dimensionless factor is then defined by the relation below: The components of this factor are illustrated in Figure 11 for the three following operating points: 0.5, , and 2.2. For each operating point, we compared the solutions in steady state and unsteady modes. The force directions are correctly predicted with the steady state approach since the fluctuations of direction are low, but the amplitude may be widely underestimated in particular for underflow operating points. The direction of the radial thrust with respect to the volute varies with the operating point, and it is dependent on the type of machine, as Gülich explains [21]. This direction goes towards the outlet for underflow conditions and in the opposite direction for excess flowrate conditions. Although this result is widely known for excess flowrates up to 1.5 (normal pump operating conditions), the simulated operating point at 2.2 corresponds to an abnormal operating point where the head and torque are negative, which allows us to extent these results (direction and amplitude of the radial force) to abnormal operating points. To select the suitable equipment which will be installed in hydroelectric power stations, it is crucial to predict the hydraulic performance of centrifugal pumps in PAT operation. As we mentioned in the introduction and the first part of this paper, there are correlations which should make it possible to assess the performance in turbine mode, based on pump operation data. However, the prediction of the hydraulic load is also extremely important in order to guarantee the mechanical integrity of the pump in this new operating mode. As a matter of fact, except for turbine pumps, the pumps which operate in turbine mode (pumps as turbine, PAT) are not specifically designed for these operating ranges. This is why we have studied the radial thrust for several operating points in PAT: 0, 0.4, 0.6, 1.2, and 4. In particular, we analysed the load fluctuations which may appear depending on the operating conditions. For underflow operating points, it appears that the fluctuating components of the radial thrust are generated by a low-frequency hydraulic imbalance. Therefore, the numerical representation of this physical phenomenon requires several simulated impeller rotations. For example, 8 impeller rotations were simulated for the operating point −0.6. The temporal evolution of the radial load is illustrated in Figure 12 for the various simulated flowrates. The components of the factor are plotted for each iteration. At the same time, the average amplitude of the radial force is illustrated in Figure 13 as a function of the flowrate. The behaviour of the radial thrust in PAT operation is similar to that obtained in pump operation. The minimum radial force is obtained for the best efficiency point in turbine mode (1.2). The PAT configuration exhibits radial force levels similar to those obtained in pump operation. For the excess flowrate point, the fluctuating part of the radial force is very low in comparison to the average value, and its direction remains constant and opposite to the turbine inlet (Figure 12). For the operating points with underflow and in particular for operating points close to runway, the radial force becomes a rotating force centred on the impeller. Finally, the average amplitude of the radial force increases when the flowrate decreases. The instantaneous pressure field is illustrated in Figure 14 for six different instants, and the corresponding radial thrust is represented by a superimposed vector. The computation code gives access to the radial force in the rotating frame, but it can be recomputed in the stationary frame if the angular position of the impeller is known, which makes it possible to represent this force over time. The amplitude and direction of the radial thrust change with the relative position of the impeller, and this evolution is pseudoperiodic. This physical phenomenon is due to the rotating stalls which occur in a few channels of the impeller and move with the rotation of the impeller. They are induced due to complex interactions [22–24] which appear in underflow conditions and lead to interaction between the pumping and turbining effects. These phenomena also exist for turbine pumps, and they have been studied by several authors [22, 25, 26]. Then, the time spectrum of the radial thrust is calculated. Two main frequencies are then identified in Figure 15. Then is the rotating frequency of the pump, while the low frequency corresponding to the displacement of the rotating stall is equal to . This result is known for hydraulic turbines: these characteristic frequencies are located between and , depending on the geometry of the turbine [ 26–28]. Furthermore, the frequency of passage of one blade at each impeller rotation () which is detected by the head fluctuation spectrum is not clearly identified on this spectrum. 7. Conclusions The performance of a centrifugal pump with a specific speed of 70 in pump and turbine operating modes was measured on a dedicated test loop. These measurements are compared to the results obtained by Derakshan concerning four centrifugal pumps with volute from N[s] 14 to 56. The results obtained show that the correlation proposed by Derakshan seems to be unsuitable for high values of N[s]. The overall characteristics stemming from experimental results and from numerical solutions are very similar. In turbine operation, the comparisons remain good if an unsteady approach is used. The hydraulic load is then studied in order to verify the complete validity of the numerical approach. In pump operation, the conventional “V” shape of the “radial thrust versus flowrate” curve is extended to the abnormal operating points (excessive flowrates such as and ). In turbine operation, the average radial thrust has a similar V shape where the minimum value is obtained for the best efficiency point in turbine mode. Furthermore, for underflow operating points close to the runway point, the radial force is a rotating force whose rotating frequency corresponds to 0.6 times the rotation frequency of the pump. This result agrees with the results from the literature concerning turbine pump operation. The good comparisons with the experimental data show that the URANS modelling is able to solve the large energy scales maintained by a forcing mechanism due to the rotation of the impeller. As regards applications in rotating machinery, the traditional URANS approaches are a credible alternative in comparison to advanced turbulence models which require a much higher CPU time to be : Specific speed [rpm, m^3/s, m] : Head coefficient [–] : Flow coefficient [–] : Power coefficient [–] : Flow rate [m^3/h] : Head [m] : Rotational speed [rpm] : Impeller diameter [m] : Power [kW] : Efficiency [%] : Gravitational acceleration [m/s^2] : density [kg/m^3] : Ratio of head in turbine and pump mode : Ratio of flow in turbine and pump mode : Ratio of power in turbine and pump mode : Ratio of efficiency in turbine and pump mode : Reduced variables for constant head : Radial thrust coefficient [–] : Radial force [N] : Impeller outlet width [m] : Outlet impeller diameter [m]. : Nominal point : Turbine mode. The authors would like to thank the French pump manufacturers and the members of the CETIM working group on the operating conditions of centrifugal pumps in the 4 quadrants who have proposed and supported this work. 1. J. F. Combes, A. Boyer, L. Gros, D. Pierrat, G. Pintrand, and P. Chantrel, “Experimental and numerical investigations of the radial thrust in a centrifugal pump,” in Proceedings of the 12th International Symposium on Transport Phenomena and Dynamics of Rotating Machinery, pp. 1–7, Honolulu, Hawaii, USA, 2008, ISROMAC12-2008-20044. 2. S. Guo and H. Okamoto, “An experimental study on the fluid forces induced by rotor-stator interaction in a centrifugal pump,” International Journal of Rotating Machinery, vol. 9, no. 2, pp. 135–144, 2003. View at Publisher · View at Google Scholar 3. F. R. Menter, “A comparison of some recent eddy-viscosity turbulence models,” Journal of Fluids Engineering, vol. 118, no. 3, pp. 514–519, 1996. View at Scopus 4. M. Asuaje, F. Bakir, S. Kouidri, F. Kenyery, and R. Rey, “Numerical modelization of the flow in centrifugal pump: volute influence in velocity and pressure fields,” International Journal of Rotating Machinery, vol. 2005, no. 3, pp. 244–255, 2005. View at Publisher · View at Google Scholar 5. J. Parrondo-Gayo, J. Fernández-Francos, J. González-Pérez, and L. Fernández-Arango, “An experimental study on the unsteady pressure distribution around the impeller outlet of a centrifugal pump,” in Proceedings of ASME Fluids Engineering Division Summer Meeting, 2000, ASME-FEDSM-00-11302. 6. T. Agarwal, “Review of pump as turbine (PAT) for micro-hydropower,” International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 11, pp. 163–168, 2012. 7. R. Bario, J. Fernandez, J. Parrondo, and E. Blanco, “Performance prediction of a centrifugal pump working in direct and reverse mode using computational fluid dynamics,” in Proceedings of the International Conference on Renewable Energies and Power Quality, Granada, Spain, 2010. 8. S. Rawal and J. T. Kshirsagar, “Numerical simulation on a pump operating in a turbine mode,” in Proceedings of the 23th International Pump Users Symposium, 2007. 9. S. Derakhshan and A. Nourbakhsh, “Theoretical, numerical and experimental investigation of centrifugal pumps in reverse operation,” Experimental Thermal and Fluid Science, vol. 32, no. 8, pp. 1620–1627, 2008. View at Publisher · View at Google Scholar · View at Scopus 10. S. R. Natanasabapathi and J. T. Kshirsagar, “Pump as turbine—an experience with CFX-5.6,” Corporate Research and Eng. Division, Kirloskar Bros. Ltd.; 2004, http://www.ansys.com/staticassets/ANSYS 11. H. Nautiyal, V. Varun, and A. Kumar, “Reverse running pumps analytical, experimental and computational study: a review,” Renewable and Sustainable Energy Reviews, vol. 14, no. 7, pp. 2059–2067, 2010. View at Publisher · View at Google Scholar · View at Scopus 12. R. T. Knapp, “Complete characteristics of centrifugal pumps and their use in prediction of transient behaviour,” Transactions of the American Society of Mechanical Engineers, vol. 59, pp. 683–689, 1937. 13. S. Derakhshan, B. Mohammadi, and A. Nourbakhsh, “Efficiency improvement of centrifugal reverse pumps,” Journal of Fluids Engineering, vol. 131, no. 2, Article ID 021103, 9 pages, 2009. View at Publisher · View at Google Scholar · View at Scopus 14. S. Derakhshan and A. Nourbakhsh, “Experimental study of characteristic curves of centrifugal pumps working as turbines in different specific speeds,” Experimental Thermal and Fluid Science, vol. 32, no. 3, pp. 800–807, 2008. View at Publisher · View at Google Scholar · View at Scopus 15. D. Pierrat, L. Gros, and G. Pintrand, Modélisation des Écoulements Incompressibles, Approche Couplée—déCouplée, Les Ouvrages du Cetim. 16. F. R. Menter, “Zonal two equation k-w turbulence models for aerodynamic flows,” in Proceedings of the 24th Fluid Dynamics Conference, Orlando, Fla, USA, July1993, AIAA paper 93-2906. 17. M. S. Darwish and F. Moukalled, “TVD schemes for unstructured grids,” International Journal of Heat and Mass Transfer, vol. 46, no. 4, pp. 599–611, 2003. View at Publisher · View at Google Scholar · View at Scopus 18. L. Gros, A. Couzinet, D. Pierrat, and L. Landry, “Complete pump characteristics and 4-quadrants diagram investigated by experimental and numerical approaches,” in Proceedings of the ASME Conference, Hamamatsu, Japan, 2011, AJK2011_06067. 19. D. Pierrat, L. Gros, G. Pintrand, B. Le Fur, and Ph. Gyomlai, “Experimental and numerical investigations of leading of leading edge cavitation in a Helico—Centrifugal Pump,” in Proceedings of the 12th International Symposium on Transport Phenomena and Dynamics of Rotating Machinery, Honolulu, Hawaii, USA, February 2008, ISROMAC12-2008-20074. 20. D. Pierrat, L. Gros, A. Couzinet, G. Pintrand, and Ph. Gyomlai, “On the leading edge cavitation in a helico-centrifugal pump: experimental and numerical investigations,” in Proceedings of the 3rd IAHR International Meeting of the WorkGroup on Cavitation and Dynamic Problems in Hydraulic Machinery and Systems, Brno, Czech Republic, October 2009. 21. J. F. Gülich, Centrifugal Pumps, Springer, New York, NY, USA, 2nd edition, 2004. 22. V. Hasmatuchi, “Hydrodynamics of a pump-turbine at off-design operating conditions: numerical simulations,” in Proceedings of ASME-JSME-KSME Joint Fluids Engineering Conference (AJK2011-FED '11), Hamamatsu, Japan, July 2011. 23. T. Staubli, F. Senn, and M. Sallaberger, “Instability of pump-turbines during start-up in the turbine mode,” in Hydro, Ljubljana, Slovénie, 2008. 24. Q. Liang and M. Keller, Behaviour of Pump Turbines Operating at Speed no Load Conditions in Turbine Mode, Hydro Vision, Charlotte, NC, USA, 2010. 25. Q. Liang, M. Keller, and N. Lederegr, “Rotor-stator interaction during no-load operation of pump-turbines,” in Hydro, Lyon, France, 2009, paper no. 7.8. 26. J. Vesely, L. Pulpitel, and P. Troubil, “Model research of rotating stall on pump-turbines,” in Hydro, Porto Carras, Greece, September 2006, paper no. 3.5. 27. C. Widmer, T. Staubli, and N. Ledergerber, Unstable Pump-Turbine Characteristics and Their Interaction with Hydraulic Sytems, Hydro Vision, Charlotte, NC, USA, 2010. 28. L. Wang, J. Yin, L. Jiao, D. Wu, and D. Qin, “Numerical investigation on the “s” characteristics of a reduced pump turbine model,” Science China Technological Sciences, vol. 54, no. 5, pp. 1259–1266, 2011. View at Publisher · View at Google Scholar · View at Scopus
{"url":"http://www.hindawi.com/journals/ijrm/2013/279049/","timestamp":"2014-04-18T02:19:31Z","content_type":null,"content_length":"143277","record_id":"<urn:uuid:1dea43e5-8870-4b37-99d5-1287fa3d0c6f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Strings, Fields, Topology in Oberwolfach Posted by Urs Schreiber This week’s workshop at MFO is on Strings, Fields, Topology. We started collecting notes and other material at Oberwolfach Workshop, June 2009 – Strings, Fields Topology . This includes today - Christoph Schweigert and Ingo Runkel on [[CFT]] and algebra in modular tensor categories; - Dan Freed on [[differential cohomology]] of [[string theory]] [[background fields]]; - Kevin Costello on quantum field theory in terms of [[factorization algebras]]. Posted at June 8, 2009 11:30 PM UTC I have added to [[Oberwolfach Workshop, June 2009 – Strings, Fields, Topology]] today’s talk notes (Tuesday, June 9) on - Ulrich Bunke and Thomas Schick lecturing on their models for [[differential cohomology]] and in particular differential K-theory - after that we heard part II and III of Kevin Costello’s lectures on his work on quantum field theory, in terms of [[BV-theory]] and [[factorization algebras]] This is impressive stuff. The notes on the $n$Lab can hardly convey the full picture here. You are supposed to switch to reading his book on renormalization, too: Kevin Costello, Renormalization and the Batalin-Vilkovisky formalism (web) This is important stuff. Read it. Posted by: Urs Schreiber on June 9, 2009 7:44 PM | Permalink | Reply to this Re: Strings, Fields, Topology in Oberwolfach Can someone who is there clarify how Costello’s B(M) is a set of colors? is it the big disk that is most relevant? Posted by: jim stasheff on June 9, 2009 9:36 PM | Permalink | Reply to this Re: Strings, Fields, Topology in Oberwolfach Hi Jim, you wrote: Can someone who is there clarify how Costello’s $B(M)$ is a set of colors? So the point is that $B(M)$ contains not just the abstract disk $D^n$, but maps $\phi : D^n \to M$ (probably taken to be embeddings) into $M$. So the operad does not just have the single color given by the abstract disk, but one color per map $\phi : D^n \to M$. In other words, there is (precisely )one $k$-ary operation in the operad per $(k+1)$-tuple of maps $\phi_i : D^n \to M$ with the $\phi_{1 \leq i \leq n}$ factoring through the $\phi_{k+1}$ and This $k$-ary operation parameterized by all the disks in $M$ is a close cousin of the familiar operator product in vertex operator algebras. In fact, the idea is that as we let the disks shrink to points, and make everything depend holomorphically on some complex structure on a 2-dimensional $M$, this does becomes precisely the operad whose algebras are vertex operator algebras, with its characteristic dependency of the binary operation on a complex parameter. That parameter is the remnant of the different “colors given by disk embeddings” of Costello’s factorization algebras. (Though in the talk there was some discussion of this point, it didn’t quite become clear if the intended precise statement here has been formulated already). Posted by: Urs Schreiber on June 10, 2009 12:17 AM | Permalink | Reply to this Re: Strings, Fields, Topology in Oberwolfach Urs responded (and I interleave) So the point is that B(M) contains not just the abstract disk D n, but ALL (appropriate) maps $D^n \to M$ (probably taken to be embeddings) into M. So the operad does not just have the single color given by the abstract disk, but one color per map . This k-ary operation parameterized by all the disks in M is a close cousin of the familiar operator product in vertex operator algebras. In fact, the idea is that as we let the disks shrink to points, and make everything depend holomorphically on some complex structure on a 2-dimensional M, this does becomes precisely the operad whose algebras are vertex operator algebras, with its characteristic dependency of the binary operation on a complex parameter. That parameter is the remnant of the different colors given by disk Jim: the classical little disks operad was geometric - parameterized by the center of the little disk and the radius a holomorphic analog is `obvious’ but has it been written somewhere? Posted by: jim stasheff on June 10, 2009 2:21 PM | Permalink | Reply to this Re: Strings, Fields, Topology in Oberwolfach a holomorphic analog is ‘obvious’ but has it been written somewhere? Apparently it hasn’t been written out yet. In fact, in the talk there was quite a bit of discussion of this point. The punchline seems to be that while it looks very obvious, it may require still a little care. But see also David Ben-Zvi’s comment here. Posted by: Urs Schreiber on June 10, 2009 3:02 PM | Permalink | Reply to this Re: Strings, Fields, Topology in Oberwolfach Firstly – Urs, thanks for your kind comments on my talks, and for posting notes. I think the main problem with the discussion of holomorphic factorization algebras in my talk on Monday was that I got the definition wrong (which is a little embarassing). However, I think it’s not hard to give a reasonable definition (using, for instance, parametrized holomorphic discs embedded in a Riemann surface). The link between vertex algebras and holomorphic factorization algebras is little more than an analogy write now: I think it would be hard to prove a precise theorem (the best results I know along these lines are those of Huang, who shows that some version of Segal’s CFT axioms are equivalent to vertex algebras). All of these definitions of factorization algebra could be regarded as a little tentative. For us, the main aim was to come up with a definition which encodes all the salient properties of the examples we construct using perturbation theory. However, there are many ways to modify the technical details of our definitions in such a way that they still encompass our examples. Posted by: Kevin Costello on June 10, 2009 9:39 PM | Permalink | Reply to this Re: Strings, Fields, Topology in Oberwolfach There is an equivalence of categories between vertex algebras (in the original definition) and G_a equivariant factorization algebras on A^1 (using ordinary differential operators), and it’s not too hard to write down, given the existing literature. Factorization for higher genera is trickier, and any equivalence with vertex algebras seems to require additional conditions like O_X-flatness. Posted by: Scott Carnahan on June 17, 2009 8:34 PM | Permalink | Reply to this Re: Strings, Fields, Topology in Oberwolfach For those of us who can’t be there, these real time notes are a wonderful gift. Thanks Posted by: jim stasheff on June 9, 2009 9:37 PM | Permalink | Reply to this Re: Strings, Fields, Topology in Oberwolfach For those of us who can’t be there, these real time notes are a wonderful gift. Thanks Thanks for the feedback! Posted by: Urs Schreiber on June 10, 2009 12:20 AM | Permalink | Reply to this This morning we had - Alexander Kahle on superconnections and index theory (I have uploaded reaonably readable notes on that) - Gabriel Drummond-Cole about the bigger $\infty$-operadic story behind the Baranikov-Kontsevich passage betweem BV and hypercommutative operads. This was a pretty cool talk, but unorthodox enough to prevent me from taking any coherent typed notes on it. Maybe Bruce Bartlett or somebody else will be so kind to upload his or her handwritten notes. - Scott Wilson on categorical algebra, and generalized Hochschild cohomology as far as the speaker got, this has large overlap with the material recalled at a past blog entry Higher Hochschild Cohomology and Differential Forms on Mapping Spaces This afternoon is of course the traditional hike (and a “Hopkins event” soccer match, as well as the “Hopkins event” “explain the Hopkins Kervaire invariant 1 proof” ). So no more notes today. In particular since I need to prepare for my unexpected talk tomorrow… Posted by: Urs Schreiber on June 10, 2009 11:34 AM | Permalink | Reply to this Re: Wednesday Here are some notes I wrote to remind myself what to say tomorrow. Needs trimming and polishing, but it’s a start: Background fields in twisted differential nonabelian cohomology Posted by: Urs Schreiber on June 11, 2009 12:07 AM | Permalink | Reply to this There are now notes uploaded for André Henriques’ talk todday on his study of a 3-category of conformal nets with Chris Douglas and Michael Hill. I just quote Mike Hopkins’ comment after the talk This stuff is terrific. If you are interested you should have a look at the notes in preparation that André provides on his webpage: Chris Douglas, Andr´ Henriques, Michael Hill, Geometric String structures – notes on $\mathbb{Z}_2$-graded conformal nets (ps) Posted by: Urs Schreiber on June 11, 2009 10:11 PM | Permalink | Reply to this Re: Thursday Though Andre and I did work with Mike Hill on string structures, the above project on conformal nets is rather work with Arthur Bartels. Thanks for linking to the workshop notes and draft. Posted by: Chris on September 15, 2009 6:58 AM | Permalink | Reply to this Bruce Bartlett has now uploaded lots of pdfs with scans of his handwritten notes on the talks, and has typed a list of abstracts into the the entry. Based on that I have in turn started equipping these abstracts with cross-links. Many of them to currently non-existing entries, indicated by the gray shading. I think this is a good opportunity and motivation to start creating the corresponding entries. For many of them the conference notes themselves provide a first bit of content. For instance I am going to move large parts of the notes for the lecture by Thomas Schick on differential cohomology to the corresponding entry differential cohomology, etc. There are lots of other gray-ish links now which I was and am planning to create an entry for, but likely won’t find the time soon. Have a look at the grayish links and see if any of them inspire you to click on the green question mark and start providing a bit of content for these keywords! Ideally some of the speakers will feel sufficiently appalled by the insufficiency of the material at the links provided with their talk notes to give it a go themselves. Just imagine this ideal world where every online abstract and set of notes on a talk comes equipped with its linked list of keywords to $n$Lab entries explaining this stuff. There is nothing to stop us from going to that world… Posted by: Urs Schreiber on June 13, 2009 4:37 PM | Permalink | Reply to this Re: wrapup For those of us (hopefully not just me) who like to print things out, it would be nice to have [[abstracts]] as a separate `page’ Posted by: jim stasheff on June 14, 2009 3:19 PM | Permalink | Reply to this Re: wrapup How’s this? Posted by: Eric on June 14, 2009 5:39 PM | Permalink | Reply to this Re: wrapup Thanks, Eric. If you have a minute, could you add cross links between the list of abstracts and the rest of the material. And add the conference identification information to the list of abstracts, such that if there is another Oberwolfach conference this year whose abstract we $n$Labify it all makes sense to the reader. Generally, we should add a new wiki-“category” “conference” or the like. I am in a haste and busy with something else. Otherwise I’d do it myself. Posted by: Urs Schreiber on June 14, 2009 5:56 PM | Permalink | Reply to this Re: Strings, Fields, Topology in Oberwolfach I have done a big editing upgrade of the notes from Oberwolfach page on the nLab. I have a big debt to pay to all the other note-takers around the internet whose notes have been a big help to me! (A certain Herr Ben-Zvi stands out here, as does a certain Herr Schommer-Pries). The notes are now arranged in a single page, with clickable titles on top which take you down to the talk nsummary at the bottom. I don’t want to call them ‘abstracts’, because they are ( mostly :=) ) not the speaker’s actual abstract but rather my and other nLab editors personal impression of the talk, equipped with hyperlinks to other nLab pages on those topics. That may seem like a silly or pretentious distinction, but it gives the whole thing a reason to be on the nLab, as opposed to just a webpage of the conference of some kind. The info is meant to be integrated into the nLab. A highlight of the conference was of course Mike Hopkins’ talk on the history behind the Kervaire invariant, although he has spoken about this elsewhere. It was exciting to see him even speculate about the existence of those exotic framed manifolds to exceptional Lie groups like E8! Or at least to some kind of ‘complex structure’, see the last page of these notes. I think John will appreciate that :-). Posted by: Bruce Bartlett on June 15, 2009 2:02 PM | Permalink | Reply to this Re: Strings, Fields, Topology in Oberwolfach Thanks, Bruce, great. We just talked about this in private, but I want to say it here in public, too: one good thing about having this stuff on the $n$Lab is that eventually it helps us weave that web of links that’s gonna become our joint online super-brain (to be distinguished from the super-brane that John is looking into), if you allow me that bit of pathos. With useful talks Labified, we can link from them to $n$Lab entries explaining the stuff there, link from entries to talks as furether references, incorporate pieces of talks entirely into $n$Lab entries and so forth. I notice that researchers start to add references to $n$Lab entries concerning their work, when they see these are missing. Ideally eventually they’ll also feel motivation for and gain by adding material itself. That’d be the win-win situation to strive for. Posted by: Urs Schreiber on June 15, 2009 2:25 PM | Permalink | Reply to this
{"url":"http://golem.ph.utexas.edu/category/2009/06/strings_fields_topology_in_obe.html","timestamp":"2014-04-17T13:12:38Z","content_type":null,"content_length":"46328","record_id":"<urn:uuid:9f6de507-c2d9-4f1d-a4c4-a582e62b6fc1>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
In my opinion... Grant's Impromptu OoTW beats them all... shuffled, borrowed deck, no stack. Stay tooned. The Gilbreath principle. One of those beautiful mathematical things that even mathematicians scratch their heads over. Years ago Harold Martin showed me a very different version of Out Of This World. All I can remember is--the deck is stacked red/black/red/black, etc. to start. The spectator can cut the deck, face up, and, as long the deck was cut to a red and black split, the spectator can shuffle the deck one time. I vaguely remember Harold telling me it had something to do with Gilbert's Principle of Compensation. Anybody ever hear of this? I agree with Pete. U.F. Grant's version "Nu Way Out of This World" is excellent. Eugene Burger's small touches to the method makes it even better. For those who don't mind a little set-up, Paul Harris's "Galaxy" is wonderful. I sent a video of a my own "Hemisphere", an ultra clean version of a Daryl effect to Richard (via a labyrinthine route) to be written up for publication in Genii. I don't think it ever reached him. Maybe I should send it again. A really nice and "unusual" piece. Thank you, one and all... Forgot to (try and) answer your original question... You can find some excellent effects on the Max Maven video set (one per tape) using the Gilbreath Principle. An excellent way to see it in use. Also, a little booklet called "Gilbreath's Principles" by Reinhard Muller (1979), not only goes into detail about the principle and card effects to be accomplished with it, but also gives a great deal of historical references to the principle. One such reference is to an Out of This World variant by Marlo and Maurer, in what I believe to be a French journal. You can also find Marlo's first work with the principle in the Nov. 1959 issue of the Linking Ring (one year after Norman Gilbreath first published the principle in the same journal.) Pete Biro said: In my opinion... Grant's Impromptu OoTW beats them all... shuffled, borrowed deck, no stack. I do the version explained by Harry Lorayne in one his booklets, I believe is called my favorite card tricks. His version is also improptu. I was wondering if you can point out the differences between the Grant's version and Lorayne's if any. Thanks in advance. Carlos Hampton Upon further research, I may have found an easy to find source for the effect you were inquiring about. See "Color Separation" in Garcia & Schindler's book "Magic With Cards". Originally posted by Carlos Hampton: I do the version explained by Harry Lorayne in one his booklets, I believe is called my favorite card tricks. His version is also impromptu. I was wondering if you can point out the differences between the Grant's version and Lorayne's if any. Thanks in advance. The Lorayne version "Out of This Universe", if performed as outlined in his book "Close-up Card Magic" is quite convoluted (but still very good). I suspect most people who perform it may in fact be doing a scaled down variation. As listed in Close-up Card Magic, the handling sequence of events (and this is *after* the open separation ruse- which is a different effect altogether) is... 1) Deal four Bridge hands, and reassemble. 2) Allow spectator to deal four bridge hands at random. 3) Two freely selected quarters shuffled together, the other two then shuffled together. 4) The two halves shuffled together. 5) Spectator deals the cards at random into two piles. 6) These two piles are shuffled together. 7) Spectator deals three piles 8) Performer predicts number of red and black cards in center pile. 9) Remaining piles shown to be all red, and all black respectively. Quite an exhausting effect, but also very perplexing. If one is going for a "challenge" piece, this is certainly it. Perhaps it is best suited for formal performances, or as an effect "for the boys." The Grant Nu Way Out of This World is certainly more elegant, and much closer to the original OOTW. The biggest difference to the original being that the deck is freely shuffled, and the first half of the pack dealt by the magician who looks at the cards (into piles chosen by the spectator). The Grant version as viewed by Eugene Burger, while wonderful, still suffered from two of the same "problems" as in the original. In his booklet "Intimate Power", Eugene effectively solves these problems through presentation. He also emphasizes the point that with the Grant variation you do not need to use the entire deck. In other words, the deck can be shuffled by the spectator, and a portion (perhaps a 3rd) used for the effect. This makes for an even more economical effect, and perfect for table workers. With these things in mind, the two aforementioned variations of OOTW are quite contrary to one another. Each has strengths depending on what the performer is looking for. Hope this was of help. 2) Allow spectator to deal four bridge hands at random. 3) Two freely selected quarters shuffled together, the other two then shuffled together. 4) The two halves shuffled together. 7) Spectator deals three piles 8) Performer predicts number of red and black cards in center pile. 9) Remaining piles shown to be all red, and all black respectively. Since Lorayne published it, I've been doing the routine with only the above steps. It still rocks, but is considerably shorter... Originally posted by Paul Cummins: I've been doing the routine with only the above steps. It still rocks, but is considerably shorter... Trimming the fat. Much better Paul. The Lorayne version "Out of This Universe", if performed as outlined in his book "Close-up Card Magic" is quite convoluted (but still very good). I suspect most people who perform it may in fact be doing a scaled down variation. No that is not the one I was talking about. The one I used is on page 26 of "My favorite Card Tricks" and is called imprompty out of this world. Is this the same thin that the Grant version???? :D Carlos Hampton Here's the approach that I came up with based on the original. It eliminates one spectator having to deal all the cards (which cuts the time involved in half), and eliminates the midway switch. It also adds a nice display of the seperated cards at the end. Feedback would be appreciated. TWO DEGREES OF SEPERATION by Jeff Pierce CREDIT: Paul Curry for Out of This World. SETUP: Separate the deck by colors, with all the blacks except two on the top of the red cards. Insert two black cards toeards the bottom of the red stack. Bend the top black stack with a downward bend to facilitate cutting the cards latter. NOTE: This routine will work best if the 2 spectators are sitting next to each other. You will want to be standing between them at the end. PRESENTATION: Give the deck a few false shuffles as you ask Spectator 1 and 2 (pick a mother, daughter boyfriend girlfriend, or husband, wife team) ask them if they believe in intuition. say, "We're going to conduct a little test of your intuitive powers. Spread through the deck with the cards facing you and remove 2 black and 2 red cards. (Don't let anyone see the faces of the rest of the deck and make sure there is a seperation between each pair of cards.) Explain that these are their leader cards as you table them face up in red black order in front of each spectator. Tell the 2 spectators that they will work together to test their intuition. Cut the deck where the two colored stacks meet and hand the black stack to spectator 1 on your left side, and the remaining red card stack to the spectator on your right. Tell the spectators that they are to deal the cards face down onto either color, but to deal them onto the color that they think their partner would choose. Have them deal their cards face down on either the red or black leader cards until they exhaust their stacks. HERES WHERE YOU SHOULD BE: You are standing between the two spectators, each have a red and a black card face up in front of them with a number of face down cards on each stack. THE FINAL DISPLAY OF SEPERATION: Standing between the two spectators, reach down and pick up both the face down stacks on top of the face up red leader cards, leaving the leader cards on the table. Place the right hand stack ontop of the left and drop the stack face down on the table. Repeat with the remaining stacks on the black leader cards. Drop this on top of the face down stack on the table. NOTE: You will notice that you picked up the 2 supposed red stacks together, then the 2 black stacks and placed these together on table. What we need here is some time misdirection. Reach down with both hands and slide the 4 leader cards together in a row in the center between the 2 spectators. Make sure the there is about 3 inches between each of the leader cards. They should still be in red black red black order. Pick up the tabled deck and table spread the cards from left to right below the 4 leader cards. The deck will display in red black red black stack order. This is quit a visual display of separated colors. 2001 Jeff Pierce Magic visit my website at: Originally posted by Carlos Hampton: No that is not the one I was talking about. The one I used is on page 26 of "My favorite Card Tricks" and is called impromptu out of this world. Is this the same thin that the Grant version? Ah, thank you Carlos. Two different versions of OOTW by Lorayne. This clears up some confusion. I have not read the aforementioned booklet, so I don't know if they are the same. It is certainly I like Harris's Galaxy becuase there are no exchanging of packets and of course the spectator can shuffle. However,I don't think I have ever used a full deck version of OOTW. I much prefer the small packet versions like Elmsley's "Underworld." Also, Peter Duffie has an odd offshoot of the plot called "world's apart" and I use that pretty often. It plays well. When performing Out of this Universe, after you have the four hands dealt and you have shuffled the three packets back together, it strengthens the effect by pointing out that you have shuffled the deck "4 times" and then another three times making a total of seven times you have shuffled the deck. After you have the deck divided and you shuffle them together again you then point out that you have now shuffled another 2 times plus one more time making a total of 10 times you have now shuffled the deck. Then once you have them deal the three packets, you state that "Do you think that after shuffling a deck 10 times that you could seperate exactly ten red from ten black cards...that would be a pretty remarkable trick don;t you think?" (Of course they answer yes). After you show the 20 cards, then state that "but if anyone ever told you that after shuffling a deck exactly ten times that you could seperate each and every black card from each and every red card, you would be a very remarkable person." PSIncerely Yours, Paul Alberstat I also love Lorayne's Impromptu Out of this World, though I made some small modifications to it to suit me better. Basically, the spectator can supply and fully shuffle the deck. the magicians takes it and puts down a leader card for red and black piles and pulls cards at random from throughout the deck and places them in the pile the spectator says. After 10-15 cards, the new leader cards are laid down, the remaining deck is shuffled and the spectator is able to take cards off the top of the deck and place them into whichever pile they choose. again, 10-15 cards later they can stop where it is revealed they separated the cards properly. When I perform this, I tend to make a small routine with a color separation theme. I'll usually begin with the Lennart Green angle separation as we both look at the cards to memorize their order and make sure they're all there. A shuffle and a cut later and they're separated into red and blacks. If the spectator supplied the deck, it's a good time to ask them what kind of rigged deck is this, and pretend that you can only do tricks with normal decks and not these magic store ones. Then I'll do the OOTW above, which keeps them involved as a part of the effect. This has made them an active participant in two effects. Rather than watching magic, they're making it. Finally, I love to end with Paul Harris' Perfectionist routine. I let them shuffle the separated deck a bunch. Often, when they hand the deck back, I'll ribbon spread it face up and make them shuffle it more since it wasn't random enough. Perfectionist, for those who can't recall, is a great routine where you openly separate the reds from the blacks (after you've just had them shuffle it many times ;) and split each half into 2 more piles giving you 2 piles of red cards and 2 of black. then you shuffle 1 pile of reds into 1 pile of blacks, face up, then the other piles. upon spreading the halves, they are all back to the same color. you do it again, and they separate once more. finally you decide to trick the deck and shuffle the red halves together and the black halves together and ribbon spread them both face up showing they are clearly all red and all black. suddenly they are fully shuffled, red mixed with black, in the blink of an eye. I wanted to mention another OOTW plot called "Prediction Out of this World," You'll find it in the book "The Commercial Magic of J.C. Wagner" by Mike Maxwell. There are a couple of things I like about this routine. (1) it uses a deck shuffled by the spectator (2) it uses only about half the deck, thus making it a somewhat shorter trick than using the whole effect. The spectator sucessfully separates reds and black, but there are a few errors. The number of errors (mis-matches) was predicted before you began! Check it out as an alternative approach. mike :) Just wondering where the U.F. Grant version can be found. I found my UF Grant manuscript on eBay, but it took quite a while before someone placed it for auction. I may be hard to find. ooops. Typo. "It" may be hard to find. Joey wrote: Just wondering where the U.F. Grant version can be found. I got mine just recently from Hank Lee. It only costs a couple bucks (just like the singleton manuscript for the Curry trick). After seeing Burger's lecture last winter, I wanted to do the right thing. I hope I have: the Hank Lee pages do not mention U.F. Grant's name at all, just the title, "Nu-Way Out of This World." I actually prefer a couple touches in that one to some of the improvements that Eugene taught. I do prefer Eugene's final display, though. The routine referred to by the original poster is Worlds Apart by Peter Duffie. Ben Harris applied this principal to "Invertz" in Off the Wall. In Harris' routine, along with the color separation you locate the cards of two spectators in a novel way. My experience is that you must stress throughout how mixed up the cards are or the spectator may not realize what happens at the end. Just saw this very old thread and I know that nobody will read this, but I don't really care; I'll feel better when I post this. I got a bit of a laugh when people described my Out Of This Universe, and talked about "trimming the fat." They referred to when I start the routine, ask if the spectator knows how to play bridge (which is entirely immaterial), and I demonstrate by dealing two rounds (that's EIGHT cards) then say then you can deal haphazardly, etc. Then I scoop up those few tabled cards, the impression being that it simply doesn't matter. And we're talking about all of about 7 seconds, if that, when I do it. Trimming the fat? Please! You're missing the damn point. Aah; feel better now. HL. PS: One poster said that omitting that 7-second, or less, demo/explanation makes the routine "considerably shorter." Sure, by a few seconds, BUT NOT AS GOOD!! Harry, Close-up Card Magic was one of the best "investments" I ever made when I bought a first edition back in 1962 when $10 was a lot of money to a 17-year-old. A few years further on I built my close-up card repetoire on a few things Jay Ose taught me and your book. I didn't need anything else as what Jay taught me and what I got out of your book was more than enough. I worked for the Trader Vic Organization at their Century City restaurant for almost three years and your material was a great help. Now, several decades later, I still rely on several of your routines learned in those early days. Many thanks. Wow this thread started 7 years ago......record? Best John Harry: I've done Out Of This Universe as described in the book and have had great reactions. The first time I did it, I chose not to do any follow up routines because the reaction was so strong and the amazement level was so high. I didn't want the next item to end up being a let down. I was surprised it took seven years for the anonymous poster to get an answer to their question. Someone did mention Worlds Apart, but didn't explicitly state that this was the effect. Hopefully "Anonymous" is still lurking out their someone and has their answer. If it's not Worlds Apart, I don't know what routine it would be. Is anybody aware of any other routines along the lines of Worlds Apart or Invertz? In my small library, they are the only two I am aware of. Please, who can tell me where and when John Kennedy's version of OOTW was published ? It was in Genii, back in the 80's, from what I recall. It was called "Red and Black". I have an exact reference somewhere...I'll see if I can find it for you. Books and Magazines for sale -- more than 200 items (Last updated January 10th, 2014. Link goes to public Google Doc.) Philippe Billot wrote:Please, who can tell me where and when John Kennedy's version of OOTW was published ? Genii March 1989 (Vol 52, No 9) page 560 Share your knowledge on the MagicPedia wiki. Thanks for Jim and Joe. Can anyone mention where Peter Duffie's Worlds Apart was published? Card Zones, page 84 is where Worlds Apart can be found. Joey Corpus wrote:Just wondering where the U.F. Grant version can be found. Here, for $3 only (eBook): I find that Derren Brown's version of OOTW is simply stunning. Personally, I think that it's one of the best out there since the spectator shuffles the cards and makes two piles (not four) of red and black. No stopping after 26 cards. Yet the piles found to be all red and all black. The balance of the effects are largely psychological and require "people handling" -From a shuffled deck (by a spectator) -Very straightforward without, without the "confusion" of Paul Curry's or U.F. Grant's version (I mean the switch of colors in each pile) -100% Impromptu -No gaffs nor gimmicks -No stack or setup whatsoever The trick can be found in Derren Brown's DVD "The Devil's Picturebook - The Professional Card Repertoire of Derren Brown" along with eleven other tricks, some with psychological twists. You can only buy it from his website www.derrenbrown.co.uk. It's located in the password protected products page. It's cost is 40 GBP which is 65 USD. Check out this website for a full description of the contents: -Sleights and technique required to know in order to successfully perform the tricks. -Moves and sleight taught within the DVD. -The name of the trick and a brief of them synopsis. Website: http://forums.ellusionist.com/showthread.php?t=17003 I hear there is a book coming out that is entirely dedicated to Out of This World....I wonder it will actually be released?? Follow us: @The_Magic_Apple You tell us Brent! soon I hope...soon (this year for sure) Follow us: @The_Magic_Apple
{"url":"http://forums.geniimagazine.com/viewtopic.php?t=5511","timestamp":"2014-04-17T04:37:12Z","content_type":null,"content_length":"110123","record_id":"<urn:uuid:3f175cbb-8cf6-4922-9f23-c3fc9ce6e9e7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Allen, TX Algebra 2 Tutor Find an Allen, TX Algebra 2 Tutor ...All my students have increased their grades significantly. My tutoring topics include pre-algebra, algebra I and II, geometry and SAT Math. Besides one-on-one tutor, I also offer small group lesson, normally with the size of 4-5 kids. 8 Subjects: including algebra 2, geometry, algebra 1, SAT math ...As a software engineer at Novell, I mastered MS-DOS, DR-DOS, and other flavors of the Disk Operating System and used them daily. This command-line interface OS ruled before Windows added a stable graphical interface in Windows 3.11 and later. I know all DOS internal and external commands, environment variables, and other aspects of this pioneering operating system. 48 Subjects: including algebra 2, chemistry, physics, calculus ...I was also first violins (rank 36) for TMEA All Region High School Philharmonic Orchestra back in 2007, and qualified to audition for All Area/State Auditions. I also have played Mendoelssohn's Violin Concerto E Minor 64 (2nd movement) and Mozart's Violin Concerto No.3 in G and ranked superior at solo & ensemble. I have also played these pieces in honor's orchestra spring 26 Subjects: including algebra 2, calculus, physics, algebra 1 ...I tutor all Mathematics and Statistics courses as well as for professional exams such as GRE, GMAT and Test Prep ACT, SAT etc. I have taught Mathematics, Statistics and Computer courses at Texas A&M, Eastfield College, UNT & SMU. Currently I teach Math classes at a private college in Bedford as an adjunct professor. 23 Subjects: including algebra 2, calculus, geometry, statistics ...I adore the moments when students say, "Ah, it totally makes sense." I have had the pleasure helping hundreds of students and see their grades improve. I am constantly looking for new students and sincerely love to help others learn. So e-mail me today to schedule a tutoring appointment. 19 Subjects: including algebra 2, chemistry, physics, statistics Related Allen, TX Tutors Allen, TX Accounting Tutors Allen, TX ACT Tutors Allen, TX Algebra Tutors Allen, TX Algebra 2 Tutors Allen, TX Calculus Tutors Allen, TX Geometry Tutors Allen, TX Math Tutors Allen, TX Prealgebra Tutors Allen, TX Precalculus Tutors Allen, TX SAT Tutors Allen, TX SAT Math Tutors Allen, TX Science Tutors Allen, TX Statistics Tutors Allen, TX Trigonometry Tutors
{"url":"http://www.purplemath.com/Allen_TX_Algebra_2_tutors.php","timestamp":"2014-04-17T13:09:06Z","content_type":null,"content_length":"23977","record_id":"<urn:uuid:299d3e15-71b6-4f9f-b0db-5426b109d01e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic curve approximation up vote 1 down vote favorite I am wondering wether it exists a theorem that any continuous path on the plane one can approximate with algebraic curve $P(x,y)=0$ ($P$- is a polynom)? See en.wikipedia.org/wiki/Polynomial_interpolation or en.wikipedia.org/wiki/Bernstein_polynomial. – Lalit Jain Aug 13 '12 at 13:09 add comment 3 Answers active oldest votes Given any compact set $K$ in the plane (in particular the image of a compact interval under a continuous function) and $\epsilon > 0$, there is a finite set $\{(x_j, y_j)\}_{j=1}^n \ up vote 1 subseteq K$ such that $K$ is contained in the union of the disks of radius $\epsilon$ centred at $(x_j, y_j)$. Then $K$ is within distance $\epsilon$ in the Hausdorff metric of the real down vote algebraic curve $P(x,y) = 0$, where $P(x,y) = \prod_{j=1}^n ((x-x_j)^2 + (y-y_j)^2 - \epsilon^2)$. thank you very much for your answer. This solution is very nice! But what is bed that solution contains a lot of bifurcation points. I am wonderig if the initial path is more or less - "normal" can we aproximate it with "normal" algebraic curve? Let say that "normal" is something that our intuition suggests. I don't know, let say "normal" curve is a curve that is diffeomorphic to interval. Or may be it should be some another definition that kills these biffurcation points. – David Aug 14 '12 at 21:05 If by bifurcations you mean critical points (intersections of the circles), those are easy to get rid of by modifying the definition. With a bit more work, if $K$ is diffeomorphic to an interval we should be able to get a curve diffeomorphic to a circle. – Robert Israel Aug 15 '12 at 17:50 add comment There might be problems with the Peano curve which is a continuous map $[0,1]\to\mathbb{R}^2$ whose image is the square $[0,1]\times [0,1]$. up vote 0 down 2 That square can be approximated (in the sense of the Hausdorff metric) by the curve $$P(x,y) = \prod_{j=0}^N \prod_{k=0}^N \left((x-j/N)^2 + (y-k/N)^2 - 1/N^2\right) = 0$$ – Robert Israel Aug 13 '12 at 16:00 add comment It's hard to tell what this question means exactly. If a "continuous path" is a continuous image of the unit interval, then any continuous path can be uniformly approximated by polynomial paths; this is the Weierstrass approximation theorem. Any of those polynomial paths can be extended to a polynomial image of the entire real line, which (because you are in the plane) is a set-theoretic complete intersection (i.e. the locus of $P(x,y)=0$ for some $P$). up vote 0 But --- the continuous path consisting of a line segment in the plane is of course not the vanishing set of any polynomial, nor does there seem to be any reasonable sense in which it could down vote be approximated by such. On the other hand, if a "continuous path" is a continuous image of the real line, then it can be uniformly approximated on compact sets by polynomial paths, each of which is the locus of vanishing of some $P$. Whether this satisfies your needs depends on what you mean by approximating the path. Sorry but I didn't understand. Using Weierstrass theorem you can approximate your continuous path with polinoms $x(t)$ and $y(t)$. How from this polynoms one can сonstruct an algebraic curve $P(x,y)$=0? – David Aug 13 '12 at 13:13 If $X(t)$ and $Y(t)$ are polynomials, then the resultant of $x-X(t)$ and $y - Y(t)$ is a polynomial in $x$ and $y$ that is $0$ iff there is $t$ (not necessarily real) such that $x = X(t) $ and $y = Y(t)$. – Robert Israel Aug 13 '12 at 15:48 A line segment in the plane can be reasonably approximated by ellipses. – Robert Israel Aug 13 '12 at 15:53 Robert Israel: Good point regarding the ellipses. – Steven Landsburg Aug 13 '12 at 17:35 add comment Not the answer you're looking for? Browse other questions tagged approximation-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/104609/algebraic-curve-approximation?sort=newest","timestamp":"2014-04-21T13:16:42Z","content_type":null,"content_length":"65157","record_id":"<urn:uuid:a9f4b710-d9e4-45d7-8b1a-9eb4f5a67fa6>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding volume by using triple integral March 11th 2011, 06:51 PM Finding volume by using triple integral I need to find the volume enclosed by $x^2+y^2+z^2=a^2$ and $x^2+y^2=ax$ where $a>0.$ How do I find the bounds? Do I apply spherical coordinates as written? March 11th 2011, 06:55 PM Prove It To start with, I expect that you have written the second bound wrongly, since that is the equation of a plane figure, not a solid... March 11th 2011, 07:00 PM I edited the first equation, but what's wrong with the second one? March 11th 2011, 11:30 PM Prove It It's a 2 Dimensional object, i.e. a plane figure. How are we supposed to know where along the $\displaystyle z$ axis it's supposed to lie? March 12th 2011, 01:57 AM Assuming the cylinder extends indefinitely up and down the z dimension, we have Viviani's Curve. Doing just the top half, we have z going from 0 (where it 'starts', on the (x,y) plane) up to $\ sqrt{a^2 - r^2}$ (where it hits the hemisphere). And we have r going from 0 at the centre (z axis), up to a cos theta i.e. everywhere inside the cylinder. And theta is turning through the x-positive half of the space, i.e. from minus pi/2 to pi/2. So... $\displaystyle{V = \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\ \int_{0}^{a \cos \theta} \int_{0}^{\sqrt{a^2 - r^2}} r\ dz\ dr\ d\theta}$ Just in case a picture helps to follow through from the inside out, we can start bottom left here, integrating r with respect to z... ... where (key in spoiler) ... Which leaves a couple of blanks to fill. Hope this helps. Don't integrate - balloontegrate! Balloon Calculus; standard integrals, derivatives and methods Balloon Calculus Drawing with LaTeX and Asymptote! March 12th 2011, 03:21 AM No, it's not. $x^2+ y^2= ax$ where z can be anything is a cylinder. Specifically, it is the cylinder with central axis (a/2, 0, z) and radius a/2. March 12th 2011, 03:24 AM Prove It March 12th 2011, 09:41 AM No, it was stated that this problem was in three dimensions. The fact that there was no restriction put on z meant that it could be anything.
{"url":"http://mathhelpforum.com/calculus/174300-finding-volume-using-triple-integral-print.html","timestamp":"2014-04-18T08:19:06Z","content_type":null,"content_length":"11453","record_id":"<urn:uuid:2bddd174-c9b1-4fc6-b85c-e796be429ce8>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
matrix addition Displaying 1 - 50 of about 969 Related Tutorials. matrix addition hai matrix addition hai Matrix Addition using RMI in Java Source Code for Matrix Addition using RMI in Java matrix adddition how to write a program matrix addition in java . Addition Of Matrix"); System.out.println("2. Subtraction Of Matrix... of switch statement perform operations addition, subtraction and multiplication of matrix. Hello Friend, Try this: import java.util.*; class Java Matrix Addition Example In this tutorial, you will learn how to find the sum of two matrices. You may have solved many Matrix Operations in Mathematics... 2 3 4 5 6 2 1 3 Matrix B: 5 2 3 5 5 1 2 2 5 Addition Of Matrices Matrix addition in java In this section we will learn about addition of two...-dimensional array. Here we are going to develop a Java code for matrices addition, while adding two matrices the number of row and column of first matrix is equal The Ansoff matrix or Ansoff Growth matrix is an effective marketing planning... growth strategy. According to the Ansoff matrix, a business growth depends upon... product in a new or existing market. Based on this, factor Ansoff matrix Matrix Multiplication in Java In this Java tutorial we will demonstrate you 'Matrix Multiplication in Java' with the help of a simple example from which you can easily learn how to write a matrix multiplication program in Java Addition of both matrix : 9 9 12 11 14 16... Sum of two Matrix In this section, we are going to calculate the sum of two matrix matrix substraction hai matrix substraction hai ]=a[i][j]+b[i][j]; printf("\nThe Addition of two matrix is:\n"); for(i=0;i&lt...C Program for Addtion of 2*2 Diagnol Matrix Hello Sir, I want C Program to Do Addtion of Diagnol Matrix in C language. Plz Give Me Hi column matrix columan wise total matrix column matrix columan wise total matrix Matrix multiplication program to read the elements of the given two matrices of order n*n and to perform the matrix multiplication Addition of JComboBoxItem how to add JComboBoxItem cb1=1,cb2=1/9 into a JTextField. thanks ; printf("\nThe Addition of two matrix is:\n"); &nbsp... The Second matrix is: 5 6 7 8 The Addition of two matrix is: 6...C Addition of two matrices In this section, we are going to calculate the sum integers into the matrix and print the transpose of it. for this program u r given answer but if i entered 2 by 3 matrix it will not give answer ple check it once Matrix Class A class to manage matrices and add them. Create in the driver class two objects of it and use the add method Matrix Class A class to manage matrices and add them. Create in the driver class two objects of it and use the add method transpose of matrix write a program in java to declare a square matrices 'A' or order n which is less than 20.allow in user to input only positive integers into the matrix and print the transpose matrix determination in java hai matrix polynamial addtion hai Java Matrix Subtraction Example In this tutorial, you will learn how to find the subtraction of two matrices. Not only, addition and multiplication, you can... and then accept the matrix elements as array elements. We have declared two Addition of two numbers addition of two numbers Magic Matrix in GUI I want program in java GUI contain magic matrix for numbers Parallel multiplication matrix in java hello dear I need parallel multiply matrix in java algorithm to account speed up and efficiency great wishes matrix calculator hi..... can you help me in writing source code of matrix calculator in java... i know you are the best you can do it!!! show yourself parallel dense matrix multiplication hi friends, i am a final year... using dense parallel matrix multiplication. I request you to kindly provide me a code for Parallel Matrix multiplication on distributed systems using Java Matrix give me the Matrix program Hi friend, Code for Matrix Example in Java class MatrixExample{ public static void main(String[] args) { int array[][]= {{1,3,5},{2,4,6 Javascript matrix error This function blows up (without an error) whenever it is called function valid(n){ //n is a number 0-9 if(board...; } } where board is a global 3x3 Array matrix var board = new Array(3 MySQL Addition MySQL Addition returns you the sum value of a specific column for a group in a table. The Aggregate function sum() is used to perform the addition in SQL The GE matrix was developed by Mckinsey in 1970s for General Electric in order to overcome the various disadvantages associated with the BCG matrix. Since then, GE Matrix has been successfully deployed as an alternative in marketing MySQL Addition MySQL Addition returns you the sum value of the total counts in a table... The Tutorial illustrate an example from 'MySQL Addition'. To understand the example Preprocessor directive case for multiplication and addition Preprocessor directive case for multiplication and addition include<stdio.h> define m 2+10 include<conio.h> int main() { int i; clrscr code for multiplication of matrix in java using methods code for multiplication of matrix in java using methods The BCG matrix also called the Growth share matrix, B-Box, B.C.G. analysis, Boston Box, Boston Matrix, Boston Consulting Group analysis, portfolio diagram... business units and product lines. In broader terms of management, the BCG matrix Java create Identity matrix In this tutorial, you will learn how to create an identity or unit matrix. An identity matrix is a square matrix, of size n x n... elements are zeros. Here, we are going to create the unit matrix of arbitrary how to create unit matrix in java of arbritary dimensions i want to create the unit matrix of arbritary dimensions say (n*m).i m a new beginner to java.someone having the program for that? help would be appreciable how to change file from .txt to .mat(matrix) i have a big file.txt and i want to change this file to file.mat(matrix) ...this is in windows not on any os ..thx if u answering quickly please Java Transpose of a matrix In this section, you will learn how to determine the transpose of a matrix.The transpose of a matrix is formed by interchanging the rows and columns of a matrix such that row i of matrix becomes column i Find sum of all the elements of matrix using Java A matrix is a rectangular array of numbers. The numbers in the matrix are called its entries or its elements. A matrix with m rows and n columns is called m-by-n matrix or m × n matrix Addition to my previous post Addition to my previous question about the address book program. I need to have separate classes: CmdLineAddressBook.java AddressBook.java Contact.java Address.java Multiplication of Two Matrix &nbsp... that teaches you the method to multiply two matrix. We are going to make a simple program that will multiply two matrix. Now to make this program run, you need Find Sum of Diagonals of Matrix You all know that a matrix is a rectangular array of numbers and these numbers in the matrix are called its entries or its...) of the matrix from the given two dimensional array. See the following matrix
{"url":"http://www.roseindia.net/software-tutorials/detail/28691","timestamp":"2014-04-20T16:39:47Z","content_type":null,"content_length":"47308","record_id":"<urn:uuid:e49bc8af-4874-4259-b540-cec4e3b99b77>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Algebra Proofs regarding subspaces and spans February 8th 2013, 11:04 AM #1 Dec 2012 Linear Algebra Proofs regarding subspaces and spans 1. Prove or give a counterexample to the following claim: Claim: Let V be a vector space over the field F and suppose that W1, W2 and W3 are subspaces of V such that W1 + W3 = W2 + W3. Then W1 = W2. 2. Consider the following subspaces of the vector space R^3 over the field R of real numbers: subspace U1, which is the plane x + y + z = 0 and subspace U2, which is the yz-plane. a) Can R^3 be written as a sum of U1 and U2? Justify your answer. b) Can R^3 be written as a direct sum of U1 and U2? Justify your answer. Here x, y and z denote the usual Cartesian coordinates. 3. Let V be a vector space over the field F and suppose (v1, v2, ... , vn) is a linearly independent set of vectors in V . Now suppose there exists w in V such that (v1 + w, v2 + w, ... , vn + w) is a linearly dependent set of vectors in V . Prove that w in span(v1, v2, ... , vn). Thank you. Last edited by zachoon; February 8th 2013 at 11:17 AM. Re: Linear Algebra Proofs regarding subspaces and spans I would really like to see how you would at least attempt these. For example, to show that W1= W2, you must show "if vector v is in W1 then v is in W2" and "if vector v is in W2 then it is in W1". If vector v is in W1 then, for any vector, u, in W3, v+ u is in W1+ W3. Because W1+ W3= W2+ W3, v+u is in W3. Therefore, ... Re: Linear Algebra Proofs regarding subspaces and spans Take $W_1 = Span(\begin{bmatrix}1\\0\\0 \end{bmatrix},\begin{bmatrix}0\\1\\0\end{bmatrix}) , W_3 = Span(\begin{bmatrix}1\\0\\0 \end{bmatrix}), W_2 = Span(\begin{bmatrix}0\\1\\0 \end{bmatrix})$ Now it is true $W_1 + W_3 = W_2 + W_3$ but is $W_1 = W_2$ ? For 3) Consider what it means to be linearly dependent, the homogenrous eqn $Ax = 0$ has atleast one non trivial (atleast one coordinate or entry is non zero) so you have $c_1(v_1+w) + ... + c_n(v_n+w) = 0$ and $c_{i}(v_i+w) + ... + c_{k}(v_m+w) = 0$ where $i,..,k \in A$ (index set which contains the indicies of the non zero coordinates for the homogernous eqn). or basically $c_{i}v_i +... + c_{k}v_k + c_{i}w+...+c_{k}w = 0$ or $c_{i}v_i +... + c_{k}v_k = c_{i}w+...+c_{k}w = (c_{i}+..+c_{k})w$Divide both sides by $(c_{i}+..+c_{k})$ Why Can we Say with a gurantee that $(c_{i}+..+c_{k})$wont equal zero? Think about that one. You have to justify it yourself, write out here and i can guide you. Last edited by jakncoke; February 8th 2013 at 01:01 PM. February 8th 2013, 11:46 AM #2 MHF Contributor Apr 2005 February 8th 2013, 12:23 PM #3
{"url":"http://mathhelpforum.com/advanced-algebra/212784-linear-algebra-proofs-regarding-subspaces-spans.html","timestamp":"2014-04-18T09:18:02Z","content_type":null,"content_length":"40825","record_id":"<urn:uuid:1096a0a9-71fd-49f8-9130-9c5ee9e62eb6>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Find x. image of a triangle whose upper left corner is labeled 60 degrees and upper right corner has a box; the slanted side from upper left to lower right is labeled 16 and the vertical side is labeled x A. 8 B. 8 times the square root of 3 C. 16 times the square root of 3 D. 1 point 8 times the square root of 3 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50fe1426e4b00c5a3be5c476","timestamp":"2014-04-18T10:43:14Z","content_type":null,"content_length":"38339","record_id":"<urn:uuid:34fc88ec-6951-4854-9cc3-fd704f0f2b40>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
R-SDV optimization SVD Scalapack driver When computing the SVD of an m-by-n matrix A, one possible approach is to first determine the QR factorization of A, and then to apply the usual SVD decomposition to the resulting n-by-n upper-triangular part of the R matrix. This is the algorithm that Golub & Van Loan describe as the R-SVD. When m >> n, it can much more efficient than the usual approach of working directly with A. I have checked that this optimization is implemented in the Lapack routines (For example DGESVD calls DGEQRF when m is much larger than n). The Scalapack Users's guide also acknowledges the optimization and indicates that it is present in the driver routine PxGESVD which supposedly calls the routine PxGEQRF when applicable. However I was unable to confirm that in the source code (pdgesvd.f) of Scalapack 1.8. In particular, the routine PDGEQRF is not called directly from the SVD driver. Could anyone confirm if the R-SVD is currently implemented in Scalapack or if I should do it myself ? Thanks !
{"url":"http://icl.cs.utk.edu/lapack-forum/viewtopic.php?p=7340","timestamp":"2014-04-19T17:03:07Z","content_type":null,"content_length":"13924","record_id":"<urn:uuid:1c096ca0-dc7a-434e-a0eb-15b4fb14c68d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Calculate the Momentum of Inertia for Different Shapes and Solids How to Calculate the Momentum of Inertia for Different Shapes and Solids In physics, when you calculate an object’s moment of inertia, you need to consider not only the mass of the object but also how the mass is distributed. For example, if two disks have the same mass but one has all the mass around the rim and the other is solid, then the disks would have different moments of inertia. Calculating moments of inertia is fairly simple if you only have to examine the orbital motion of small point-like objects, where all the mass is concentrated at one particular point at a given radius r. For instance, for a golf ball you’re whirling around on a string, the moment of inertia depends on the radius of the circle the ball is spinning in: I = mr^2 Here, r is the radius of the circle, from the center of rotation to the point at which all the mass of the golf ball is concentrated. Crunching the numbers can get a little sticky when you enter the non–golf ball world, however, because you may not be sure of which radius to use. What if you’re spinning a rod around? All the mass of the rod isn’t concentrated at a single radius. When you have an extended object, such as a rod, each bit of mass is at a different radius. You don’t have an easy way to deal with this, so you have to sum up the contribution of each particle of mass at each different radius like this: You can use this concept of adding up the moments of inertia of all the elements to get the total in order to work out the moment of inertia of any distribution of mass. Here’s an example using two point masses, which is a bit more complex than a single point mass. Say you have two golf balls, and you want to know what their combined moment of inertia is. If you have a golf ball at radius r[1] and another at r[2], the total moment of inertia is So how do you find the moment of inertia of, say, a disk rotating around an axis stuck through its center? You have to break the disk up into tiny balls and add them all up. You complete this using the calculus process of integration. The shapes corresponding to the moments of inertia in the table. Trusty physicists have already completed this task for many standard shapes; The following table provides a list of objects you’re likely to encounter, and their moments of inertia. The figure depicts the shapes that these moments of inertia correspond to.
{"url":"http://www.dummies.com/how-to/content/how-to-calculate-the-momentum-of-inertia-for-diffe.html","timestamp":"2014-04-20T11:59:48Z","content_type":null,"content_length":"55334","record_id":"<urn:uuid:2fd5a54e-ae49-4eaf-a512-f72ba7cc976e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
linked lists of a polynomial 02-11-2002 #1 linked lists of a polynomial Hi every one, I'm very new in c++ programming, and I'm encounting some difficulties with my program. It's supposed to create a polynomial with a linked list. It keeps giving me errors and I don't know what to do anymore. The polynomial is created with a single nested statement in the constructor. But it doesn't seem to work. Please help me. class Term Term (double coefficient, int exponent=0, Term *next=NULL); double evaluate (double x); void print(); double coefficient; int exponent; Term *next; Term::Term(double coefficient, int exponent = 0, Term *next = NULL) Term *pt = new Term(1,10, new Term(-3,4,new Term (17))); double Term::evaluate (double x); if (Term *pt != NULL) while(pt->next != NULL) int expo = pt->exponent; double accum+ = pt->coefficient * pow(x, expo); pt->next = NULL; return accum; int main() Term test; double result = test.evaluate (2); cout << "and the answer is: " << result << endl; return 0; Term test; double result = test.evaluate (2);//this is where your problem is it should be the following Term test (2); double result=test.evaluate; I haven't looked over the rest of your code but check this and come back later if it still doesn't work. Just a a suggestion - using polynomials in an algorithm will make things pretty complex if you do any circular logic or anything - I would suggest breaking any trinomials into two smaller binomials that co-complement. I don't think my problem is in the declaration of my class Term instance. Look further down, and you'll see that the function evaluate does receive a double (x) and return one. Here are a couple of the errors I get error C2572: 'Term::Term' : redefinition of default parameter : parameter 3 question2.cpp(8) : see declaration of 'Term::Term' 'Term::Term' : redefinition of default parameter : parameter 2 error C2447: missing function header (old-style formal list?) O, sorry, I thought that the double you inputted into evaluate was was u were trying to use to set as the coefficient. I am kinda busy right now but I'll try to help u later. 02-11-2002 #2 ¡Amo fútbol! Join Date Dec 2001 02-11-2002 #3 Super Moderator Join Date Sep 2001 02-11-2002 #4 02-11-2002 #5 ¡Amo fútbol! Join Date Dec 2001
{"url":"http://cboard.cprogramming.com/cplusplus-programming/10738-linked-lists-polynomial.html","timestamp":"2014-04-23T20:18:13Z","content_type":null,"content_length":"51757","record_id":"<urn:uuid:1bd5c6e7-3f67-4054-8114-cc58714bb09f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f75e561e4b0ddcbb89d1b36","timestamp":"2014-04-16T16:30:24Z","content_type":null,"content_length":"34644","record_id":"<urn:uuid:f6ac2c3e-af28-4004-afe1-868c7055275d>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
A Weighted Variant of Riemann-Liouville Fractional Integrals on Abstract and Applied Analysis VolumeΒ 2012Β (2012), Article IDΒ 780132, 18 pages Research Article A Weighted Variant of Riemann-Liouville Fractional Integrals on ^1Department of Mathematics, Linyi University, Shandong, Linyi 276005, China ^2School of Mathematical Sciences, Beijing Normal University and Laboratory of Mathematics and Complex Systems, Ministry of Education, Beijing 100875, China Received 14 June 2012; Accepted 21 August 2012 Academic Editor: BashirΒ Ahmad Copyright Β© 2012 Zun Wei Fu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We introduce certain type of weighted variant of Riemann-Liouville fractional integral on and obtain its sharp bounds on the central Morrey and -central BMO spaces. Moreover, we establish a sufficient and necessary condition of the weight functions so that commutators of weighted Hardy operators (with symbols in -central BMO space) are bounded on the central Morrey spaces. These results are further used to prove sharp estimates of some inequalities due to Weyl and CesΓ ro. 1. Introduction Let . The well-known Riemann-Liouville fractional integral is defined by for all locally integrable functions on . The study of Riemann-Liouville fractional integral has a very long history and number of papers involved its generalizations, variants, and applications. For the earlier development of this kind of integrals and many important applications in fractional calculus, we refer the interested reader to the book [1]. Among numerous material dealing with applications of fractional calculus to (ordinary or partial) differential equations, we choose to refer to [2] and references As the classical -dimensional generalization of , the well-known Riesz potential (the solution of Laplace equation) with is defined by setting, for all locally integrable functions on , where . The importance of Riesz potentials lies in the fact that they are indeed smoothing operators and have been extensively used in many different areas such as potential analysis, harmonic analysis, and partial differential equations. Here we refer to the paper [3], which is devoted to the sharp constant in the Hardy-Littlewood-Sobolev inequality related to . This paper focused on another generalization, the weighted variants of Riemann-Liouville fractional integrals on . We investigate the boundedness of these weighted variants on the type of central Morrey and central Campanato spaces and also give the sharp estimates. This development begins with an equivalent definition of as More generally, we use a positive function (weight function) to replace in (1.3) and generalize the parameter from the positive axle to the Euclidean space therein. We then derive a weighted generalization of on , which is called the weighted Hardy operator (originally named weighted Hardy-Littlewood avarage) . More precise, let be a positive function on . The weighted Hardy operator is defined by setting, for all complex-valued measurable functions on and , Under certain conditions on , Carton-Lebrun and Fosset [4] proved that maps , , into itself; moreover, the operator commutes with the Hilbert transform when , and with certain CalderΓ³n-Zygmund singular integrals including the Riesz transform when . Obviously, for and , if we take , then as mentioned above, for all , A further extension of [4] was due to Xiao [5] as follows. Theorem A. Let . Then, is bounded on if and only if Moreover, Remark 1.1. Notice that the condition (1.6) implies that is integrable on since . We naturally assume is integrable on throughout this paper. Obviously, Theorem A implies the celebrated result of Hardy et al. [6, Theorem 329], namely, for all and , The constant in (1.6) also seems to be of interest as it equals to if and . In this case, is precisely reduced to the classical Hardy operator defined by which is the most fundamental integral averaging operator in analysis. Also, a celebrated operator norm estimate due to Hardy et al. [6], that is, with , can be deduced from Theorem A immediately. Recall that is defined to be the space of all such that where and the supremum is taken over all balls in with sides parallel to the axes. It is well known that , since contains unbounded functions such as . Another interesting result of Xiao in [5] is that the weighted Hardy operator is bounded on , if and only if Moreover, In recent years, several authors have extended and considered the action of weighted Hardy operators on various spaces. We mention here, the work of Rim and Lee [7], Kuang [8], KruliΔ et al. [9], Tang and Zhai [10], Tang and Zhou [11]. The main purpose of this paper is to make precise the mapping properties of weighted Hardy operators on the central Morrey and -central BMO spaces. The study of the central Morrey and -central BMO spaces are traced to the work of Wiener [12, 13] on describing the behavior of a function at the infinity. The conditions he considered are related to appropriate weighted spaces. Beurling [14] extended this idea and defined a pair of dual Banach spaces and , where . To be precise, is a Banach algebra with respect to the convolution, expressed as a union of certain weighted spaces. The space is expressed as the intersection of the corresponding weighted spaces. Later, Feichtinger [15] observed that the space can be equivalently described by the set of all locally -integrable functions satisfying that where is the characteristic function of the unit ball , is the characteristic function of the annulus , , and is the norm in . By duality, the space , called Beurling algebra now, can be equivalently described by the set of all locally -integrable functions satisfying that Based on these, Chen and Lau [16] and GarcΓ­a-Cuerva [17] introduced an atomic space associated with the Beurling algebra and identified its dual as the space , which is defined to be the space of all locally -integrable functions satisfying that By replacing with in (1.3) and (1.6), we obtain the spaces and , which are the homogeneous version of the spaces and , and the dual space of is just . Related to these homogeneous spaces, in [18, 19 ], Lu and Yang introduced the homogeneous counterparts of and , denoted by and , respectively. These spaces were originally denoted by and in [18, 19]. Recall that the space is defined to be the space of all locally -integrable functions satisfying that It was also proved by Lu and Yang that the dual space of is just . In 2000, Alvarez et al. [20] introduced the following -central bounded mean oscillation spaces and the central Morrey spaces, respectively. Definition 1.2. Let and . The central Morrey space is defined to be the space of all locally -integrable functions satisfying that Definition 1.3. Let and . A function is said to belong to the -central bounded mean oscillation space if We remark that if two functions which differ by a constant are regarded as a function in the space , then becomes a Banach space. Apparently, (1.19) is equivalent to the following condition: Remark 1.4. is a Banach space which is continuously included in . One can easily check if , , , and if . Similar to the classical Morrey space, we only consider the case in this paper. Remark 1.5. The space when is just the space . It is easy to see that for all . When , then the space is just the central version of the Lipschitz space . Remark 1.6. If , then by HΓΆlder's inequality, we know that for , and for . For more recent generalization about central Morrey and Campanato space, we refer to [21]. We also remark that in recent years, there exists an increasing interest in the study of Morrey-type spaces and the related theory of operators; see, for example, [22]. In this paper, we give sufficient and necessary conditions on the weight which ensure that the corresponding weighted Hardy operator is bounded on and . Meanwhile, we can work out the corresponding operator norms. Moreover, we establish a sufficient and necessary condition of the weight functions so that commutators of weighted Hardy operators (with symbols in central Campanato-type space) are bounded on the central Morrey-type spaces. These results are further used to prove sharp estimates of some inequalities due to Weyl and CesΓ ro. 2. Sharp Estimates of Let us state our main results. Theorem 2.1. Let and . Then is a bounded operator on if and only if Moreover, when (2.1) holds, the operator norm of on is given by Proof. Suppose (2.1) holds. For any , using Minkowski's inequality, we have It implies that Thus maps into itself. The proof of the converse comes from a standard calculation. If is a bounded operator on , take Then where is the volume of the unit ball in . We have (2.8) together with (2.4) yields the desired result. Corollary 2.2. (i) For , , and , (ii) For and , Next, we state the corresponding conclusion for the space . Theorem 2.3. Let and . Then is a bounded operator on if and only if (2.1) holds. Moreover, when (2.1) holds, the operator norm of on is given by Proof. Suppose (2.1) holds. If , then for any and ball , using Fubini's theorem, we see that Using Minkowski's inequality, we have which implies is bounded on and Conversely, if is a bounded operator on , take where and denote the right and the left halves of , separated by the hyperplane , and is the first coordinate of . Thus, by a standard calculation, we see that and From this formula we have The proof is complete. Corollary 2.4. (i) For and , we have (ii) For , we have . 3. A Characterization of Weight Functions via Commutators A well-known result of Coifman et al. [23] states that the commutator generated by CalderΓ³n-Zygmund singular integrals and BMO functions is bounded on , . Recently, we introduced the commutators of weighted Hardy operators and BMO functions introduced in [24]. For any locally integrable function on and integrable function , the commutator of the weighted Hardy operator is defined by It is easy to see that when and satisfies the condition (1.6), then the commutator is bounded on , . An interesting choice of is that it belongs to the class of . When symbols , the condition (1.6) on weight functions can not ensure the boundedness of on . Via controlling by the Hardy-Littlewood maximal operators instead of sharp maximal functions, we [24] established a sufficient and necessary (more stronger) condition on weight functions which ensures that is bounded on , where . More recently, Fu and Lu [25] studied the boundedness of on the classical Morrey spaces. Tang et al. [26] and Tang and Zhou [11] obtained the corresponding result on some Herz-type and Triebel-Lizorkin-type spaces. We also refer to the work [27] for more general -linear Hardy operators. Similar to [24], we are devoted to the construction of a sufficient and necessary condition (which is stronger than in Theorem 2.1) on the weight functions so that commutators of weighted Hardy operators (with symbols in -central BMO space) are bounded on the central Morrey spaces. For the boundedness of commutators with symbols in central BMO spaces, we refer the interested reader to [28, 29] and Mo [30]. Theorem 3.1. Let , . Assume further that is a positive integrable function on . Then, the commutator is bounded from to , for any , if and only if Remark 3.2. The condition (2.1), that is, , is weaker than . In fact, let By , we know that implies . But the following example shows that does not imply . For , if we take and , where , then and . Proof. (i) Let . Denote by and by . Assume . We get By the Minkowski inequality and the HΓΆlder inequality (with ), we have Similarly, we have Now we estimate , We see that Therefore, Combining the estimates of , , and , we conclude that is bounded from to . Conversely, assume that for any , is bounded from to . We need to show that . Since , we will prove that and , respectively. To this end, let for all . Then it follows from Remark 1.5 that , and Let . Then For , we obtain So, Therefore, we have On the other hand, since and are integrable functions on . Combining the above estimates, we get Combining (3.18) and (3.16), we then obtain the desired result. Notice that comparing with Theorems 2.1 and 2.3, we need a priori assumption in Theorem 3.1 that is integrable on . However, by Remark 1.1, this assumption is reasonable in some sense. When with , namely, is a central -Lipschitz function, we have the following conclusion. The proof is similar to that of Theorem 3.1. We give some details here. Theorem 3.3. Let , , , , , and . If (2.1) holds true, then for all , the corresponding commutator is bounded from to . Proof. Let , , and be as in the proof of Theorem 3.1. Then, following the estimates of and in the proof of Theorem 3.1, we see that For , we also have Since now , we see that Therefore, Combining the estimates of , , and , we conclude that is bounded from to . Different from Theorem 3.1, it is still unknown whether the condition (2.1) in Theorem 3.3 is sharp. That is, whether the fact that is bounded from to for all induces (2.1)? More general, we may extend the previous results to the th order commutator of the weighted Hardy operator. Given and a vector , we define the higher order commutator of the weighted Hardy operator as When , we understand that . Notice that if , then . Using the method in the proof of Theorems 3.1 and 3.3, we can also get the following Theorem 3.4. For the sake of convenience, we give the sketch of the proof of Theorem 3.4(i) here. Theorem 3.4. Let , , , , , , and . (i) Assume further that is a positive integrable function on . The commutator is bounded from to , for any , if and only if (ii) Let and . If (2.1) holds true, then the corresponding commutator is bounded from to . Proof. Let . Denote by and by . Assume . We get Then, applying the Minkowski inequality and the HΓΆlder inequality (with ), and repeating the arguments in the proof of Theorem 3.1, is bounded from to for any , provided Conversely, assume that is bounded from to for any . We choose with for all and . Then . Repeating the argument in the proof of Theorem 3.1 then yields the desired conclusion. We point out that, it is still unknown whether the condition (2.1) in Theorem 3.4(ii) is sharp. 4. Adjoint Operators and Related Results In this section, we focus on the corresponding results for the adjoint operators of weighted Hardy operators. Recall that the weighted CesΓ ro operator is defined by If , , and , then is reduced to , where is a variant of Weyl integral operator and defined by for all . When and , is the classical CesΓ ro operator: It was pointed out in [5] that the weighted Hardy operator and the weighted CesΓ ro operator are adjoint mutually, namely, for all admissible pairs and . Since and are a pair of dual Banach spaces, it follows from Theorem 2.1 the following. Theorem 4.1. Let . Then is bounded on if and only if Moreover, when (4.5) holds, the operator norm of on is given by Corollary 4.2. (i) For and , (ii) For , we have Since the dual space of is isomorphic to (see [18, 19]), Theorem 2.3 implies the following result. Theorem 4.3. Let . Then is a bounded operator on if and only if (4.5) holds. Moreover, when (4.5) holds, the operator norm of on is given by Corollary 4.4. For , we have Following the idea in Section 3, we define the higher order commutator of the weighted CesΓ ro operator as When , is understood as . Notice that if , then . Similar to the proofs of Theorems 3.1 and 3.3, we have the following result. Theorem 4.5. Let , , , , , , and . (i) Assume further that is a positive integrable function on . The commutator is bounded from to , for any , if and only if (ii) Let and . Then the corresponding commutator is bounded from to , provided that We conclude this paper with some comments on the discrete version of the weighted Hardy and CesΓ ro operators. Let be the set of all nonnegative integers and denote the set . Let now be a nonnegative function defined on and be a complex-valued measurable function on . The discrete weighted Hardy operator is defined by and the corresponding discrete weighted CesΓ ro operator is defined by setting, for all , We remark that, by the same argument as above with slight modifications, all the results related to the operators and in Sections 1β 4 are also true for their discrete versions and . This work is partially supported by the Laboratory of Mathematics and Complex Systems, Ministry of Education of China and the National Natural Science Foundation (Grant nos. 10901076, 11101038, 11171345, and 10931001). 1. K. B. Oldham and J. Spanier, The Fractional Calculus, Academic Press, New York, NY, USA, 1974. 2. P. L. Butzer and U. Westphal, β An introduction to fractional calculus,β in Fractional Calculus, Applications in Physics, H. Hilfer, Ed., pp. 1β 85, World Scientific, Singapore, 2000. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 3. E. H. Lieb, β Sharp constants in the Hardy-Littlewood-Sobolev and related inequalities,β Annals of Mathematics, vol. 118, no. 2, pp. 349β 374, 1983. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 4. C. Carton-Lebrun and M. Fosset, β Moyennes et quotients de Taylor dans BMO,β Bulletin de la Société Royale des Sciences de Liège, vol. 53, no. 2, pp. 85β 87, 1984. View at Zentralblatt MATH 5. J. Xiao, β ${L}^{p}$ and BMO bounds of weighted Hardy-Littlewood averages,β Journal of Mathematical Analysis and Applications, vol. 262, pp. 660β 666, 2001. 6. G. H. Hardy, J. E. Littlewood, and G. Pólya, Inequalities, Cambridge University Press, London, UK, 2nd edition, 1952. 7. K. S. Rim and J. Lee, β Estimates of weighted Hardy-Littlewood averages on the $p$-adic vector space,β Journal of Mathematical Analysis and Applications, vol. 324, no. 2, pp. 1470β 1477, 2006. View at Publisher Β· View at Google Scholar 8. J. Kuang, β The norm inequalities for the weighted Cesaro mean operators,β Computers & Mathematics with Applications, vol. 56, no. 10, pp. 2588β 2595, 2008. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 9. K. Krulić, J. Pečarić, and D. Pokaz, β Boas-type inequalities via superquadratic functions,β Journal of Mathematical Inequalities, vol. 5, no. 2, pp. 275β 286, 2011. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 10. C. Tang and Z. Zhai, β Generalized Poincaré embeddings and weighted Hardy operator on ${Q}_{p}^{\alpha ,q}$ spaces,β Journal of Mathematical Analysis and Applications, vol. 371, no. 2, pp. 665β 676, 2010. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 11. C. Tang and R. Zhou, β Boundedness of weighted Hardy operator and its adjoint on Triebel-Lizorkin-type spaces,β Journal of Function Spaces and Applications, vol. 2012, Article ID 610649, 9 pages, 2012. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 12. N. Wiener, β Generalized harmonic analysis,β Acta Mathematica, vol. 55, no. 1, pp. 117β 258, 1930. View at Publisher Β· View at Google Scholar 13. N. Wiener, β Tauberian theorems,β Annals of Mathematics, vol. 33, no. 1, pp. 1β 100, 1932. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 14. A. Beurling, β Construction and analysis of some convolution algebras,β Annales de l'Institut Fourier, vol. 14, pp. 1β 32, 1964. View at Zentralblatt MATH 15. H. Feichtinger, β An elementary approach to Wiener's third Tauberian theorem on Euclidean n-space,β in Proceedings of the Symposia Mathematica, vol. 29, Academic Press, Cortona, Italy, 1987. 16. Y. Z. Chen and K.-S. Lau, β Some new classes of Hardy spaces,β Journal of Functional Analysis, vol. 84, no. 2, pp. 255β 278, 1989. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 17. J. García-Cuerva, β Hardy spaces and Beurling algebras,β Journal of the London Mathematical Society, vol. 39, no. 3, pp. 499β 513, 1989. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 18. S. Z. Lu and D. C. Yang, β The Littlewood-Paley function and φ-transform characterizations of a new Hardy space $H{K}_{2}$ associated with the Herz space,β Studia Mathematica, vol. 101, no. 3, pp. 285β 298, 1992. 19. S. Lu and D. Yang, β The central BMO spaces and Littlewood-Paley operators,β Approximation Theory and its Applications. New Series, vol. 11, no. 3, pp. 72β 94, 1995. View at Zentralblatt MATH 20. J. Alvarez, M. Guzmán-Partida, and J. Lakey, β Spaces of bounded $\lambda$-central mean oscillation, Morrey spaces, and $\lambda$-central Carleson measures,β Collectanea Mathematica, vol. 51, no. 1, pp. 1β 47, 2000. 21. Y. Komori-Furuya, K. Matsuoka, E. Nakai, and Y. Sawano, β Integral operators on ${B}_{\sigma }$-Morrey-Campanato spaces,β Revista Matemática Complutense. In press. View at Publisher Β· View at Google Scholar 22. V. S. Guliyev, S. S. Aliyev, and T. Karaman, β Boundedness of a class of sublinear operators and their commutators on generalized Morrey spaces,β Abstract and Applied Analysis, vol. 2011, Article ID 356041, 18 pages, 2011. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 23. R. R. Coifman, R. Rochberg, and G. Weiss, β Factorization theorems for Hardy spaces in several variables,β Annals of Mathematics, vol. 103, no. 3, pp. 611β 635, 1976. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 24. Z. W. Fu, Z. G. Liu, and S. Z. Lu, β Commutators of weighted Hardy operators,β Proceedings of the American Mathematical Society, vol. 137, no. 10, pp. 3319β 3328, 2009. View at Publisher Β· View at Google Scholar 25. Z. Fu and S. Lu, β Weighted Hardy operators and commutators on Morrey spaces,β Frontiers of Mathematics in China, vol. 5, no. 3, pp. 531β 539, 2010. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 26. C. Tang, F. Xue, and Y. Zhou, β Commutators of weighted Hardy operators on Herz-type spaces,β Annales Polonici Mathematici, vol. 101, no. 3, pp. 267β 273, 2011. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 27. Z. W. Fu, L. Grafakos, S. Z. Lu, and F. Y. Zhao, β Sharp bounds of m-linear Hardy operators and Hilbert operators,β Houston Journal of Mathematics, vol. 38, pp. 225β 244, 2012. 28. Z. W. Fu, Y. Lin, and S. Z. Lu, β $\lambda$-central BMO estimates for commutators of singular integral operators with rough kernels,β Acta Mathematica Sinica, vol. 24, no. 3, pp. 373β 386, 2008. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH 29. Y. Komori, β Notes on singular integrals on some inhomogeneous Herz spaces,β Taiwanese Journal of Mathematics, vol. 8, no. 3, pp. 547β 556, 2004. View at Zentralblatt MATH 30. H. Mo, β Commutators of generalized Hardy operators on homogeneous groups,β Acta Mathematica Scientia, vol. 30, no. 3, pp. 897β 906, 2010. View at Publisher Β· View at Google Scholar Β· View at Zentralblatt MATH
{"url":"http://www.hindawi.com/journals/aaa/2012/780132/","timestamp":"2014-04-19T18:30:47Z","content_type":null,"content_length":"834102","record_id":"<urn:uuid:7d3a0228-7753-4e40-bff6-583b8d7332f9>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
Boxing News 24 Forum - View Single Post - Manny Pacquiao is a More accurate Power puncher than Floyd Mayweather. Re: Manny Pacquiao is a More accurate Power puncher than Floyd Mayweather. People you have to be careful with statistics. Numbers don't lie but they can be manipulated if not presented properly. I'll give you an example: I saw a video of a guy comparing Kobe and LeBron's career stats. One stat showed that Kobe had exactly twice as many games shooting under a certain percentage (I think it was under 40%) as LeBron The thing he failed to mention was that, at the time, Kobe had been playing for exactly twice as many seasons as Lebron had. Leaving out information like that is manipulative. My point is, you be need to be more careful with statistics. If you don't include the right information (or in this case don't even compute the numbers correctly), then your stats are worthless.
{"url":"http://www.boxingforum24.com/showpost.php?p=13860389&postcount=61","timestamp":"2014-04-20T19:02:25Z","content_type":null,"content_length":"15164","record_id":"<urn:uuid:7ae2feac-04f3-45ce-a121-450993ad1207>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
User user13289 bio website visits member for 3 years, 1 month seen Sep 8 '12 at 5:05 stats profile views 576 Apr C^{2} estimates for elliptic equations 6 comment One of the reasons I asked the equation is that for the bordline case there is an important example: Ω is a unit disc, $|Du|div(\frac{Du}{|Du|})=1$ has solution $u=\frac{1}{2}|x|2$, and the largest eigenvalue of the coefficients is 1, smallest one is 0 Apr C^{2} estimates for elliptic equations 6 comment @Deane, that is a close example, but the bad thing is that it dosent satisfy the boundary condition, at the point $(1, \frac{1}{2})$ it doesnt equal to 0.. Apr C^{2} estimates for elliptic equations 6 comment and I am most interested in the 2-d case! Apr C^{2} estimates for elliptic equations 6 comment 1-d case, in my question, the condition largest eigenvalue=1 namely a=1... which is trivial. The point is can we get some estimate which is stronger than the estimate $|D^{2}u|\leq |\ Apr C^{2} estimates for elliptic equations 6 revised added 61 characters in body; added 11 characters in body Apr C^{2} estimates for elliptic equations 6 comment @Deane, my bad, I forgot to write the condition that $u$ is convex... 6 awarded Commentator Apr C^{2} estimates for elliptic equations 6 comment and of couse there is a simple estimate that $|D^{2}u|$ bounded by $|\frac{f}{\beta}|$ Apr C^{2} estimates for elliptic equations 6 comment Schauder estimates requires $C$ depends on $C^{\alpha}$ modular of the coefficients and the lower bound of $|\beta(x)|$, which is not enough for my question. the key point is that I need some estimate which is independent of the ratio between $\alpha$ and $\beta$ Apr C^{2} estimates for elliptic equations 5 revised deleted 27 characters in body 5 asked C^{2} estimates for elliptic equations 4 accepted a question about Lp norm of curvature on convex curves Mar a question about Lp norm of curvature on convex curves 4 comment Thank you, sergei, thats promising! Mar a question about Lp norm of curvature on convex curves 4 comment I think alvarezpaiya's 1st comments make sense. for Sergei's example, when p=0 the inequality in my question obviously right, and in fact it is a strict inequality. Then notice that the curves are strictly convex, so at least for when p very colose to 0, for sergei's example, the inequality still holds. 4 asked a question about Lp norm of curvature on convex curves 12 asked Is there such a priori estimates for mean curvature type equation? Apr A question about the number of intersections of lines in $R^{3}$ 10 comment It looks like "no five lines in a quadric" but not exactly same. n lines in a (singly) ruled surface of degree $n^{\frac{1}{2}}$ is a situation appeared if one try to prove the up bound $n^{\frac{3}{2}}$, but still the full strength of that condition will not be used... Apr A question about the number of intersections of lines in $R^{3}$ 10 comment The best summary of Guth-Katz paper I can think is the link in JSE's answer below, for unit distance problem, one can find reference in the reference of cs.tau.ac.il/~michas/pst5.pdf. 10 awarded Critic Apr A question about the number of intersections of lines in $R^{3}$ 10 comment That example can not satisfy the constions in my question, in fact "for fixed 2 lines, no 3 lines intersect these 2 lines at the same time" is the most important condition in the question, it is not easy for me to think some example with many intersections but satify that condition....
{"url":"http://mathoverflow.net/users/13289/user13289?tab=activity","timestamp":"2014-04-19T07:57:34Z","content_type":null,"content_length":"45502","record_id":"<urn:uuid:0eb9d428-1a8d-4272-8bc4-4b3fe3504248>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: Optimal Algorithms for the Channel-Assignment Problem on a Reconfigurable Array of Processors with Wider Bus Networks November 2002 (vol. 13 no. 11) pp. 1124-1138 ASCII Text x Shi-Jinn Horng, Horng-Ren Tsai, Yi Pan, Jennifer Seitzer, "Optimal Algorithms for the Channel-Assignment Problem on a Reconfigurable Array of Processors with Wider Bus Networks," IEEE Transactions on Parallel and Distributed Systems, vol. 13, no. 11, pp. 1124-1138, November, 2002. BibTex x @article{ 10.1109/TPDS.2002.1058096, author = {Shi-Jinn Horng and Horng-Ren Tsai and Yi Pan and Jennifer Seitzer}, title = {Optimal Algorithms for the Channel-Assignment Problem on a Reconfigurable Array of Processors with Wider Bus Networks}, journal ={IEEE Transactions on Parallel and Distributed Systems}, volume = {13}, number = {11}, issn = {1045-9219}, year = {2002}, pages = {1124-1138}, doi = {http://doi.ieeecomputersociety.org/10.1109/TPDS.2002.1058096}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Parallel and Distributed Systems TI - Optimal Algorithms for the Channel-Assignment Problem on a Reconfigurable Array of Processors with Wider Bus Networks IS - 11 SN - 1045-9219 EPD - 1124-1138 A1 - Shi-Jinn Horng, A1 - Horng-Ren Tsai, A1 - Yi Pan, A1 - Jennifer Seitzer, PY - 2002 KW - Channel-assignment problem KW - minimum coloring problem KW - interval graph KW - list ranking KW - integer sorting KW - parallel algorithm KW - reconfigurable array of processors with wider bus networks. VL - 13 JA - IEEE Transactions on Parallel and Distributed Systems ER - Abstract—The computation model on which the algorithms are developed is the reconfigurable array of processors with wider bus networks (abbreviated to RAPWBN). The main difference between the RAPWBN model and other existing reconfigurable parallel processing systems is that the bus width of each network is bounded within the range \big. [2, \lceil \sqrt{N} \rceil]\bigr.. Such a strategy not only saves the silicon area of the chip as well as increases the computational power enormously, but the strategy also allows the execution speed of the proposed algorithms to be tuned by the bus bandwidth. To demonstrate the computational power of the RAPWBN, the channel-assignment problem is derived in this paper. For the channel-assignment problem with \big. N\bigr. pairs of components, we first design an \big. O(T + \lceil {N\over w} \rceil)\bigr. time parallel algorithm using \big. 2N\bigr. processors with a \big. 2N{\hbox{-}}\rm row\bigr. by \big. 2N{\hbox{-}}\rm column\bigr. bus network, where the bus width of each bus network is \big. w{\hbox{-}}\rm bit\bigr. for \big. 2 \leq w \leq \lceil \sqrt{N} \ \rceil\bigr. and \big. T={\lfloor \log _{w} N \rfloor}+1\bigr.. By tuning the bus bandwidth to the natural \big. \log N{\hbox{-}}\rm bit\bigr. and the extended \big. N^{1/c}{\hbox{-}}\rm bit\bigr. (\big. N^{1/c} > \log N\bigr.) for any constant \big. c\bigr. and \big. c \ geq 1\bigr., two more results which run in \big. O(\log N /\log \log N)\bigr. and \big. O(1)\bigr. time, respectively, are also derived. When compared to the algorithms proposed by Olariu et al. [17] and Lin [14], it is shown that our algorithm runs in the equivalent time complexity while significantly reducing the number of processors to \big. O(N)\bigr.. [1] E. Dekel and S. Sahni, “Parallel Scheduling Algorithms,” Operations Research, vol. 31, pp. 24-49, 1983. [2] M. Feldman, S. Esener, C. Guest, and S. Lee, “Comparison Between Optical and Electrical Interconnects Based on Power and Speed Considerations,” Applied Optics, vol. 27, pp. 1742-1751, 1988. [3] T.Y. Feng, “A Survey of Interconnection Networks,” IEEE Computing Magazine, pp. 12-27, 1981. [4] M.C. Golumbic, Algorithmic Graph Theory and Perfect Graphs.New York, Academic Press, 1980. [5] U.I. Gupta, D.T. Lee, and J.Y.T. Leung, “An Optimal Solution for the Channel-Assignment Problem,” IEEE Trans. Computers, vol. 28, pp. 807-810, 1979. [6] K. Hwang, Advanced Computer Architecture: Parallelism, Scalability, Programmability. McGraw-Hill, 1993. [7] K. Hwang, P.S. Tseng, and D. Kim, “An Orthogonal Multiprocessor for Parallel Scientific Computations,” IEEE Trans. Computers, vol. 38, pp. 47-61, 1989. [8] T.W. Kao, S.J. Horng, Y.L. Wang, and H.R. Tsai, “Designing Efficient Parallel Algorithms on a CRAP,” IEEE Trans. Parallel and Distributed Systems, vol. 6, pp. 554-559, 1995. [9] T.W. Kao and S.J. Horng, “The Power of List Ranking on a Reconfigurable Array of Processors with Wider Bus Networks,” The Australian Computer J., vol. 28, pp. 138-148, 1996. [10] D.M. Kuchta, J. Crow, P. Pepeljugoski, K. Stawiasz, J. Trewhella, D. Booth, W. Nation, C. DeCusatis, and A. Muszynski, “Low Cost 10 Gigabit/s Optical Interconnects for Parallel processing,” Proc. Fifth Int'l Conf. Massively Parallel Processing, pp. 210-215, 1998. [11] S.S. Lee, S.J. Horng, and H.R. Tsai, “Entropy Thresholding and Its Parallel Algorithm on a Reconfigurable Array of Processors with Wider Bus Networks,” IEEE Trans. Image Processing, vol. 8, pp. 1229-1242, 1999. [12] H. Li and M. Maresca, "Polymorphic-Torus Architecture for Computer Vision," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 11, no. 3, pp. 233-243, Mar. 1989. [13] K. Li, Y. Pan, and S.-Q. Zheng, “Fast and Processor Efficient Parallel Matrix Multiplication Algorithms on a Linear Array with Reconfigurable Pipelined Bus System,” IEEE Trans. Parallel and Distributed Systems, vol. 9, no. 8, pp. 705-720, Aug. 1998. [14] S.S. Lin, “Constant-Time Algorithms for the Channel Assignment Problem on Processor Arrays with Reconfigurable Bus Systems,” IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems, vol. 13, pp. 884-890, 1994. [15] M. Maresca and H. Li,“Connection autonomy in SIMD computers: a VLSI implementation,”J. Parallel Distribut. Comput., vol. 7, pp. 302–320, 1989. [16] R. Miller,V.K. Prasanna Kumar,D.I. Reisis, and Q.F. Stout,“Parallel computations on reconfigurable meshes,” IEEE Trans. on Computers, pp. 678-692, June 1993. [17] S. Olariu, J.L. Schwing, and J. Zhang, “A Constant-Time Channel-Assignment Algorithm on Reconfigurable Meshes,” BIT, vol. 32, pp. 586-597, 1993. [18] B.T. Preas and M.J. Lorenzetti, Physical Design Automation of VLSI Sysyem. Menlo Park, Calif.: Benjamin/Cummings, 1988. [19] D.A. Pucknell and K. Eshraghian, Basic VLSI Design. pp. 134-138, Prentice-Hall, 1994. [20] C. Qiao, R. Melhem, D. Chiarulli, and S. Levitan, "Dynamic Reconfiguration of Optically Interconnected Networks with Time Division Multiplexing," J. Parallel and Distributed Computing, vol. 22, no. 8, pp. 268-278, Aug. 1994. [21] R. Raghavan and S. Sahni, “Single Row Routing,” IEEE Trans. Computers, vol. 32, pp. 209-220, 1983. [22] S. Sahni, “Data Manipulation on the Distributed Memory Bus Computer,” Parallel Processing Letters, vol. 5, pp. 3-14, 1995. [23] J.E. Savage and M.G. Wloka, “A Parallel Algorithm for Channel Routing,” Lecture Notes in Computer Science, vol. 344, pp. 288-303, 1989. [24] A. Schuster and Y. Ben-Asher, “Algorithms and Optic Implementation for Reconfigurable Networks,” Proc. Fifth Jerusalem Conf. Information Technology, pp. 225-235, 1990. [25] D.B. Shu and J.G. Nash, The Gated Interconnection Network for Dynamic Programming. S.K. Tewsburg et al. eds., New York: Concurrent Computing, Plenum, 1988. [26] A.P. Sprague and K.H. Kulkarni, “Optimal Parallel Algorithms for Finding Cut Vertices and Bridges of Interval Graphs,” Information Processing Letters, vol. 42, pp. 229-234, 1992. [27] Z. Syed, A.E. Gamal, and M.A. Breuer, “On Routing for Custom Integrated Circuits,” Proc. 19th Design Automation Conf., pp. 887-893, 1982. [28] J.L. Trahan, R. Vaidyanathan, and C.P. Subbaraman, “Constant Time Graph Algorithms on the Reconfigurable Multiple Bus Machine,” J. Parallel and Distributed Computing, vol. 46, pp. 1-14, 1997. [29] S. Tsukiyama, E.S. Kuh, and I. Shirakawa, "An Algorithm for Single-Row Routing with Prescribed Street Congestions," IEEE Trans. Circuits and Systems, vol. 27, pp. 765-771, 1980. [30] B. F. Wang and G. H. Chen,“Constant time algorithms for the transitive closure problem and some related graph problems on processor arrays with reconfigurable bus systems,” IEEE Trans. on Parallel and Distributed Systems, vol. 1, no. 4, pp. 500-507, 1991. [31] J.S. Wang and R.C.T. Lee, “An Efficient Channel Routing Algorithm to Yield an Optimal Solution,” IEEE Trans. Computers, vol. 39, pp. 957-962, 1990. [32] C.H. Wu, S.J. Horng, and H.R. Tsai, “Efficient Parallel Algorithms for Hierarchical Clustering on Arrays with Reconfigurable Optical Buses,” J. Parallel and Distributed Computing, vol. 60, pp. 1137-1153, 2000. [33] M.S. Yu, C.L. Chen, and R.C.T. Lee, “An Optimal Parallel Algorithm for Minimum Coloring of Intervals,” Proc. Int'l Conf. Parallel Processing, vol. III, pp. 162-168, 1990. [34] M.S. Yu and C.H. Yang, “A Simple Optimal Algorithm for the Minimum Coloring Problem on Interval Graphs,” Information Processing Letters, vol. 48, pp. 48-51, 1993. Index Terms: Channel-assignment problem, minimum coloring problem, interval graph, list ranking, integer sorting, parallel algorithm, reconfigurable array of processors with wider bus networks. Shi-Jinn Horng, Horng-Ren Tsai, Yi Pan, Jennifer Seitzer, "Optimal Algorithms for the Channel-Assignment Problem on a Reconfigurable Array of Processors with Wider Bus Networks," IEEE Transactions on Parallel and Distributed Systems, vol. 13, no. 11, pp. 1124-1138, Nov. 2002, doi:10.1109/TPDS.2002.1058096 Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/td/2002/11/l1124-abs.html","timestamp":"2014-04-18T22:13:19Z","content_type":null,"content_length":"66809","record_id":"<urn:uuid:058df139-cb4c-48fc-8995-cff7fdbc3a99>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Concord, CA ACT Tutor Find a Concord, CA ACT Tutor Hi, my name is Dave. I have tutored hundreds of students. Although many came to me afraid of math, most of them left with a well-earned sense of confidence and mastery. 14 Subjects: including ACT Math, statistics, geometry, GRE ...I was extensively trained at Score and continue to apply my knowledge with all my students, specifically devising plans for students with learning disabilities and those who prepare to take academic tests such as TOEFL. Moreover, I teach study skills to my ESL students to maximize their study ha... 73 Subjects: including ACT Math, reading, Spanish, writing ...Teaching these study skills to my students has helped them dramatically increase their grades. Feel free to check out my feedback for positive testimonials. I am an expert on math standardized testing, as stated in my reviews from previous students. 59 Subjects: including ACT Math, chemistry, reading, physics ...Bruce Barbee at UCLA and also a workshop leader and presenter at Covel Commons and the UCLA Career Center. My Previous SAT/AP Examination Standings: I took many AP exams during high school, achieving the AP Scholar with Distinction award by the end of my junior year. Some of the exams that I h... 45 Subjects: including ACT Math, English, reading, writing ...My freshman year of college, I took the ASVAB and scored in the 99th percentile on the AFQT. How do I achieve these scores? By sticking to a simple routine that I practiced until it became second nature. 18 Subjects: including ACT Math, reading, writing, algebra 1 Related Concord, CA Tutors Concord, CA Accounting Tutors Concord, CA ACT Tutors Concord, CA Algebra Tutors Concord, CA Algebra 2 Tutors Concord, CA Calculus Tutors Concord, CA Geometry Tutors Concord, CA Math Tutors Concord, CA Prealgebra Tutors Concord, CA Precalculus Tutors Concord, CA SAT Tutors Concord, CA SAT Math Tutors Concord, CA Science Tutors Concord, CA Statistics Tutors Concord, CA Trigonometry Tutors Nearby Cities With ACT Tutor Alameda ACT Tutors Antioch, CA ACT Tutors Berkeley, CA ACT Tutors Danville, CA ACT Tutors Hayward, CA ACT Tutors Lafayette, CA ACT Tutors Martinez ACT Tutors Oakland, CA ACT Tutors Piedmont, CA ACT Tutors Pittsburg, CA ACT Tutors Pleasant Hill, CA ACT Tutors Richmond, CA ACT Tutors San Francisco ACT Tutors Vallejo ACT Tutors Walnut Creek, CA ACT Tutors
{"url":"http://www.purplemath.com/Concord_CA_ACT_tutors.php","timestamp":"2014-04-16T04:23:38Z","content_type":null,"content_length":"23456","record_id":"<urn:uuid:b32b61cc-8417-4c1b-83bf-74ef3156f396>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] A differential equation July 20th 2008, 08:13 PM #1 [SOLVED] A differential equation I must do something wrong. I have to solve $f'(t)=t^2f^2(t)$ with the initial condition $f(0)=\frac{1}{2}$. So I wrote $\frac{df}{dt}=t^2f^2(t) \Leftrightarrow \frac{df}{f^2(t)}=t^2dt \Leftrightarrow \int \frac{df}{f^2(t)dt}=$$\int t^2 dt \Leftrightarrow -\frac{1}{f(t)}=\frac{x^3}{3}+C \Leftrightarrow f(t)=-\frac{3}{t^3}+C$. I cleary see that $f$ in $0$ don't exist, so how could it be equal to $\frac{1}{2}$, since whatever value of $C$ won't satisfy the impossible. I must do something wrong. I have to solve $f'(t)=t^2f^2(t)$ with the initial condition $f(0)=\frac{1}{2}$. So I wrote $\frac{df}{dt}=t^2f^2(t) \Leftrightarrow \frac{df}{f^2(t)}=t^2dt \Leftrightarrow \int \frac{df}{f^2(t)dt}=$$\int t^2 dt \Leftrightarrow -\frac{1}{f(t)}=\frac{x^3}{3}+C \Leftrightarrow f(t)=-\frac{3}{t^3}+C$. I cleary see that $f$ in $0$ don't exist, so how could it be equal to $\frac{1}{2}$, since whatever value of $C$ won't satisfy the impossible. from $-\frac{1}{f(t)}=\frac{x^3}{3}+C$: we have $f(t) = -\frac{1}{\frac{t^3}{3}+C}$ this is different from $f(t)=-\frac{3}{t^3}+C$ July 20th 2008, 08:16 PM #2 Global Moderator Nov 2005 New York City July 20th 2008, 08:19 PM #3
{"url":"http://mathhelpforum.com/differential-equations/44169-solved-differential-equation.html","timestamp":"2014-04-20T18:12:49Z","content_type":null,"content_length":"41925","record_id":"<urn:uuid:dcfce531-a7fe-4c4e-a7cc-264f22ceb66b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
normal distribution April 18th 2009, 03:45 PM #1 Junior Member Nov 2007 normal distribution can anyone help with this problem please: the problem involves oranges n the masses in grams the mass X grams of a particular variety of orange is normally distributed with mean 205g and standard deviation 25g if the smallest 30% of oranges are graded 'small', how do you determine the maximum weight of an orange graded 'small'? thanks, n apologies as that is all the information i have. i mainly looking for help how to determine it rather than the answer please. can anyone help with this problem please: the problem involves oranges n the masses in grams the mass X grams of a particular variety of orange is normally distributed with mean 205g and standard deviation 25g if the smallest 30% of oranges are graded 'small', how do you determine the maximum weight of an orange graded 'small'? thanks, n apologies as that is all the information i have. i mainly looking for help how to determine it rather than the answer please. Solve for $a$: $\Pr(X \leq a) = 0.3$. April 18th 2009, 04:03 PM #2
{"url":"http://mathhelpforum.com/statistics/84343-normal-distribution.html","timestamp":"2014-04-17T12:36:48Z","content_type":null,"content_length":"34605","record_id":"<urn:uuid:c46a47c1-e8a4-417a-b342-49351fc3f999>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem 162. High school cafeteria Given an input vector of positive integers, return a row vector with the primes first (in increasing order) and the composites next (also in increasing order). The number 1 is neither prime nor composite. Put it with the composites for this problem. Problem Comments 4 Comments Show 1 older comment Alfonso Nieto-Castanon on 29 Jan 2012 need to add non-sorted test samples (e.g. [3,2,1])? on 26 Feb 2013 It looks like 1 is being incorrectly sorted as a composite number in the testcases. the cyclist on 27 Feb 2013 I somehow Alfonso's older comment until I saw Tom's newer one. Alfonso: I've added a non-sorted test sample, and had the problem rescored. Tom: 1 is neither prime nor composite. I've added instructions on how to handle that. Ned Gulley on 20 Jun 2013 I had assumed that "in order" meant "in the order provided". If that were the case, then [5 1 3 2 4] would return [5 3 2 1 4]. Maybe you could say you want them sorted rather than in order. Solution Comments 1 Comment on 10 Mar 2012 Maybe a test case with a non-sorted input would be nice
{"url":"http://mathworks.com/matlabcentral/cody/problems/162-high-school-cafeteria","timestamp":"2014-04-16T13:14:55Z","content_type":null,"content_length":"27686","record_id":"<urn:uuid:f0647593-2c6b-4f25-a63f-bfa498eef71e>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
In implementing the Geometry process and content performance indicators, it is expected that students will identify and justify geometric relationships, formally and informally. For example, students will begin with a definition of a figure and from that definition students will be expected to develop a list of conjectured properties of the figure and to justify each conjecture informally or with formal proof. Students will also be expected to list the assumptions that are needed in order to justify each conjectured property and present their findings in an organized manner. The intent of both the process and content performance indicators is to provide a variety of ways for students to acquire and demonstrate mathematical reasoning ability when solving problems. The variety of approaches to verification and proof is what gives curriculum developers and teachers the flexibility to adapt strategies to address these performance indicators in a manner that meets the diverse needs of our students. Local curriculum and local/state assessments must support and allow students to use any mathematically correct method when solving a problem. Throughout this document the performance indicators use the words investigate, explore, discover, conjecture, reasoning, argument, justify, explain, proof, and apply. Each of these terms is an important component in developing a student’s mathematical reasoning ability. It is therefore important that a clear and common definition of these terms be understood. The order of these terms reflects different stages of the reasoning process. Investigate/Explore - Students will be given situations in which they will be asked to look for patterns or relationships between elements within the setting. Discover - Students will make note of possible relationships of perpendicularity, parallelism, congruence, and/or similarity after investigation/exploration. Conjecture - Students will make an overall statement, thought to be true, about the new discovery. Reasoning - Students will engage in a process that leads to knowing something to be true or false. Argument - Students will communicate, in verbal or written form, the reasoning process that leads to a conclusion. A valid argument is the end result of the conjecture/reasoning process. Justify/Explain - Students will provide an argument for a mathematical conjecture. It may be an intuitive argument or a set of examples that support the conjecture. The argument may include, but is not limited to, a written paragraph, measurement using appropriate tools, the use of dynamic software, or a written proof. Proof - Students will present a valid argument, expressed in written form, justified by axioms, definitions, and theorems using properties of perpendicularity, parallelism, congruence, and similarity with polygons and circles. Apply - Students will use a theorem or concept to solve a geometric problem. Students will build new mathematical knowledge through problem solving. G.PS.1 Use a variety of problem solving strategies to understand new mathematical content Students will solve problems that arise in mathematics and in other contexts. G.PS.2 Observe and explain patterns to formulate generalizations and conjectures G.PS.3 Use multiple representations to represent and explain problem situations (e.g., spatial, geometric, verbal, numeric, algebraic, and graphical representations) Students will apply and adapt a variety of appropriate strategies to solve problems. G.PS.4 Construct various types of reasoning, arguments, justifications and methods of proof for problems G.PS.5 Choose an effective approach to solve a problem from a variety of strategies (numeric, graphic, algebraic) G.PS.6 Use a variety of strategies to extend solution methods to other problems G.PS.7 Work in collaboration with others to propose, critique, evaluate, and value alternative approaches to problem solving Students will monitor and reflect on the process of mathematical problem solving. G.PS.8 Determine information required to solve a problem, choose methods for obtaining the information, and define parameters for acceptable solutions G.PS.9 Interpret solutions within the given constraints of a problem G.PS.10 Evaluate the relative efficiency of different representations and solution methods of a problem Students will recognize reasoning and proof as fundamental aspects of mathematics. G.RP.1 Recognize that mathematical ideas can be supported by a variety of strategies G.RP.2 Recognize and verify, where appropriate, geometric relationships of perpendicularity, parallelism, congruence, and similarity, using algebraic strategies Students will make and investigate mathematical conjectures. G.RP.3 Investigate and evaluate conjectures in mathematical terms, using mathematical strategies to reach a conclusion Students will develop and evaluate mathematical arguments and proofs. G.RP.4 Provide correct mathematical arguments in response to other students’ conjectures, reasoning, and arguments G.RP.5 Present correct mathematical arguments in a variety of forms G.RP.6 Evaluate written arguments for validity Students will select and use various types of reasoning and methods of proof. G.RP.7 Construct a proof using a variety of methods (e.g., deductive, analytic, transformational) G.RP.8 Devise ways to verify results or use counterexamples to refute incorrect statements G.RP.9 Apply inductive reasoning in making and supporting mathematical conjectures Students will organize and consolidate their mathematical thinking through communication. G.CM.1 Communicate verbally and in writing a correct, complete, coherent, and clear design (outline) and explanation for the steps used in solving a problem G.CM.2 Use mathematical representations to communicate with appropriate accuracy, including numerical tables, formulas, functions, equations, charts, graphs, and diagrams Students will communicate their mathematical thinking coherently and clearly to peers, teachers, and others. G.CM.3 Present organized mathematical ideas with the use of appropriate standard notations, including the use of symbols and other representations when sharing an idea in verbal and written form G.CM.4 Explain relationships among different representations of a problem G.CM.5 Communicate logical arguments clearly, showing why a result makes sense and why the reasoning is valid G.CM.6 Support or reject arguments or questions raised by others about the correctness of mathematical work Students will analyze and evaluate the mathematical thinking and strategies of others. G.CM.7 Read and listen for logical understanding of mathematical thinking shared by other students G.CM.8 Reflect on strategies of others in relation to one’s own strategy G.CM.9 Formulate mathematical questions that elicit, extend, or challenge strategies, solutions, and/or conjectures of others Students will use the language of mathematics to express mathematical ideas precisely. G.CM.10 Use correct mathematical language in developing mathematical questions that elicit, extend, or challenge other students’ conjectures G.CM.11 Understand and use appropriate language, representations, and terminology when describing objects, relationships, mathematical solutions, and geometric diagrams G.CM.12 Draw conclusions about mathematical ideas through decoding, comprehension, and interpretation of mathematical visuals, symbols, and technical writing Students will recognize and use connections among mathematical ideas. G.CN.1 Understand and make connections among multiple representations of the same mathematical idea G.CN.2 Understand the corresponding procedures for similar problems or mathematical concepts Students will understand how mathematical ideas interconnect and build on one another to produce a coherent whole. G.CN.3 Model situations mathematically, using representations to draw conclusions and formulate new situations G.CN.4 Understand how concepts, procedures, and mathematical results in one area of mathematics can be used to solve problems in other areas of mathematics G.CN.5 Understand how quantitative models connect to various physical models and representations Students will recognize and apply mathematics in contexts outside of mathematics. G.CN.6 Recognize and apply mathematics to situations in the outside world G.CN.7 Recognize and apply mathematical ideas to problem situations that develop outside of mathematics G.CN.8 Develop an appreciation for the historical development of mathematics Students will create and use representations to organize, record, and communicate mathematical ideas. G.R.1 Use physical objects, diagrams, charts, tables, graphs, symbols, equations, or objects created using technology as representations of mathematical concepts G.R.2 Recognize, compare, and use an array of representational forms G.R.3 Use representation as a tool for exploring and understanding mathematical ideas Students will select, apply, and translate among mathematical representations to solve problems. G.R.4 Select appropriate representations to solve problem situations G.R.5 Investigate relationships between different representations and their impact on a given problem Students will use representations to model and interpret physical, social, and mathematical phenomena. G.R.6 Use mathematics to show and understand physical phenomena (e.g., determine the number of gallons of water in a fish tank) G.R.7 Use mathematics to show and understand social phenomena (e.g., determine if conclusions from another person’s argument have a logical foundation) G.R.8 Use mathematics to show and understand mathematical phenomena (e.g., use investigation, discovery, conjecture, reasoning, arguments, justification and proofs to validate that the two base angles of an isosceles triangle are congruent) Note: The algebraic skills and concepts within the Algebra process and content performance indicators must be maintained and applied as students are asked to investigate, make conjectures, give rationale, and justify or prove geometric concepts. Students will use visualization and spatial reasoning to analyze characteristics and properties of geometric shapes. Geometric Relationships Note: Two-dimensional geometric relationships are addressed in the Informal and Formal Proofs band. G.G.1 Know and apply that if a line is perpendicular to each of two intersecting lines at their point of intersection, then the line is perpendicular to the plane determined by them G.G.2 Know and apply that through a given point there passes one and only one plane perpendicular to a given line G.G.3 Know and apply that through a given point there passes one and only one line perpendicular to a given plane G.G.4 Know and apply that two lines perpendicular to the same plane are coplanar G.G.5 Know and apply that two planes are perpendicular to each other if and only if one plane contains a line perpendicular to the second plane G.G.6 Know and apply that if a line is perpendicular to a plane, then any line perpendicular to the given line at its point of intersection with the given plane is in the given plane G.G.7 Know and apply that if a line is perpendicular to a plane, then every plane containing the line is perpendicular to the given plane G.G.8 Know and apply that if a plane intersects two parallel planes, then the intersection is two parallel lines G.G.9 Know and apply that if two planes are perpendicular to the same line, they are parallel G.G.10 Know and apply that the lateral edges of a prism are congruent and parallel G.G.11 Know and apply that two prisms have equal volumes if their bases have equal areas and their altitudes are equal G.G.12 Know and apply that the volume of a prism is the product of the area of the base and the altitude G.G.13 ○ lateral edges are congruent ○ lateral faces are congruent isosceles triangles ○ volume of a pyramid equals one-third the product of the area of the base and the altitude G.G.14 ○ ○ volume equals the product of the area of the base and the altitude ○ lateral area of a right circular cylinder equals the ○ product of an altitude and the circumference of the base G.G.15 ○ lateral area equals one-half the product of the slant height and the circumference of its base ○ volume is one-third the product of the area of its base and its altitude G.G.16 Apply the properties of a sphere, including: ○ the intersection of a plane and a sphere is a circle ○ a great circle is the largest circle that can be drawn on a sphere ○ two planes equidistant from the center of the sphere and intersecting the sphere do so in congruent circles ○ surface area is volume is G.G.17 Construct a bisector of a given angle, using a straightedge and compass, and justify the construction G.G.18 Construct the perpendicular bisector of a given segment, using a straightedge and compass, and justify the construction G.G.19 Construct lines parallel (or perpendicular) to a given line through a given point, using a straightedge and compass, and justify the construction G.G.20 Construct an equilateral triangle, using a straightedge and compass, and justify the construction G.G.21 Investigate and apply the concurrence of medians, altitudes, angle bisectors, and perpendicular bisectors of triangles G.G.22 Solve problems using compound loci G.G.23 Graph and solve compound loci in the coordinate plane Students will identify and justify geometric relationships formally and informally. Informal and Formal Proofs G.G.24 Determine the negation of a statement and establish its truth value G.G.25 Know and apply the conditions under which a compound statement (conjunction, disjunction, conditional, biconditional) is true G.G.26 Identify and write the inverse, converse, and contrapositive of a given conditional statement and note the logical equivalences G.G.27 Write a proof arguing from a given hypothesis to a given conclusion G.G.28 Determine the congruence of two triangles by using one of the five congruence techniques (SSS, SAS, ASA, AAS, HL), given sufficient information about the sides and/or angles of two congruent triangles G.G.29 Identify corresponding parts of congruent triangles G.G.30 Investigate, justify, and apply theorems about the sum of the measures of the angles of a triangle G.G.31 Investigate, justify, and apply the isosceles triangle theorem and its converse G.G.32 Investigate, justify, and apply theorems about geometric inequalities, using the exterior angle theorem G.G.33 Investigate, justify, and apply the triangle inequality theorem G.G.34 Determine either the longest side of a triangle given the three angle measures or the largest angle given the lengths of three sides of a triangle G.G.35 Determine if two lines cut by a transversal are parallel, based on the measure of given pairs of angles formed by the transversal and the lines G.G.36 Investigate, justify, and apply theorems about the sum of the measures of the interior and exterior angles of polygons G.G.37 Investigate, justify, and apply theorems about each interior and exterior angle measure of regular polygons G.G.38 Investigate, justify, and apply theorems about parallelograms involving their angles, sides, and diagonals G.G.39 Investigate, justify, and apply theorems about special parallelograms (rectangles, rhombuses, squares) involving their angles, sides, and diagonals G.G.40 Investigate, justify, and apply theorems about trapezoids (including isosceles trapezoids) involving their angles, sides, medians, and diagonals G.G.41 Justify that some quadrilaterals are parallelograms, rhombuses, rectangles, squares, or trapezoids G.G.42 Investigate, justify, and apply theorems about geometric relationships, based on the properties of the line segment joining the midpoints of two sides of the triangle G.G.43 Investigate, justify, and apply theorems about the centroid of a triangle, dividing each median into segments whose lengths are in the ratio 2:1 G.G.44 Establish similarity of triangles, using the following theorems: AA, SAS, and SSS G.G.45 Investigate, justify, and apply theorems about similar triangles G.G.46 Investigate, justify, and apply theorems about proportional relationships among the segments of the sides of the triangle, given one or more lines parallel to one side of a triangle and intersecting the other two sides of the triangle G.G.47 Investigate, justify, and apply theorems about mean proportionality: ○ the altitude to the hypotenuse of a right triangle is the mean proportional between the two segments along the hypotenuse ○ the altitude to the hypotenuse of a right triangle divides the hypotenuse so that either leg of the right triangle is the mean proportional between the hypotenuse and segment of the hypotenuse adjacent to that leg G.G.48 Investigate, justify, and apply the Pythagorean theorem and its converse G.G.49 Investigate, justify, and apply theorems regarding chords of a circle: ○ perpendicular bisectors of chords ○ the relative lengths of chords as compared to their distance from the center of the circle G.G.50 Investigate, justify, and apply theorems about tangent lines to a circle: ○ a perpendicular to the tangent at the point of tangency ○ two tangents to a circle from the same external point ○ common tangents of two non-intersecting or tangent circles G.G.51 Investigate, justify, and apply theorems about the arcs determined by the rays of angles formed by two lines intersecting a circle when the vertex is: ○ inside the circle (two chords) ○ on the circle (tangent and chord) ○ outside the circle (two tangents, two secants, or tangent and secant) G.G.52 Investigate, justify, and apply theorems about arcs of a circle cut by two parallel lines G.G.53 Investigate, justify, and apply theorems regarding segments intersected by a circle: ○ along two tangents from the same external point ○ along two secants from the same external point ○ along a tangent and a secant from the same external point ○ along two intersecting chords of a given circle Students will apply transformations and symmetry to analyze problem solving situations. Transformational Geometry G.G.54 Define, investigate, justify, and apply isometries in the plane (rotations, reflections, translations, glide reflections) Note: Use proper function notation. G.G.55 Investigate, justify, and apply the properties that remain invariant under translations, rotations, reflections, and glide reflections G.G.56 Identify specific isometries by observing orientation, numbers of invariant points, and/or parallelism G.G.57 Justify geometric relationships (perpendicularity, parallelism, congruence) using transformational techniques (translations, rotations, reflections) G.G.58 Define, investigate, justify, and apply similarities (dilations and the composition of dilations and isometries) G.G.59 Investigate, justify, and apply the properties that remain invariant under similarities G.G.60 Identify specific similarities by observing orientation, numbers of invariant points, and/or parallelism G.G.61 Investigate, justify, and apply the analytical representations for translations, rotations about the origin of 90º and 180º, reflections over the lines Students will apply coordinate geometry to analyze problem solving situations. G.G.63 Determine whether two lines are parallel, perpendicular, or neither, given their equations G.G.64 Find the equation of a line, given a point on the line and the equation of a line perpendicular to the given line G.G.65 Find the equation of a line, given a point on the line and the equation of a line parallel to the desired line G.G.66 Find the midpoint of a line segment, given its endpoints G.G.67 Find the length of a line segment, given its endpoints G.G.68 Find the equation of a line that is the perpendicular bisector of a line segment, given the endpoints of the line segment G.G.69 Investigate, justify, and apply the properties of triangles and quadrilaterals in the coordinate plane, using the distance, midpoint, and slope formulas G.G.70 Solve systems of equations involving one linear equation and one quadratic equation graphically G.G.71 Write the equation of a circle, given its center and radius or given the endpoints of a diameter G.G.72 Write the equation of a circle, given its center and radius or given the endpoints of a diameter Note: The center is an ordered pair of integers and the radius is an integer. G.G.73 Find the center and radius of a circle, given the equation of the circle in center-radius form G.G.74 Graph circles of the form (x − h)^2 + (y − k)^2 = r^2 Jump to: │Table of Contents│Prekindergarten │Kindergarten│Grade 1│ Grade 2 │ │ Grade 3 │ Grade 4 │ Grade 5 │Grade 6│ Grade 7 │ │ Grade 8 │ Algebra │Algebra 2 and Trigonometry │
{"url":"http://www.p12.nysed.gov/ciai/mst/math/standards/geometry.html","timestamp":"2014-04-16T04:20:25Z","content_type":null,"content_length":"89664","record_id":"<urn:uuid:28597812-977e-4bc3-ab0c-82fb749f4eec>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00460-ip-10-147-4-33.ec2.internal.warc.gz"}
Problems that Involve Mixing 2 Beakers of Acid You will be given a problem involving the mixing of two beakers of acid. After typing in a step, hit the "Enter" key. For the last column in the table and in the equation, do not multiply the expression out unless you are multiplying by 0 or 1. For example, write expressions as .09(7-x), but if the expression is 1(7-x) or 0(7-x) write it as 7-x or 0 respectively. This activity will ask you to complete 12 mixing problems. It should get easier as you become more experienced. In order to succeed with this activity, it is expected that you already know how to solve linear equations. If you need help, click on the "Hint" button. Good luck! Back to Larry Green's Java Applets Back to the Lake Tahoe Community College Math Department Home Page
{"url":"http://www.ltcconline.net/greenl/java/BasicAlgebra/mixing/Mixing.html","timestamp":"2014-04-20T05:45:18Z","content_type":null,"content_length":"16341","record_id":"<urn:uuid:8691d3a1-a6fc-4377-94d1-0b592d8d924f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from April 28, 2007 on The Unapologetic Mathematician There is an interesting preorder we can put on the nonzero elements of any commutative ring with unit. If $r$ and $s$ are nonzero elements of a ring $R$, we say that $r$ divides $s$ — and write $r|s$ — if there is an $x\in R$ so that $rx=s$. The identity $1$ trivially divides every other nonzero element of $R$. We can easily check that this defines a preorder. Any element divides itself, since $r1=r$. Further, if $r|s$ and $s|t$ then there exist $x$ and $y$ so that $rx=s$ and $sy=t$, so $r(xy)=t$ and we have $r|t$. On the other hand, this preorder is almost never a partial order. In fact since $r(-1)=-r$ and $-r(-1)=r$ we see that $r|-r$ and $-r|r$, and most of the time $req-r$. In general, when both $r|s$ and $r|s$ we say that $r$ and $s$ are associates. Any unit $u$ comes with an inverse $u^{-1}$, so we have $u|1$ and $1|u$. If $r=su$ for some unit $u$, then $r$ and $s$ are associates because $s=ru^{-1}$ We can pull a partial order out of this preorder with a little trick that works for any preorder. Given a preorder $(P,\preceq)$ we write $a\sim b$ if both $a\preceq b$ and $b\preceq a$. Then we can check that $\sim$ defines an equivalence relation on $P$, so we can form the set $P/\sim$ of its equivalence classes. Then $\preceq$ descends to an honest partial order on $P/\sim$. One place that divisibility shows up a lot is in the ring of integers. Clearly $n$ and $-n$ are associate. If $m$ and $n$ are positive integers with $m|n$, then there is another positive integer $x$ so that $mx=n$. If $x=1$ then $m=n$. Otherwise $m\lneq n$. Thus the only way two positive integers can be associate is if they are the same. The preorder of divisibility on $\mathbb{Z}^\times$ induces a partial order of divisibility on $\mathbb{N}^+$. • Recent Posts • Blogroll • Art • Astronomy • Computer Science • Education • Mathematics • Me • Philosophy • Physics • Politics • Science • RSS Feeds • Feedback Got something to say? Anonymous questions, comments, and suggestions at • Subjects • Archives
{"url":"http://unapologetic.wordpress.com/2007/04/28/","timestamp":"2014-04-17T15:35:19Z","content_type":null,"content_length":"45137","record_id":"<urn:uuid:7c44fa43-bbec-4f60-a26c-b50bf36d1a4c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
This Is Brad DeLong's Grasping Reality... Ranjan Bhaduri writes: allaboutalpha.com: Welcome to AllAboutAlpha.com: By: Ranjan Bhaduri, special to AllAboutAlpha.com: The word “liquidity” gets bandied about quite a lot, but it is surprising how many portfolio managers take a naïve approach to liquidity. It is well known that one should be compensated for investing in less liquid instruments (liquidity premium), but how much? What is the value of liquidity? It is dangerous in merely trust one’s intuition on the value of liquidity. Consider the following one-person game: The “Balls in the Hat Game” The game consists of a hat that contains 6 black balls and 4 white balls. The player picks balls from the hat and gains $1 for each white ball, and loses $1 for each black ball. The selection is done without replacement. At the end of each pick, the player may choose to stop or continue. The player has the right to refuse to play (i.e. not pick any balls at all). Given these rules, and a hat containing 6 black balls and 4 white balls, would you play? (Why?) Mathematically one can prove that there is a POSITIVE expected value (of 1/15) in playing this game, so one SHOULD play! The ability to stop any time is analogous to perfect liquidity (i.e. being able to pull out of an investment at any time without the action having an impact on the value of the investment). This value of liquidity helps overcome the imbalance between the black and white balls, and thus makes this game profitable. This is interesting from a behavioral finance point of view, since it seems to suggest that humans are wired such that they will tend to underestimate the value of liquidity. The mathematics behind calculating the value of liquidity can be complex as there can be subtle nuances. Niall Whelan of Scotia Capital and I wrote a pair of papers coming out which tackles the above game in the asymptotic case (i.e. hats of infinite size) and connect the value of liquidity to option pricing. Niall is one of the best quants north of the South Pole and much of these papers were hammered out in an all-night bus ride that we were forced to take from NY to Toronto (our flight from La Guardia got cancelled but we both needed to be back in Toronto in the morning for important meetings)... I suspect that this is wrong. What is going on here, I think, is not so much liquidity as mean reversion: if you sample with replacement the effect goes away. The reason that it makes sense to play even though one might at first glance think the odds are unfavorable is that if you lose in the early stages the chances of winning in the later stages go up--that the ability to keep playing provides a degree of insurance in the cases in which things break badly. (Of course, "liquidity" does play a role: if you could never stop playing that would be offset by the fact that if you won in the early stages the odds would then move away from you.) In order to see what is going on here, let us write down the value function: V(m,n) is the expected value of playing the game (and dropping out at the optimal point) when there are m white balls and n black balls in the hat. To begin with: V(1,0) = 1; if there is one white ball left in the hat, the value of the game is 1--you play, and collect the white ball. V(0,1) = 0; if there is one black ball left in the hat, the value of the game is 0--you don't play at all. What is the value of V(1,1)? If you do play, then half the time you will draw a black ball--and be down 1--but then you will be playing the game V(1,0), which is worth one, so if you draw a black ball next, you are even. And if you do play, then half the time you will draw a white ball--and be up one--and then you will be playing the game V(0,1), and so you stop and that is worth zero. The value of V(1,1) is therefore: V(1,1) = (1/2)(-1+V(1,0)) + (1/2)(1 + V(0,1)) = (1/2)0 + 1/2(1) = 1/2 This is interesting: you might initially think that this is a fair game--there is, after all, one white and one black ball, so you have a 50-50 chance of being up after the first draw. But it is rigged in your favor. More generally: V(m,n) = max[0, (m/(m+n))(1+V(m-1,n)) + (n/(m+n))*(-1+V(m,n-1))]
{"url":"http://delong.typepad.com/sdj/2007/11/the-balls-in-th.html","timestamp":"2014-04-21T14:43:27Z","content_type":null,"content_length":"44561","record_id":"<urn:uuid:522414fd-f54c-46d6-a5d4-498446f2d095>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
MATLAB Golf: Frequencies - Rules The objective in the game of golf is to reach the hole with the fewest strokes. In MATLAB golf, the objective is to go from one variable (the tee) to another (the hole) with the fewest keystrokes. Example: How many positive elements in vector a? One approach (score = 19): A better approach (score = 10): The winner is the contestant who solves the problem with the fewest characters. An entry's score is determined by counting the total number of characters. The shortest entry that passes the test suite wins. If two entries use the same number of characters, the first one submitted is the winner. There are three important exceptions to the character-counting rule. You are not penalized for using spaces, newlines, or semicolons (this corresponds to ASCII 10, 13, 32, and 59). Because of this, we encourage you to make liberal use of them in order to keep your code as readable as possible. Notice that the following pieces of code both do the same thing and have the same score. Please use the second approach. b=ones(a);b(:)=4;if numel(b)>10 c=10;else c=0;end b = ones(a); b(:) = 4; if numel(b) > 10 c = 10; c = 0; Ranking is "king of the hill" style. In order to move into first place, your entry must be shorter than the current leader. If it is, your entry takes over first place. The entry that was bumped out of first place moves into second place, the entry that was in second place moves into third, and so on. Even if your entry is longer than the current leader, we encourage you to submit it if your approach is novel. The diversity adds to the fun, and it may provide inspiration to someone else who can then make further improvements. In addition, we reserve the right to give out extra prizes for originality when the contest has closed, so your quirky but slightly longer entry may bring you fame. Warning! MATLAB Golf can lead to obfuscated code and may cause headaches and dizziness. Conscientious coders may want to avoid staring at contest code for prolonged periods of time. The authors of this contest make no claims about the merits of potentially dangerous coding tricks revealed herein. The test suites represent the final word: if it passes the test suite, then it's a legal entry, even if a more comprehensive test suite might plausibly have failed the same entry. Keep in mind that MATLAB Golf is a game. Execution Time There is no penalty for execution time, but an entry can't take more than a couple minutes to run through our test suite. This shouldn't be an issue for the complexity of the problems in this Collaboration and editing existing entries Once an entry has been submitted, it cannot be changed. However, any entry can be viewed, edited, and resubmitted as a new entry. You are free to view and modify any entries in the queue. The contest server maintains a history for each modified entry. If your modification of an existing entry improves its score, then you are the "author" for the purpose of determining the winners of this contest. We encourage you to examine and optimize existing entries. We also encourage you to discuss your solutions and strategies with others. You can do this by posting to the comp.soft-sys.matlab thread that we've started from our newsreader (see the link for "Message board" on the right). Fine Print The allowable functions are those contained in the basic MATLAB package available in $MATLAB/toolbox/matlab, where $MATLAB is the root MATLAB directory. Functions from other toolboxes will not be available. Entries will be tested against MATLAB version 6.5 (R13). The following are prohibited: Java commands or object creation eval, feval, etc. Shell escape such as !, dos, unix Handle Graphics commands ActiveX commands File I/O commands Debugging commands Printing commands Simulink commands Benchmark commands such as tic, toc, and flops Challenge: Frequencies Given a vector a and a vector b of non-negative integers, create a row vector c that contains b(1) copies of a(1), b(2) copies of a(2), and so on. As an additional condition, no two identical values can appear next to each other in the answer c. So when a = [ 1 2 3 ] b = [ 2 2 1 ] one answer is c = [ 1 2 3 1 2 ] About named visibility periods Contests are divided into segments where some or all of the scores and code may be hidden for some users. Here are the segments for this contest: • Darkness - You can't see the code or scores for any of the entries. • Twilight - You can see scores but no code. • Daylight - You can see scores and code for all entries. • Finish - Contest end time.
{"url":"http://www.mathworks.nl/matlabcentral/contest/contests/22/rules","timestamp":"2014-04-17T18:48:02Z","content_type":null,"content_length":"24266","record_id":"<urn:uuid:8845b742-887c-4afd-9600-1bbeefb46e30>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Roseville, MI Precalculus Tutor Find a Roseville, MI Precalculus Tutor ...I graduated from the University of Michigan, earning a BSE in Nuclear Engineering and a minor in Mathematics. While at Michigan, I worked in the Department of Mathematics tutoring undergraduate math courses (precalculus, calculus, differential equations). I then went on to work at Knolls Atomic ... 12 Subjects: including precalculus, calculus, physics, algebra 1 ...I grew up in the northern part of Belgium where Dutch/Flemish is the official language. I went to school at a Dutch speaking high school and a Dutch speaking University where I obtained a Master's degree in Civil and Structural Engineering. I have a great command of, and am fully fluent in, Dutch and English. 10 Subjects: including precalculus, calculus, physics, ASVAB ...I am a professional tutor and work with students of all ages. Math is certainly my favorite subject and I know how to make it fun and simple. I tutor students in high school and middle school 39 Subjects: including precalculus, English, reading, biology ...My goal is to take the dryness out of the subject and make the lessons livelier by making it more relative to daily life activities, thereby making it more fun to learn. I had gotten a college scholarship to go to college for playing soccer. I have played in high school, in a college tournament... 28 Subjects: including precalculus, calculus, geometry, ESL/ESOL ...I think that trig is an interesting subject and would love to help anyone who wants to gain a better understanding of the material. I took AP Statistics in High School and passed the AP Exam. I took a 300-level Engineering Probability and Statistics course in undergrad. 20 Subjects: including precalculus, calculus, geometry, statistics Related Roseville, MI Tutors Roseville, MI Accounting Tutors Roseville, MI ACT Tutors Roseville, MI Algebra Tutors Roseville, MI Algebra 2 Tutors Roseville, MI Calculus Tutors Roseville, MI Geometry Tutors Roseville, MI Math Tutors Roseville, MI Prealgebra Tutors Roseville, MI Precalculus Tutors Roseville, MI SAT Tutors Roseville, MI SAT Math Tutors Roseville, MI Science Tutors Roseville, MI Statistics Tutors Roseville, MI Trigonometry Tutors
{"url":"http://www.purplemath.com/roseville_mi_precalculus_tutors.php","timestamp":"2014-04-16T19:47:02Z","content_type":null,"content_length":"24435","record_id":"<urn:uuid:fac3ebc5-53f4-430d-abee-b823b7b1ea02>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
Compact vs. ordinary induction for the Heisenberg group up vote 1 down vote favorite Let $G$ be a Heisenberg group over a local field. To construct an infinite-dimensional irreducible representation of $G$, we fix a central character $\psi$ and extend $\psi$ to a character $\ widetilde{\psi}$ of a maximal abelian subgroup $U$, then form the induced representation $\text{Ind}_U^G \ \widetilde{\psi}$. It is well-known that, in this case, induction with compact supports coincides with ordinary induction, but the proofs that I have seen (involving some calculation with double cosets) are unsatisfying in that they make this situation seem like a coincidence. This is especially interesting to me when compared to parabolic induction for reductive groups, where we construct representations by extending a representation of a Levi subgroup to a parabolic subgroup and then take the the induced representation. In that case, the compactly supported induction coincides with the ordinary one because parabolic subgroups are cocompact, but maximal abelian subgroups of the Heisenberg group are not cocompact and yet the analogous fact holds true. Perhaps this analogy is too tenuous, but I would hope that there is still a conceptual explanation. Note that some maximal abelian subgroups of the Heisenberg group (with compact center) are cocompact and used to give an alternative realization of the Schrödinger representation: ams.org/ mathscinet-getitem?mr=0216825 – Francois Ziegler Oct 29 '12 at 5:48 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged rt.representation-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/110963/compact-vs-ordinary-induction-for-the-heisenberg-group","timestamp":"2014-04-20T14:11:15Z","content_type":null,"content_length":"47372","record_id":"<urn:uuid:f081b322-f57a-46e4-86c4-cd2732929503>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: IEEE 754 vs Fortran arithmetic Tim Peters <tim@ksr.com> Sat, 27 Oct 90 00:54:50 -0400 From comp.compilers | List of all articles for this month | Newsgroups: comp.compilers From: Tim Peters <tim@ksr.com> Keywords: Fortran, arithmetic Organization: Compilers Central Date: Sat, 27 Oct 90 00:54:50 -0400 In article <144188@sun.Eng.Sun.COM> wsb@eng.Sun.COM (Walt Brainerd) >In article <9010242205.AA04208@lunch.ksr.com>, tim@ksr.com (Tim Peters) >> [X.EQ.Y means ((X)-(Y)).EQ.0 under F77 rules, and this creates >> problems under 754] > [walt explains that this clause applies only when X and Y are > different types] Yes, agreed -- thanks for clarifying. The example should have had, The conclusions are unchanged. >This text is there to explain how to do type conversion when >X and Y are different types. I agree that was the intent. Unfortunately, that's not what the F77 text says. A possible correction: F90 may have fixed this problem after all. I noticed today in the S8.115 draft of F90 that the goofy phrasing of the F77 standard is gone, replaced by words that clearly get at the type conversion intent without F77's harmful overspecification. Hope the new text survives. >> [tim whining about various 754 vendors evaluating entire expressions >> in an extended precision before cutting back to "storage precision"] >Since there is nothing in the F77 standard that indicates how much >precision should be used for anything, such an evaluation cannot >violate F77. (IMHO) ... We disagree, but launching into an argument about what F77 does & doesn't say would take us out of the "754 vs Fortran" topic, so I'll just sketch the opposing view below. More important to the topic at hand is that the extended precision gimmicks are clearly against both the letter and the spirit of the 754/854 stds, under any implementation that's claiming to map Fortran's "+" "-" "*" "/" REAL and DOUBLE operator symbols into the addition (etc) operations defined by 754/854. The results of the 754 operations are defined down to the last bit, and any implementation doing the extended-precision stuff cannot meet those definitions (e.g., because of double rounding errors). conforming processor from, e.g., evaluating "X+Y" in REAL X, Y, Z Z = X+Y in an extended precision (or, for that matter, in a reduced precision, or even unconditionally evaluate all sums to -6.5!), I do not agree that the std allows that freedom across the entire RHS in Z = X + (Y + Z) or even in Z = X + Y + Z Section 6.1.4 clearly defines the type of all *sub*expressions in the above to be REAL, and to say that the "mathematically equivalent" freedoms allow one to carry *more* than REAL precision out of the REAL subexpressions into their contexts is to squeeze all meaning out of "mathematically equivalent", Fortran's notion of type, or both. Since F77 is not in any sense a formal or rigorous std, this is a matter of interpretation (but so is everything else <grin>). I don't feel it's *reasonable* to take an interpretation that reduces parts of the std to > [walt patiently <grin> explaining that Fortran cannot guarantee > results consistent with what 754 requires] Yes, of course. But Fortran can & should change to allow 754 vendors to meet both stds simultaneously without logic-chopping and brutal contortions. The change in S8.115 to the definition of relationals is a good example of what can be done to further this end. It is unfortunate that F90 didn't consciously address 754/854 issues. It is also unfortunate-- & more so! --that 754/854 did not consciously address language binding issues from the start. As other people here are pointing out already, the *real* pain isn't the (very) few areas in which 754 and Fortran flat-out contradict each other, it's the many areas in which the 754 features are a poor fit with Fortran (and C ...). What to do with a NaN in an arithmetic IF is a darned good question. IF (X.NE.X) PRINT *, 'Found a NaN!' out of existence because X.NE.X is relationally equivalent to .FALSE. and the 754 user is in for more exceedingly nasty surprises. Have some vendors using their extended-precision gimmicks and others not and the fact that 754 defines a "portable" arithmetic is reduced to an academic footnote. Etc. Etc. it-is-indeed-a-mess-ly y'rs - tim Tim Peters Kendall Square Research Corp tim@ksr.com, ksr!tim@harvard.harvard.edu Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
{"url":"http://compilers.iecc.com/comparch/article/90-10-123","timestamp":"2014-04-18T15:39:27Z","content_type":null,"content_length":"9922","record_id":"<urn:uuid:f7fc8aa8-ed53-44b8-9019-a7fa9e910f90>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Class XI, PHYSICS, "Motion" If an object continuously changes its position with respect to its surrounding, then it is said to be in state of motion. Rectilinear Motion The motion along a straight line is called rectilinear motion. Velocity may be defined as the change of displacement of a body with respect time. Velocity = change of displacement / time Velocity is a vector quantity and its unit in S.I system is meter per second (m/sec). Average Velocity Average velocity of a body is defined as the ratio of the displacement in a certain direction to the time taken for this displacement. Suppose a body is moving along the path AC as shown in figure. At time t1, suppose the body is at P and its position w.r.t origin O is given by vector r2. Diagram Coming Soon Thus, displacement of the body = r2 - r1 = Δr Time taken for this displacement - t2 - t1 = Δt Therefore, average velocity of the body is given by Vav = Δr / Δt Instantaneous Velocity It is defined as the velocity of a body at a certain instant. V(ins) = 1im Δr / Δt Where Δt → 0 is read as "Δt tends to zero", which means that the time is very small. Velocity From Distance - Time Graph We can determine the velocity of a body by distance - time graph such that the time is taken on x-axis and distance on y-axis. Acceleration of a body may be defined as the time rate of change velocity. If the velocity of a body is changing then it is said to posses acceleration. Acceleration = change of velocity / time If the velocity of a body is increasing, then its acceleration will be positive and if the velocity of a body is decreasing, then its acceleration will be negative. Negative acceleration is also called retardation. Acceleration is a vector quantity and its unit in S.I system is meter per second per second. (m/sec2 OR m.sec-2) Average Acceleration Average acceleration is defined as the ratio of the change in velocity of a body and the time interval during which the velocity has changed. Suppose that at any time t1 a body is at A having velocity V1. At a later time t2, it is at point B having velocity V2. Thus, Change in Velocity = V2 - V1 = Δ V Time during which velocity has changed = t2 - t1 = Δ t Instantaneous Acceleration It is defined as the acceleration of a body at a certain instant a(ins) = lim Δ V / Δ t where Δt → 0 is read as "Δt tends to zero", which means that the time is very small. Acceleration from Velocity - Time Graph We can determine the acceleration of a body by velocity - time graph such that the time is taken on x-axis and velocity on y-axis. Equations of Uniformly Accelerated Rectilinear Motion There are three basic equations of motion. The equations give relations between Vi = the initial velocity of the body moving along a straight line. Vf = the final velocity of the body after a certain time. t = the time taken for the change of velocity a = uniform acceleration in the direction of initial velocity. S = distance covered by the body. Equations are 1. Vf = Vi + a t 2. S = V i t + 1/2 a t2 3. 2 a S = V f2 - V i 2 Motion Under Gravity The force of attraction exerted by the earth on a body is called gravity or pull of earth. The acceleration due to gravity is produced in a freely falling body by the force of gravity. Equations for motion under gravity are 1. Vf = Vi + g t 2. S = V i t + 1/2 g t2 3. 2 g S = Vr2 - Vi2 where g = 9.8 m / s2 in S.I system and is called acceleration due to gravity. Law of Motion Isaac Newton studied motion of bodies and formulated three famous laws of motion in his famous book "Mathematical Principles of Natural Philosophy" in 1687. These laws are called Newton's Laws of Newton's First Law of Motion A body in state of rest will remain at rest and a body in state of motion continues to move with uniform velocity unless acted upon by an unbalanced force. This law consists of two parts. According to first part a body at rest will remain at rest will remain at rest unless some external unbalanced force acts on it. It is obvious from our daily life experience. We observe that a book lying on a table will remain there unless somebody moves it by applying certain force. According to the second part of this law a body in state of uniform motion continuous to do so unless it is acted upon by some unbalanced force. This part of the law seems to be false from our daily life experience. We observe that when a ball is rolled in a floor, after covering certain distance, it stops. Newton gave reason for this stoppage that force of gravity friction of the floor and air resistance are responsible of this stoppage which are, of course, external forces. If these forces are not present, the bodies, one set into motion, will continue to move for ever. Qualitative Definition of Net Force The first law of motion gives the qualitative definition of the net force. (Force is an agent which changes or tends to change the state of rest or of uniform motion of a body). First Law as Law of Inertia Newton's first law of motion is also called the Law of inertia. Inertia is the property of matter by virtue of which is preserves its state of rest or of uniform motion. Inertia of a body directly related to its mass. Newton's Second Law of Motion If a certain unbalanced force acts upon a body, it produces acceleration in its own direction. The magnitude of acceleration is directly proportional to the magnitude of the force and inversely proportional to the mass of the body. Mathematical Form According to this law f ∞ a F = m a → Equation of second law Where 'F' is the unbalanced force acting on the body of mass 'm' and produces an acceleration 'a' in it. From equation 1 N = 1 kg x 1 m/sec2 Hence one newton is that unbalanced force which produces an acceleration of 1 m/sec2 in a body of mass 1 kg. Vector Form Equation of Newton's second law can be written in vector form as F = m a Where F is the vector sum of all the forces acting on the body. Newton's Third Law of Motion To every action there is always an equal and opposite reaction. For example, if a body A exerts force on body B (F(A) on B) in the opposite direction. This force is called reaction. Then according to third law of motion. 1. When a gun is fired, the bullet flies out in forward direction. As a reaction of this action, the gun reacts in backward direction. 2. A boatman, when he wants to put his boat in water pushes the bank with his oar, The reaction of the bank pushes the boat in forward direction. 3. While walking on the ground, as an action, we push the ground in the backward direction. As a reaction ground pushes us in the forward direction. 4. In flying a kite, the string is given a downward jerk and is then released. Thereupon the reaction of the air pushes the kite upward and makes it rise higher. Tension in a String Consider a body of weight W supported by a person with the help of a string. A force is experienced by the hand as well as by the body. This force is known as Tension. At B the hand experiences a downward force. So the direction of force at point B is downward. But at point A direction of the force is upward. These forces at point A and B are tensions. Its magnitude in both cases is same but the direction is opposite. At point A, Tension = T = W = mg Momentum of a Body The momentum of a body is the quantity of motion in it. It depends on two things 1. The mass of the object moving (m), 2. The velocity with which it is moving (V). Momentum is the product of mass and velocity. It is denoted by P. P = m V Momentum is a vector quantity an its direction is the same as that of the velocity. Unit of Momentum Momentum = mass x velocity = kg x m/s = kg x m/s x s/s = kg x m/s2 x s since kg. m/s2 is newton (N) momentum = N-s Hence the S.I unit of momentum is N-s. Unbalanced or Net Force is equal to the Rate of Change of Momentum i.e., F = (mVf = mVi) / t Consider a body of mass 'm' moving with a velocity Vl. A net force F acts on it for a time 't'. Its velocity then becomes Vf. Initial momentum of the body = m Vi Final momentum of the body = m Vf Time interval = t Unbalanced force = F Rate of change of momentum = (m Vf - m Vi) / t ....................... (1) (Vf - Vi) / t = a Rate of change of momentum = m a = F ..................... (2) Substituting the value of rate of change of momentum from equation (2) in equation (1), we get F = (m Vf - m Vi) / t ............................. Proved Law of Conservation of Momentum Isolated System When a number of bodies are such that they exert force upon one another and no external agency exerts a force on them, then they are said to constitute and isolated system. Statement of the Law The total momentum of an isolated system of bodies remains constant. If there is no external force applied to a system, then the total momentum of that system remains constant. Elastic Collision An elastic collision is that in which the momentum of the system as well as the kinetic energy of the system before and after collision, remains constant. Thus for an elastic collision. If P momentum and K.E is kinetic energy. P(before collision) = P(after collision) K.E(before collision) = K.E(after collision) Inelastic Collision An inelastic collision is that in which the momentum of the system before and after the collision remains constant but the kinetic energy before and after the collision changes. Thus for an inelastic collision P(before collision) = P(after collision) Elastic Collision in one Dimension Consider two smooth non rotating spheres moving along the line joining their centres with velocities U1 and U2. U1 is greater than U2, therefore the spheres of mass m1 makes elastic collision with the sphere of mass m2. After collision, suppose their velocities become V1 and V2 but their direction of motion is along same line as before. When two bodies are in contact, one upon the other and a force is applied to the upper body to make it move over the surface of the lower body, an opposing force is set up in the plane of the contract which resists the motion. This force is the force of friction or simply friction. The force of friction always acts parallel to the surface of contact and opposite to the direction of motion. When one body is at rest in contact with another, the friction is called Static Friction. When one body is just on the point of sliding over the other, the friction is called Limiting Friction. When one body is actually sliding over the other, the friction is called Dynamic Friction. Coefficient of Friction (μ) The ratio of limiting friction 'F' to the normal reaction 'R' acting between two surfaces in contact is called the coefficient of friction (μ). μ = F / R F = μ R Fluid Friction Stoke found that bodies moving through fluids (liquids and gases) experiences a retarding force fluid friction or viscous drag. If the moving bodies are spheres then fluid friction F is given by F = 6 π η r v Where η is the coefficient of viscosity, Where r is the radius of the sphere, Where v is velocity pf the sphere. Terminal Velocity When the fluid friction is equal to the downward force acting on the sphere, the sphere attains a uniform velocity. This velocity is called Terminal velocity. The Inclined Plane A plane which makes certain angle θ with the horizontal is called an inclined plane. Diagram Coming Soon Consider a block of mass 'm' placed on an inclined plane making certain angle θ with the horizontal. The forces acting on the block are 1. W, weight of the block acting vertically downward. 2. R, reaction of the plane acting perpendicular to the plane 3. f, force of friction which opposes the motion of the block which is moving downward. Diagram Coming Soon Now we take x-axis along the plane and y-axis perpendicular to the plane. We resolve W into its rectangular components. Component of W along x-axis = W sin θ Component of W along y-axis = W cos θ 1. If the Block is at Rest According to the first condition of equilibrium Σ Fx = 0 f - W sin θ = 0 f = W sin θ Σ Fy = 0 R - W cos θ = 0 R = W cos θ 2. If the Block Slides Down the Inclined Plane with an Acceleration W sin θ > f Net force = F = W sin θ - f Since F = m a and W = m g m a = m g sin θ - f 3. When force of Friction is Negligible Then f ≈ 0 equation (3) => m a = m g sin ≈ - 0 => m a = m g sin ≈ or a = g sin ≈ ............. (4) Particular Cases Case A : If the Smooth Plane is Horizontal Then 0 = 0º Equation (4) => a = g sin 0º => a = g x 0 => a = 0 Case B : If the Smooth Plane is Vertical Then θ = 90º Equation (4) => a = g sin 90º => a = g x 1 => a = g This is the case of a freely falling body. 15 comments: 1. good 2. marvolous.. keep it up 3. plzz give some nuemericals......... 4. plz write in asan 5. plz write in asan 6. good definitions 7. very good notes`please give numerical of this chapter. 8. TO GOOD 9. Yοu can аlso Help oneself Cinderella enclothe fοг the regal paгadе and ԁelves іnto knightly Illusion anԁ mоre Weѕtern themes, but hostѕ serѵerѕ to gаmers all over the reality. Thither аre too seѵeral ωeb broωser gamеs the aԁѵantageѕ of асting onlinе games? It is a very coloured game wіth funky Ϲare playing MΜOG in reсent multiplicatiοn. Teκkеn 6 is tοo equipped by the many new characters why today you get to witness numerous sеnior high-definіtion, lifе-lіke gamеs out On that point. Ӏt is onе of thοse tуреs of games thаt is you сan print to colouгing аnԁ make. 10. аt one time you Demonstrаtе them the reality Alsο visіt my webpage :: game 11. Тhеre arе dіsusе that уou're choosing a new On-line game not for your ego All the same for the toddler. Miniclip Miniclip claims 43 trillion singular users a calendar month, and are 11 selection 12. good,keep it up.... 13. For example, if you're looking to study Business English, make sure the school you choose has a good reputation for business studies and isn't just a general English language school. Write this on the board every day for the first two weeks: ' I am ' You are ' He is ' She is ' It is ' We are ' They are. Which places and which teachers offer you the best value for money, and which are the ones to stay away from. Stop by my web blog: cach hoc tieng anh 14. plzzz give numerical of this chapter....... 15. plzzzzzzzzzz write notes of furthur chapters
{"url":"http://all-notes.blogspot.com/2009/07/xi-physics-chapter-3-motion.html","timestamp":"2014-04-16T07:13:10Z","content_type":null,"content_length":"119030","record_id":"<urn:uuid:839aaed5-cf8e-4f5b-a6c1-ef3c2eab360d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
search results Expand all Collapse all Results 1 - 3 of 3 1. CJM 2008 (vol 60 pp. 572) Non-Selfadjoint Perturbations of Selfadjoint Operators in Two Dimensions IIIa. One Branching Point This is the third in a series of works devoted to spectral asymptotics for non-selfadjoint perturbations of selfadjoint $h$-pseudodifferential operators in dimension 2, having a periodic classical flow. Assuming that the strength $\epsilon$ of the perturbation is in the range $h^2\ll \epsilon \ll h^{1/2}$ (and may sometimes reach even smaller values), we get an asymptotic description of the eigenvalues in rectangles $[-1/C,1/C]+i\epsilon [F_0-1/C,F_0+1/C]$, $C\gg 1$, when $\epsilon F_0$ is a saddle point value of the flow average of the leading perturbation. Keywords:non-selfadjoint, eigenvalue, periodic flow, branching singularity Categories:31C10, 35P20, 35Q40, 37J35, 37J45, 53D22, 58J40 2. CJM 2008 (vol 60 pp. 241) Semi-Classical Wavefront Set and Fourier Integral Operators Here we define and prove some properties of the semi-classical wavefront set. We also define and study semi-classical Fourier integral operators and prove a generalization of Egorov's theorem to manifolds of different dimensions. Keywords:wavefront set, Fourier integral operators, Egorov theorem, semi-classical analysis Categories:35S30, 35A27, 58J40, 81Q20 3. CJM 2005 (vol 57 pp. 771) The Resolvent of Closed Extensions of Cone Differential Operators We study closed extensions $\underline A$ of an elliptic differential operator $A$ on a manifold with conical singularities, acting as an unbounded operator on a weighted $L_p$-space. Under suitable conditions we show that the resolvent $(\lambda-\underline A)^{-1}$ exists in a sector of the complex plane and decays like $1/|\lambda|$ as $|\lambda|\to\infty$. Moreover, we determine the structure of the resolvent with enough precision to guarantee existence and boundedness of imaginary powers of $\underline A$. As an application we treat the Laplace--Beltrami operator for a metric with straight conical degeneracy and describe domains yielding maximal regularity for the Cauchy problem $\dot{u}-\Delta u=f$, $u(0)=0$. Keywords:Manifolds with conical singularities, resolvent, maximal regularity Categories:35J70, 47A10, 58J40
{"url":"http://cms.math.ca/cjm/msc/58J40?fromjnl=cjm&jnl=CJM","timestamp":"2014-04-16T10:21:22Z","content_type":null,"content_length":"29648","record_id":"<urn:uuid:789067ec-1b89-4c78-8ed3-ea2c6c9443cf>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
Teaching myself Trig. I somehow missed Trig in high school and college. Feeling left out, I set out to teach it to myself. I have the basic understanding of trig so far. But I have two questions that urk myself because I am unable to find the answer. I am using khan academy as my main resource. If you have any other great (free) resources, I'd be sincerely interested. Also, if you have any resources that gives out quote unquote 'homework', I'd be interested in that as well. 1. I understand that sin, cos, tan, and etc. is a ratio of side lengths to angles. But who discovered this where? And why? And how? How do you prove it works? (I hate to just assume that it works.) [edit] I hate to assume that soh cah toa works just because some guy on the internet says so. Can you prove to me that sine is equal to opposite over hypotenuse? Same with cosine and tangent? [/edit] 2. I understand the Unit Circle and its creation except for one part. Except for the 0* / 360*, 90*, and 180*, where do the coordinate values come from? Every resource I find has 'tricks' to remember it. No explanation on how to prove the value stated is the true coordinate point.
{"url":"http://mathhelpforum.com/trigonometry/152645-teaching-myself-trig.html","timestamp":"2014-04-17T01:59:10Z","content_type":null,"content_length":"42000","record_id":"<urn:uuid:034d1c87-06b5-4875-bd0b-e757c5cb1ca9>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
lim f/g = lim (f+o(1))/(g+o(1)) April 8th 2010, 01:25 PM #1 Senior Member Feb 2010 lim f/g = lim (f+o(1))/(g+o(1)) Let $\lim := \lim_{x \to a}$. Suppose $\lim f/g$ exists. Under what conditions can we say $\lim {f+o(1) \over g+o(1)}$ exists and is equal to $\lim f/g$? (Little o's are as $x\to a$.) Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/differential-geometry/137987-lim-f-g-lim-f-o-1-g-o-1-a.html","timestamp":"2014-04-17T01:57:22Z","content_type":null,"content_length":"25851","record_id":"<urn:uuid:92abfe51-4d03-428e-9ed8-a40dfe0a9223>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
Volumetric heat capacity From Wikipedia, the free encyclopedia Volumetric heat capacity (VHC) describes the ability of a given volume of a substance to store internal energy while undergoing a given temperature change, but without undergoing a phase change. It is different from specific heat capacity in that the VHC depends on the volume of the material, while the specific heat is based on the mass of the material. If given a specific heat value of a substance, one can convert it to the VHC by multiplying the specific heat by the density of the substance.^[1] Dulong and Petit predicted in 1818 that ρc[p] would be constant for all solids (the Dulong-Petit law). In fact, the quantity varies from about 1.2 to 4.5 MJ/m³K. For liquids it is in the range 1.3 to 1.9, and for gases it is a constant 1.0 kJ/m³K. The volumetric heat capacity is defined as having SI units of J/(m³·K). It can also be described in Imperial units of BTU/(ft³·F°). Thermal inertia Thermal inertia is a term commonly used by scientists and engineers modelling heat transfers and is a bulk material property related to thermal conductivity and volumetric heat capacity. For example, this material has a high thermal inertia, or thermal inertia plays an important role in this system, which means that dynamic effects are prevalent in a model, so that a steady-state calculation will yield inaccurate results. The term is a scientific analogy, and is not directly related to the mass-and-velocity term used in mechanics, where inertia is that which limits the acceleration of an object. In a similar way, thermal inertia is a measure of the thermal mass and the velocity of the thermal wave which controls the surface temperature of a material. In heat transfer, a higher value of the volumetric heat capacity means a longer time for the system to reach equilibrium. The thermal inertia of a material is defined as the square root of the product of the material's bulk thermal conductivity and volumetric heat capacity, where the latter is the product of density and specific heat capacity: $I=\sqrt{k \rho c}$ See also Thermal effusivity SI units of thermal inertia are J m ^− 2 K ^− 1 s ^− 1 / 2 also occasionally referred to as Kieffers^[2], or more rarely, tiu.^[3] For planetary surface materials, thermal inertia is the key property controlling the diurnal and seasonal surface temperature variations and is typically dependent on the physical properties of near-surface geologic materials. In remote sensing applications, thermal inertia represents a complex combination of particle size, rock abundance, bedrock outcropping and the degree of induration. A rough approximation to thermal inertia is sometimes obtained from the amplitude of the diurnal temperature curve (i.e., maximum minus minimum surface temperature). The temperature of a material with low thermal inertia changes significantly during the day, while the temperature of a material with high thermal inertia does not change as drastically. Deriving and understanding the thermal inertia of the surface can help to recognize small-scale features of that surface. In conjunction with other data, thermal inertia can help to characterize surface materials and the geologic processes responsible for forming these materials. Constant volume and constant pressure. For gases it is useful to distinguish between volumetric heat capacity at constant volume and at constant pressure. This distinction has the same meaning as for specific heat capacity. 1. ^ U.S. Army Corps of Engineers Technical Manual: Arctic and Subarctic Construction: Calculation Methods for Determination of Depths of Freeze and Thaw in Soils, TM 5-852-6/AFR 88-19, Volume 6, 1988, Equation 2-1 2. ^ http://scienceworld.wolfram.com/physics/ThermalInertia.html Eric Weisstein's World of Science - Thermal Inertia See also
{"url":"http://www.thefullwiki.org/Volumetric_heat_capacity","timestamp":"2014-04-16T07:15:50Z","content_type":null,"content_length":"29827","record_id":"<urn:uuid:6a851714-a65b-4942-ade4-e91c0c670fb4>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Initial Value Problem April 29th 2009, 09:32 PM #1 Jan 2009 $dy/dt + 0.8ty = 2t$ $y(0) = 9$ This is what I have so far $dy/dt = 2t - 0.8ty$ $dy/(2- 0.8y) = t dt$ $y/2 - 0.8ln|y| = (t^2)/2 + C$ Solve C: $C = (9/2)-0.8ln|9|$ Now I'm having trouble isolating Y to get an answer. Have I taken all the right steps so far, and if so how do I go about isolating Y? Thank You for any help there are 2 things on this one 1. Since this is a linear DE the whole problem can be solved easier using an integrating factor is set up perfectly for this with the integrating factor e^(.4*t^2) However suppose you want to separate variables 2.first you were ok up to To integrate the left use u = 2 - 0.8y you tried to write as dy/2 -dy/(.8y) which is not true Whoa I didn't not even see that it was a integrating factor problem haha thank you very much. April 30th 2009, 12:05 AM #2 April 30th 2009, 01:57 AM #3 Jan 2009
{"url":"http://mathhelpforum.com/calculus/86565-initial-value-problem.html","timestamp":"2014-04-18T09:46:10Z","content_type":null,"content_length":"35482","record_id":"<urn:uuid:c5f474f8-3a81-4df9-8bfa-fe52871b9b58>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Monthly Lease Payment Car Lease Payment Formula How lease payments are calculated - standard method Calculate monthly car lease payments using a simple formula We've already discussed the separate factors that contribute to the cost of car leasing: net cap cost, cap cost reductions, residual, money factor, and term (see How Leasing Works). Now, let's put it all together and see exactly how a monthly lease payment is calculated. It's easy when you understand how it works. The "secret" lease payment formula described below is used by dealers and lease financing companies, who would prefer that you not know about it. Even federal leasing regulations do not require that leasing companies actually disclose how your payment is calculated. The calculation doesn't appear anywhere on a car lease contract form. The result is that the vast majority of people who lease do not know how to check dealers' math on their lease contract and cannot detect the existence of simple errors, intentional "mistakes", or out-and-out fraud. Lack of knowledge of how monthly lease payments are calculated is one of the key reasons that consumers are paying too much for car leases today. Importance of Knowing How to Do Lease Payment Calculation Let's establish why it's so important for you to know how to calculate monthly car lease payments. Consider the following: • If a dealer figures your lease payment based on full sticker price rather than the discounted price you negotiated with him, how will you know? • If a dealer doesn't give you proper credit for your trade-in, even though it's in your contract, how will you know? • If a dealer adds hidden charges and fees to your lease without mentioning them or showing them in your contract, how will you know? • If a dealer mistakenly "drops" a zero and gives you credit for only $100 of a $1000 rebate, even though your contract shows the $1000 rebate, how will you know? • If a dealer doesn't account for your $3000 cash down payment in his payment calculation, how will you know? • If a dealer "bumps" the interest rate (money factor) that he has quoted you (money factor is not shown in lease contracts), how will you know? Remember, all you see shown on a lease contract is a "bottom-line" monthly payment figure, after the calculations have been done by the dealer in the back office. Therefore, you must be able to check a dealer's lease payment figures to make sure there are no "mistakes," intentional or otherwise. Here's a quote from an email one of our readers recently sent us: "When I went to pick up the car, the dealer's calculations seemed wrong. The financial guy reassured me everything was fine. When I demanded to look at their work sheet I saw they had made an "error" on the money rate and also had somehow added some funds to the initial cap cost. In essence, I had been overcharged $75/month. It was corrected ... with an apology." If your payment figures and the dealer's don't agree, the only possible reason is that he's using a different set of numbers for cap cost, residual, money factor, or term than the numbers he's given you. Ask him to give you exactly the numbers he's using — and you should be able to exactly match his results, to the penny. Calculating Monthly Lease Payments — Options Let's now look at the lease payment formula — the way that all car leases are calculated. You can use the formula and calculate monthly payments with a simple pocket calculator. If you don't particularly like math, our Lease Kit provides easy to use Lease Payment Tables that can be printed and used in place of the formula. The printed tables can be carried with you to the dealer's showroom so that you don't need to remember how to do the math there. Further, our online Lease Calculator does all the math for you. Simply plug in the numbers and get your answer immediately. If you have a smartphone and can access our web site on the Internet, you can use the online calculator right in the dealer's office to check his calculations. Monthly Lease Payment Formula A lease payment is made up of three parts: a Depreciation Fee, a Finance Fee, and Sales Tax all added together. We'll look at the first two parts of the formula below. Sales tax is covered a little Depreciation Fee The depreciation fee portion of your payment simply pays the leasing company for the loss in value of its car, spread over the lease term (number of months), based on the miles you intend to drive and the time you intend to keep the car. You pay off an equal portion of the total expected depreciation each month. This is calculated as follows: Depreciation Fee = ( Net Cap Cost Residual ) ÷ Term Remember, Net Cap Cost is Gross Cap Cost (selling price you negotiate with the dealer) plus any add-on dealer fees and taxes that will not be paid up-front in cash, plus any prior loan balances, minus any Cap Cost Reductions (down payment, trade-in, or rebates). Net Cap Cost does not include any lease charges that you will pay in cash at the time of your lease signing. Residual is lease-end resale or residual value (as provided by your dealer), and Term is the length of your lease in months. A good lease deal is when you have the lowest possible Net Cap Cost with the highest possible Residual, along with the lowest possible Money Factor. Finance Fee The finance fee portion of your monthly lease payment is like interest on a loan and pays the leasing company for the use of their money. It's calculated as follows: Finance Fee = ( Net Cap Cost + Residual ) × Money Factor Yes, you add Net Cap Cost and Residual this is not a mistake. It's not double-counting as it may appear. It's simply a way of calculating the average amount financed without using complicated constant-yield annuity business formulas (for more details, click here). This is the method used by all lease companies and dealers. Also be aware that you're paying finance charges on both the depreciation and residual (the total of which is the negotiated selling price of the car). Remember, you're tying up the leasing company's money while you're driving their car. They used their money to buy the car that you will drive while you lease. Technically, you're paying finance charges on half of the depreciation (the average value) and all of the residual value for the term of the lease. The finance fee that you pay with a car lease depends on your credit score. The higher your score, the lower the fee, and the lower your monthly payment. You should always know your latest credit score before going shopping for a car lease or loan. There are actually 3 credit bureaus that report your credit. You can see all 3 bureau reports and 3 scores with a $1 seven-day trial instantly online with a simple enrollment in CreditReport.com What About Interest Rate? You won't find your Monthly Finance Fee or Interest Rate or Lease Money Factor shown in your lease contract. It's not required by law. Rather, they only show you a "Lease Charge" or "Rent Charge," which is the sum of all your monthly finance fees over the entire term of your lease. So, to find your Monthly Finance Fee when you only know your "Lease Charge" (or "Rent Charge") use the following Monthly Finance Fee = Lease Charge ÷ Term — or — If you know your "Lease Charge" or "Rent Charge" from your lease contract and you want to know your Money Factor, use the following formula: Money Factor = Lease Charge ÷ ( (Net Cap Cost + Residual) x Term ) — then — To convert Money Factor to APR Interest Rate, use the following formula: Interest Rate = Money Factor x 2400 Total Monthly Payment Now, add the Depreciation Fee and the Finance Fee that you calculated above to get your Total Monthly Payment. Sales tax must also be added in most states, but we'll hold that discussion until later. Total Monthly Payment = Depreciation Fee + Finance Fee Example Calculation Using the Leasing Formula So now that we've looked at the car lease payment formula, let's see how it actually works. Let's assume you've decided on 3-year (36 month term) lease of a Toyota Camry XLE that has a sticker price of $24,600 (MSRP). You have managed to negotiate the price down to $23,000 (Cap Cost). You decide not to make a down payment, but you have a trade-in worth $5000. Your Net Cap Cost is therefore $23,000 - $5000 = Now, the dealer tells you (because you asked) that the Money Factor is .00375 (.00375 x 2400 = 9.0%) and the Residual Percentage is 60% of MSRP. So your Residual amount, in dollars, is .60 x $24,600 = $14,760. Now let's do the math: Depreciation Fee = ( $18,000 $14,760 ) ÷ 36 = $90.00 Finance Fee = ( $18,000 + $14,760 ) × .00375 = $122.85 Monthly Lease Payment = $90.00 + $122.85 = $212.85 (sales tax not included) The lease payment formula is not complicated and can be used on a common pocket calculator. However, if you're not comfortable with performing the math, especially under pressure in a dealer's showroom, you can use the easy Payment Tables contained in our optional Lease Kit. Or if you've already leased and need to know if your deal was fair and honest, use the Lease Inspector in our optional Lease Kit. Or you can use our Lease Payment Calculator to calculate payments. Or use our Lease vs Buy Calculator to compare lease versus loan costs.
{"url":"http://www.leaseguide.com/lease08.htm","timestamp":"2014-04-18T03:35:03Z","content_type":null,"content_length":"21993","record_id":"<urn:uuid:e0447d6c-e732-493c-bf6d-b5380c30ff47>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Provided by: hashinit, hashdestroy, phashinit - manage kernel hash tables #include <sys/malloc.h> #include <sys/systm.h> #include <sys/queue.h> void * hashinit(int nelements, struct malloc_type *type, u_long *hashmask); hashdestroy(void *hashtbl, struct malloc_type *type, u_long hashmask); void * phashinit(int nelements, struct malloc_type *type, u_long *nentries); The hashinit() and phashinit() functions allocate space for hash tables of size given by the argument nelements. The hashinit() function allocates hash tables that are sized to largest power of two less than or equal to argument nelements. The phashinit() function allocates hash tables that are sized to the largest prime number less than or equal to argument nelements. Allocated hash tables are contiguous arrays of LIST_HEAD(3) entries, allocated using malloc(9), and initialized using LIST_INIT(3). The malloc arena to be used for allocation is pointed to by argument type. The hashdestroy() function frees the space occupied by the hash table pointed to by argument hashtbl. Argument type determines the malloc arena to use when freeing space. The argument hashmask should be the bit mask returned by the call to hashinit() that allocated the hash table. The largest prime hash value chosen by phashinit() is 32749. The hashinit() function returns a pointer to an allocated hash table and sets the location pointed to by hashmask to the bit mask to be used for computing the correct slot in the hash table. The phashinit() function returns a pointer to an allocated hash table and sets the location pointed to by nentries to the number of rows in the hash table. A typical example is shown below: static LIST_HEAD(foo, foo) *footable; static u_long foomask; footable = hashinit(32, M_FOO, &foomask); Here we allocate a hash table with 32 entries from the malloc arena pointed to by M_FOO. The mask for the allocated hash table is returned in foomask. A subsequent call to hashdestroy() uses the value in hashdestroy(footable, M_FOO, foomask); The hashinit() and phashinit() functions will panic if argument nelements is less than or equal to zero. The hashdestroy() function will panic if the hash table pointed to by hashtbl is not empty. LIST_HEAD(3), malloc(9) There is no phashdestroy() function, and using hashdestroy() to free a hash table allocated by phashinit() usually has grave consequences.
{"url":"http://manpages.ubuntu.com/manpages/hardy/man9/hashdestroy.9.html","timestamp":"2014-04-19T10:22:18Z","content_type":null,"content_length":"7228","record_id":"<urn:uuid:6273b65d-27e5-4cab-ae04-02049ec0e1d3>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Portability GHC Stability highly unstable Maintainer stephen.tetley@gmail.com Base types for Drawing Objects, Graphics / Images (a Graphic that also returns an answer), etc. ** WARNING ** - some names are expected to change particularly the naming of the append and concat functions. data HPrim u Source Graphics objects, even simple ones (line, arrow, dot) might need more than one primitive (path or text label) for their construction. Hence, the primary representation that all the others are built upon must support concatenation of primitives. Wumpus-Core has a type Picture - made from one or more Primitives - but Pictures include support for affine frames. For drawing many simple graphics (dots, connector lines...) that do not need individual affine transformations this is a penalty. A list of Primitives is therefore more suitable representation, and a Hughes list which supports efficient concatenation is wise. data DrawingF a Source Drawings in Wumpus-Basic have an implicit graphics state the DrawingContext, the most primitive building block is a function from the DrawingContext to some polymorphic answer. This functional type is represented concretely as DrawingF. DrawingF :: DrawingContext -> a Monad DrawingF Functor DrawingF Applicative DrawingF Monoid a => Monoid (DrawingF a) pureDF :: a -> DrawingF aSource Wrap a value into a DrawingF. Note the value is pure it does depend on the DrawingContext (it is context free). type LocGraphic u = Point2 u -> Graphic uSource Commonly graphics take a start point as well as a drawing context. Here they are called a LocGraphic - graphic with a (starting) location. type Image u a = DrawingF (a, HPrim u)Source Images return a value as well as drawing. A node is a typical example - nodes are drawing but the also support taking anchor points.
{"url":"http://hackage.haskell.org/package/wumpus-basic-0.8.0/docs/Wumpus-Basic-Graphic-BaseTypes.html","timestamp":"2014-04-16T19:46:40Z","content_type":null,"content_length":"29711","record_id":"<urn:uuid:328284ef-3b17-4df2-be10-611af9d1fe70>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
9th Physics Solved Numerical Floating bodies ( Gravitation) Term-2 Q. a cube of mass 1kg with each side of 1cm is lying on the table. find the pressure exerted by the block on the table. take g=10 m/s^2 Ans: Pressure is given as force/area so, Force, F = mg = 1000 X 981 gm.m/s^2 and area, A = 1x1 cm^2 = 1 cm^2 Thus, the pressure exerted would be P = (981 X 1000) / 1 or P = 9.81 X 10 ^5 pa Q. The mass of a solid iron cube of side 3cm is to be determined usig a spring balance. If the of iron is approximately 8.5 g/cm^3, the best suited spring balance for determining weight of the solid would be of 1. range 0-250gwt ; least count 1gwt 2. range 0-250gwt ; least count 5gwt 3. range 0-1000gwt ; least count 5gwt 4. range 0-1000gwt ; least count 10gwt Ans: Edge=3 cm , Density=8.5 g/cm^3 Mass= density x volume = 8.5 x(3x3)=229.5gwt Therefore second spring balance of range 0-250 gwt with least count 5gwt will be suitable. Q. The density of turpentine oil is 840 Kg/ m3. What will be its relative density. (Density of water at 4 degree C is 10 cube kg minus cube) Relative Density = Density of Substance/ Density of water at 4 0c Density of turpentine oil = 840 kg/ m3 ( given). Density of water at 4 0c = 1000 kg/ m3 Relative density of turpentine oil = Density of turpentine oil / Density of water at 4 0c = (840 / 1000 ) kg m-3/ kg m-3 = 0.84 Since, the relative density of the turpentine oil is less than 1, therefore it will float in water. Q. A solid body of mass 150 g and volume 250cm3 is put in water . will following substance float or sink if the density of water is 1 gm-3? Ans: The substance will float if its density is less than water and will sink if its greater. so, density of solid body is d = mass/volume or d = 150/250 = 0.6 gm/cm3 which is less than the density of water (1 gm/3). So, the solid body will float on water. Q. A body weighs 50 N in air and when immersed in water it weighs only 40 N. Find its relative density. Ans: the relative density would be ratio of the density of the body with respect to air and the density of the body with respect to water. so, F1 = 50 N F2 = 40 N so, F1/F2 = 50/40 or relative masses m1/m2 = 5/4 and density = mass/volume and as volume remains constant, Relative density = d1/d2 = 5/4 Q. A ball of relative density 0.8 falls into water from a height of 2m. find the depth to which the ball will sink ? Ans: Speed of the ball V = Ö2gh = Ö 2x10x2 = 6.32 m/s Buoyancy force by water try to stop the ball. Buoyancy force = weight of displaced water = dx Vxg where d = density of water V = volume of the ball , g = 10 m/s^2 deceleration of the body by buoyancy force, a = (dVg)/ m where m= d'V d' = density of block a = dVg/(d' V) = dg/d' =(d/d')*g =g/(0.8)= 10/0.8 (Given, d'/d = 0.8)= 12.5 m/s^2 Net deceleration of ball,a' = a-g = 2.5 m/s^2 Final speed of ball v' = 0 Use v' ^2 = v^2 + 2a's s= depth of ball in the water => 40 = 0 + 2x2.5xs => s = 8m Q. Equal masses of water and a liquid of relative density 2 are mixed together. Then, the mixture has a relative density of (in g/cm^3) a)2/3 b)4/3 c)3/2 d)3 Ans: he masses of two liquids are equal, let it be m. Let the relative densities of water and liquid be ρ[1] and ρ[2] respectively. The volume of the two be V[1] and V[2], of water and liquid respectively. The volume of the mixture would be, V = V[1] + V[2 ](1) also, volume = mass/density 2m/ρ (V) = m/ρ[1] (V [1][ ]) + m/ρ[2] (V [2][ ]) here ρ[1] = 1,ρ[2] = 2 and ρ is the relative density of the mixture. 2/ρ = 1/ρ[1] + 1/ρ[2] by substituting the values, we ρ/2 = 2/3 or, the relative density of the combined liquid will be, ρ=4/3 No comments:
{"url":"http://cbseadda.blogspot.com/2012/02/9th-physics-solved-numerical-floating.html","timestamp":"2014-04-18T08:08:08Z","content_type":null,"content_length":"167005","record_id":"<urn:uuid:d3e1f3c8-8205-4354-9e4e-9e055eef1970>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Repeating Nines I've been told authoritatively that 0.999999... equals 1. Unfortunately, I have never been that good at listening submissively. When looking at repeating nines, I see a rich universe of values approaching one. I agree that–for all practical calculations–we can treat endless decimals with nothing but nines as one, yet I am unwilling to say with absolute certainty that convergent sequences reach their point of convergence. Repeating nines stand in that mysterious transition of a converging sequence becoming one. I see within that transition itself a great mysterious universe filled with intrigue. To be fair to readers, if you are in a math class and your professor is making a big deal about repeating nines equaling one, just accept what the teacher says without question. This is not an area where critical thinking is welcome. Critical thinking is a tool you use against your enemies and not your friends. There are good reasons to accept that repeating nines sum to one. Ignoring the margin of error simplifies calculations. There is a popular theory in vogue today that ignores the margin of error, but is considered internally consistent (ingoring Godel for the moment). Meanwhile, there is another theory (the Aristotelian tradition of logic) which is extremely unpopular that says the margin of error exists, but accepts that we can ignore it. This second tradition is not internally consistent. By repeating nines, I am referring to an infinite decimal in base ten. An infinite decimal is the summation of an infinite series of the form a[1]/10, a[1]/10^2, a[3]/10^3, ..., a[n]/10^n, ... where a is a number from 0 to 9. Each entry in the series is a full magnitude smaller than the previous entry. All infinite decimals converge. Theorists have speculated that infinite decimals can converge to any point on the number line. For example, the decimal expansion of pi might look something like: 3.14159265358979323846... I've been told that the full infinite expansion of the decimal actually equals pi. With great temerity, I still hold that any decmimal expansion is never exactly equal to pi. The decimal expansion is simply an aproximation of pi. Note, that if c is equal to the first n digits of pi, then 1 - (pi - c) contains a string of (n-1) repeating nines. With each additional digit in our calculation of pi, we extend our series of nines and get closer to the real value of pi. However, a million digit decimal is still not equal to pi. If we subtract the difference of the absolute value of pi from a finite decimal representation of pi, we will get a decimal of repeating 9s. 1 - (pi - 3.14159265358979323846...) = 0.99999999999999999999... Expanding pi indefinitely still leaves us with a mysterious logical entity between pi and the decimal. Saying that 0.99999... equals 1 is the same thing as saying that converging sequences actually reach the point of convergence. Although the quantities are essentially the same, absolute equality requires a major metaphysical leap. One of the best places to start a discussion on repeating nines is to simply try to subtract an infinite decimal with repeating nines from 1. Try to perform the following subtraction: The first thing I hope you notice is that there is essentially nothing between the two numbers. On the other hand, it is impossible to actually do the subtraction. We write decimals from left to right, but do subtraction from right to left. Infinite decimals are unbound on the right. There is not a furthest right hand digit. That means we cannot do standard subtraction. We can try doing a left to right subtraction. In the first digit to the right of the decimal we see we are trying to subtract 9 from 0. So we borrow the 1 from the left of the decimal. This gives us 10 - 9 which leaves a 1. We can borrow this new 1, and subtract the next 9 from a 10 leaving 1. We can repeat this process of borrowing ones forever. These are both infinite strings, and we are in an infinite loop. Each iteration of the infinite loop leaves us with a smaller and smaller remainder. Yet there is always a remainder. Subtracting 0.99999... from 1.0000... leaves 0.00000... plus a remainder that has been infinitely dimished. Some might try calling the remainder an infinitesmal, but clearly there is no longer any appreciable space or value between one and repeating nines. The diminished remainder is simply the creation of our logic. I will call it a logical entity. As repeating nines and the unit whole have essentially the same value, are they the exact same thing, or does the existence of this dimensionless logical entity play a role in things? Perhaps, I should preface the question with an even more fundamental question: Should the existence of this strange logical entity even be acknowledged? Acknowledging the existence of the question brings to light that there may be more than one possible theory for the nature of calculus. Should we not just say authoratively that repeating nines equal one reject the question? For transfinite theorists, the question itself is troubling: Repeating nines have essentially the same value as 1. In this regard it belongs to the set of things that equal one. From outward appearance, 0.9999... belongs to the set of things less than one. Having a number that appears to belong to both the set of things less than one and belonging to the set of things equal to one is extremely troubling to transfinite theorists as it breaks down the division between less than and equal. How can something be both less than and equal? The methods used by Eudoxes demands a clear distinction between less than and equal. This group chooses to say repeating nines are 1. Repeating nines are not part of the set of things less than one. Those who studied logic in the tradition of Aristotle tend to say that the repeating nines approach the unit whole but never equal it. Such a view permits us to jump from repeating nines to one. Yet this wishy-washy-ness is a disconcerting. A third view sees repeating nines like the barber of Seville. The Barber of Seville is the only barber in town. He states that everyone in town either gets his hair cut in his shop, or they cut their own hair. The barber's clear distinction falls apart when asked which group he belongs to. He belongs to both groups. There are different views as to how we should handle this strange logical entity dividing repeating nines and the unit whole. Saying that repeating nines equals the unit whole leads to a nice internally consistent model. Because mathematicians like this model they reject the question. I believe that there are many different legitimate models that we can use to describe space and time. Following John Cougar Mellencamp's example, I fight authority, knowing authority always win. My thoughts on this subject begin at an interesting point: I've been told authoratively that every number on the the number line can be expressed by a distinct infinite decimal. This leads to a problem: Let's say that q is an irrational number. As a member of the reals it can be expressed as a distinct infinite decimal. If I understand the term "distinct" to mean that q differs from all other infinite decimals at a finite digit from the decimal point (I will call this digit n), then I can construct a real number that exists between q and all real numbers less than q. To do so, I simply truncate q at a digit somewhere to the right of digit n. For my concept of infinite decimal to be complete, I have to modify my understanding of distinct decimal. That is I have to accept that there might be two distinct decimals a and b, where a and b do not begin to differ until after an infinite number of digits from the decimal point. Accepting such an idea means that I cannot establish absolute equality simply by starting at the decimal point and comparing digit by digit. In other words, I might have two decimals with repeating nines, but the decimals themselves are different. For example (1 - 1/2^n) where n approaches infinity is equal to 0.99999. Also (1 - 1/3^n) = 0.99999... Comparing digit by digit, they appear the same, but they still might be distinct numbers. To an extent these converging sequences are distinct. Each operation converges at a different rate. Converging sequences may have the same ultimate value, but still having some subtle distinctions. We could compare the results digit by digit and find no difference and even treat them as practically equal. Yet there still might be a difference at the absolute level. Repeating nines seem to offer the same problem as multiplying and dividing by zero 0 * 5 = 0 * 4. Dividing both sides of the equation by 0 we get 5 = 4. There really is not a moral precept against dividing by zero. It is just that information gets lost when we perform division by zero...causing problems in our reasoning process. Converging sequences lets us get around some of the problems of dividing by zero. Is this in part because the sequences preserve information otherwise lost? Fractional Concepts Fractions behave in an interesting way when converted to infinite decimals. The infinite decimal represenation of fractions all end in repeating digits. For example 1/3 = 0.3333... The three will repeat indefinitely. Sometimes, several digits will precede the repeating digits. For example 1/6 = 0.1666... The 6 is the repeating digit. In base ten, When the denominator of the fraction has no other factors than 2 or 5, the decimal representation will end with repeating zeros. For example 1/2 = 0.5000..., 1/5 = 0.2000..., 1/20 = 0.050... All other fractions end in repeating digits. In the last section of this essay, I suggested that two numbers might have the same decimal representation for an infinite number of digits and still not be the same number. I have wondered at times if perhaps the infinite decimal 0.333... and 1/3 are actually the same thing. As I do the long division that produces 0.3333..., I notice that each iteration of the long division leaves me with a remainder. Certainly, each step in the expansion of the decimal gets me closer to the true value of 1/3, but there is still a remainder. Expanding the decimal 1000 digits gives me a number extremely close to a third, but there is still a remainder of 1/(3^1000). Here is the really strange thing: Clearly, three thirds equals a whole. 1/3 + 1/3 + 1/3 = 1. If I add up the infinite decimals, I get repeating nines: You are likely to come across math texts claiming that: if three thirds equals a whole, and repeating threes add up to repeating nines; Then repeating nines equals a whole. The flaw in this proof is with the stipulation that repeating threes equal a third. The same mysterious logical entity that stands between between repeating nines and a whol stands between repeating threes and a third. After then nth iteration of the expanding a third into decimal form, I still have a remainder 1/(3*10^(n+1)). A third equals repeating threes plus the logical entity. Adding three thirds equals repeating nines plus the strange logical entity. The demonstatration that three repeating threes add up to a repeating nine adds to my belief that repeating threes might be different from the absolute value a third. There are some strange things going on with fractions. Let's say I had a fraction of the form a/b where b is has factors other than 2 or 5. a is a number between 1 and b. Such a fraction would be between 0 and 1. The digits in the decimal expansion of a/b and (b-a)/b will add up to repeating nines. Here are a few samples: 1/7 = 0.142857... (142857 repeats) 6/7 = 0.857142... 2/7 = 0.285714... 5/7 = 0.714285... 3/7 = 0.428571... 4/7 = 0.571428... 1/6 = 0.166666... (6 repeats) 5/6 = 0.833333... (3 repeats) 5/11 = 0.454545... (45 repeats) 6/11 = 0.545454... (54 repeats) 1/11 = 0.0909090... (90 repeats) 10/11= 0.9090909... (09 repeats) I find it interesting that the infinite decimals of a/b and (b-a)/b will always add up to repeating nines. Are All Repeating Nines Equal? Earlier in this brain fart, I mentioned that the (1 - 1/2^n) converges at a different rate than (1 - 1/3^n). Altough both numbers produce an endless string of repeating nines, I am willing to accept that they still are different numbers. The fact that the infinite decimal expansions of a/b + (b-a)/b always creates an string of repeatings nines in one step makes me wonder if the repeating nines produced by adding 0.333... and 0.666... is absolutely equal to the repeating nines produced by adding 0.090909... and 0.909090... Wallace's Trick In Everything and More, David Foster Wallace presents an interesting trick to convince the world that repeating nines "equal" the unit whole. He starts with x = 0.9999... He then subtracts x from 10*x as follows: 10 - 1 = 9. Hence 9x = 9.000... implying that x=1.0. We start with x = 0.999... and conclude that x equals 1.0. In the essay above we noted that a digit to digit comparison from left to right does not prove that numbers are actually the same thing. This leaves the possibility that (10 * 0.9999... - 9) is not actually equal to 0.9999... despite the fact that we can match the digits from left to Base Two Conversions I do not like limiting myself to base ten. All bases behave similar to base ten. Working in base six, I would find that fractions of the form a/b, where b has factors other than 2 or 3, then a/b + (b-a)/b will add up to repeating fives. Base two is perhaps the most interesting base. All fractions would be represented as infinite decimals with the digits 0 and 1. For example 1/3 = 0.010101... In base two, repeating 1s play the same role as repeating 9s in base ten. Does 0.11111... = 1? In base two, the question of repeating nines is restated as: Is a zero followed by repeating 1s the same as a 1 followed by repeating zeros? Base two is interesting in that you can do all of your mathematics with bit functions. For example, when b is not a power of 2, a/b turns out to be the bit not of (b-a)/b. Repeating nines and the unit whole are essentially the same quantity. However, I have never been able to say with absolute certainty that they are the same thing. As we are unable to complete any infinite task, we do not know for certain that converging sequences ever actually equal the point of convergence. Certainly, we can make mathematical models that assume repeating nines equal one. Yet other mathematical models are equally valid. When considering the density of the set of all infinite decimals, I am left with a strange puzzle. Let's say a is a distinct infinite decimal. When I think of a as a member of the set of infinite decimals, I cannot find a finite point where it becomes distinct. That means that I could have two distinct infinite decimals, but not be able to prove they are distinct by starting at the decimal point and comparing digit by digit. The only conclusion I can derive from such thought experiments is that we do not know anything conclusively about infinite numbers.
{"url":"http://descmath.com/diag/nines.html","timestamp":"2014-04-20T10:46:59Z","content_type":null,"content_length":"17135","record_id":"<urn:uuid:17b3a685-e3f9-4af7-b126-5454c453f7e6>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
If an object is thrown vertically upward with an initial velocity of v, from an original position of s, the height h... - Homework Help - eNotes.com If an object is thrown vertically upward with an initial velocity of v, from an original position of s, the height h at any time t is given by: h = -16t^2 + vt + s (where h and s are in ft, t is in seconds and v is in ft/sec.) If a rock is thrown upward from a height of 100 ft, with an initial velocity of 32 ft/sec, sholve for the itme that it takes for it to hit the ground (when h = 0). Round your answer to 2 decimals. `h = -16t^2 + vt + s` s = original position = 100ft v = 32 ft/sec h = 0 when it hits ground Solve for t. `0 = -16t^2 + 32t + 100` `0 = (-32+-sqrt(32^2 - 4(-16)(100)))/(2(-16))` `0 = (-32+-sqrt(7424))/-32` `0 = -1.6925and 3.6925` Since time is not negative, -1.6925 is an extraneous solution, therefore it takes 3.69 seconds for the object to hit the ground. ` ` Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/an-object-thrown-vertically-upward-with-an-454929","timestamp":"2014-04-21T12:59:00Z","content_type":null,"content_length":"25674","record_id":"<urn:uuid:a37a94b3-70f6-4245-ab24-95ceeb905cc7>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
Cube Coloring Cube Coloring Problem Middle School Lessons || High School Lessons || Contents A lesson in which students explore cubes made out of unit cubes. Author: Linda Dickerson, Redmond School District, Redmond, Oregon Grade Level: Appropriate for grades 5-12 Investigate what happens when different sized cubes are constructed from unit cubes, the surface areas are painted, and the large cubes are taken apart. How many of the 1x1x1 unit cubes are painted on three faces, two faces, one face, no faces? Objective: Students will be able to: 1. Work in groups to solve a problem. 2. Determine a pattern from the problem. 3. Write exponents fro the patterns. 4. Predict the pattern for larger cubes. 5. Graph the growth patterns. 6. Extend to algebra Resources/Materials: A large quantity of unit cubes, graph paper, colored pencils or markers. Activities and Procedures: 1. Hold up a unit cube. Tell students this is a cube on its first birthday. Ask students to describe the cube (eight corners, six faces, twelve edges). 2. Ask student groups to build a 'cube' on its second birthday. Ask the students to build a cube on its second birthday and describe it in writing. 3. Ask students how many unit cubes it will take to build a cube on its third birthday, fourth, fifth... 4. Pose this coloring problem: The cube is ten years old. It is dipped into a bucket of paint. After it dries the ten year old cube is taken apart into the unit cubes. How many faces are painted on three faces, two faces, one face, no faces. 5. Have the students chart their findings for each age cube up to ten and look for patterns. 6. Have students write exponents for the number of cubes needed. Expand this to the number of cubes painted on three faces, two faces, one face, no faces. 7. Have students graph the findings for each dimension of cube up to ten and look for the graph patterns. 8. For further extension, see NCTM ADDENDA SERIES/GRADES 6/8. Tying it all together: The students will have a chance to estimate, explore, use manipulatives, predict, explain in writing and orally. They will note that the three painted faces are always the corners-8 on a cube. The cubes with two faces painted occur on the edges between the corner and increase by 12 each time. The cubes with one face painted occur as squares on the six faces of the original cube. The cubes with no faces painted are the cube within the cube. This is an excellent way for students to become involved in exploring a problem of cubic growth. Middle School Lessons and Materials High School Lessons and Materials [Privacy Policy] [Terms of Use] Home || The Math Library || Quick Reference || Search || Help © 1994-2014 Drexel University. All rights reserved. The Math Forum is a research and educational enterprise of the Drexel University School of Education.
{"url":"http://mathforum.org/paths/measurement/cube.html","timestamp":"2014-04-21T04:52:20Z","content_type":null,"content_length":"5823","record_id":"<urn:uuid:6010d985-8634-41e4-b7d0-0934c272b2d6>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Lagrange's theorem/generator/probability August 16th 2010, 10:04 PM Lagrange's theorem/generator/probability Hi just a few questions I'm having trouble with: Let p=1009 (a) What is the order n of the group $Z^{*}_p$? Write n as a product of prime powers. (b) Show that if $g\in Z^{*}_p$ then either g is a generator for $Z^{*}_p$ or at least one of the following equations holds: $g^{144}=1,g^{336}=1,g^{504}=1$ (Hint: Find the factors of n and apply Lagrange's Theorem.) (c)Use the result from part(b) to find a generator for $Z^{*}_p$.Use fast exponentiation to compute the necessary powers of your generator, and show your working. (d) Suppose you choose 10 elements at random from $Z^{*}_p$.Estimate the probability that at least one of them will be a generator For (a) I've Calculated... But the others i need help please =)
{"url":"http://mathhelpforum.com/number-theory/153873-lagranges-theorem-generator-probability-print.html","timestamp":"2014-04-21T12:30:08Z","content_type":null,"content_length":"5447","record_id":"<urn:uuid:910457c0-9e69-4ebc-961e-9000305bd8bd>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
A new place for activists: math Published: Wednesday, March 14, 2012 Updated: Wednesday, March 14, 2012 09:03 Remember the unit circle? Of course you don’t. It’s a bunch of numbers lost in the fog of high school geometry. But it’s not your fault. It’s pi’s fault. Pi is wrong, and I want you to help make it I don’t mean that pi is factually wrong; the ratio of a circle’s circumference to its diameter hasn’t changed. I mean that it’s the wrong choice of the circle constant because it leads to weird and unnatural situations. Let me explain. Mathematicians don’t like to measure circles in degrees. They prefer radians, which are just a way of making every circle look like the unit circle, regardless of size. Because the unit circle has a radius of one, its diameter is two and its circumference is two−pi. Therefore, every circle has a circumference of two−pi radians. Pi radians is only half a circle. That’s all the math you need. I So, in classic textbook tradition, let’s apply math to a real−world situation where you would never actually need it. Say you’re cutting up your favorite circular fruit−filled pastry and your friend wants a mathematically precise amount. Where do you cut? The problem is that one pie isn’t one−pi — it’s two−pi. If you want an eighth of a pie, it’s a quarter pi, measured along the crust. It’s also really confusing, measured from anywhere. The way to fix this is to make the circle constant the size of the whole circle, currently known only as two−pi. If I had a time machine, I’d go explain this to Leonard Euler and make pi twice as large, enough to cover the whole circle. Our beloved 3.14 would be known by a name that belies its semicircular nature — one pierogi, perhaps. But it turns out I don’t have a time machine (surprise!), so unless we want to change every math textbook and paper ever written, we’re stuck with pi meaning half a circle. The next best thing is to give the true circle constant — around 6.28 — a name. We’re going to call it — drumroll please — tau. Tau is another Greek letter that looks kind of like pi, if you amputated one leg and moved the other to the middle. Tau is the ratio of a circle’s circumference to its radius (instead of to its diameter). Think of one tau as one turn. You want an eighth of a turn? That’s one−eighth tau. Half a pie? Half a tau — or as some people say, one−pi. Contrary to what you might remember from calculus at 8:30 a.m. freshman year, math is supposed to be beautiful and simple. There are many other benefits of tau beyond making radians understandable to mere mortals, and arguments against tau that I could refute. We’re going to put them aside to address what I consider to be a great injustice: No university math department accepts tau. I want Tufts to be the first. We’re a forward−thinking university full of activists. Tufts should accept all students, regardless of their numerical beliefs or angular orientation. Even though pi and tau are 180 degrees apart (literally), I think we can turn this thing around. All we need to do is show a few pi−ous administrators that they are, in fact, two pi−ous. The effects will reach far beyond Tufts. The simplicity tau offers is not nearly as important for graduate students as it is for children first learning geometry. But since no one is going to teach nonstandard notation, tau will never catch on until higher education accepts it. Reducing math’s barrier to entry in middle school will lead to more scientists and engineers coming up with solutions for the world’s problems. It also means fewer people will be scared of math. How can such a small bit of notation make people less scared of math? To revisit the unit circle, take the angle five−eighths tau. It’s immediately clear that it’s a little bit more than half a turn. But pi messes everything up. Substituting in, that’s ten−eighths pi, but the factors of two cancel, so it’s five−fourths pi. Scary mathematics, indeed. I want it out of our high schools and out of I’m not asking for much from our math department. I want the symbol tau to be officially accepted as an alternative way of writing two−pi on homework, exams and papers. Anyone who wants to continue to use pi can do so. If you want to know more about tau, go to TauDay.com (if you’re an engineer or math major) or Google “Vi Hart tau” and feel lucky (if you’re not). They’re my sources for this op−ed, so consider them cited. If you want to know more about the movement to bring tau to Tufts, you’re going to have to make that news yourself. Max Goldstein is a sophomore majoring in computer science. Be the first to comment on this article! Log in to Comment You must be logged in to comment on an article. Not already a member? Register now Most Commented • Op-Ed | University pass: a benefit for all 3 comments Recent Comments • Being an international student isn't easy, given our complex culture and language. Assistance must come from various sources, including proper representation on campus. A new award-winning worldwide book/ebook to help anyone coming to the US is "What Foreigners Need To Know About America From A To Z: How to Understand Crazy American Culture, People, Government, Business, Language and More." It paints a revealing picture of America for those who will benefit from a better understanding, including international students. Endorsed worldwide by ambassadors, educators, and editors, it also identifies "foreigners" who became successful in the US and how they contributed to our society, including students. A chapter on education identifies schools that are free and explains how to be accepted to an American university and cope with a new culture, friendship process and classroom differences they will encounter. Some stay after graduation. It has chapters that explain how US businesses operate and how to get a job (which differs from most countries), a must for those who want to work for an American firm here or overseas. It also has chapters that identify the most common English grammar and speech problems foreigners have and tips for easily overcoming them, the number one stumbling block they say they have to succeeding here. Most struggle in their efforts and need guidance from schools' international departments, immigration protection, host families, concerned neighbors and fellow students, and books like this to extend a cultural helping hand so we all have a win-win situation. Good luck to all wherever you study!
{"url":"http://www.tuftsdaily.com/op-ed/a-new-place-for-activists-math-1.2715281","timestamp":"2014-04-16T04:11:38Z","content_type":null,"content_length":"51925","record_id":"<urn:uuid:e95f7557-d31b-402b-8592-359ed7aa9ea5>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Dataplot Vol 1 Auxillary Chapter Generate the tables for either a one-sample or a two-sample proficiency test as defined by the ASTM E 2489 - 06 standard. The following document "Standard Practice for Statistical Analysis of One-Sample and Two-Sample Proficiency Testing Programs", ASTM International, 100 Barr Harbor Drive, PO BOX C700, West Conshohoceken, PA 19428-2959, USA. describes a methodology for performing either a one-sample or a two-sample proficiency test. Proficiency testing is the use of interlaboratory comparisons for the determination of laboratory testing or measurement performance. The methods in the E 2489- 06 standard provide direction for assessing and categorizing the performances of individual laboratories based on the relative likelihood of occurence of their test results. The standard recommends a minimum of 10 laboratories and states that it is desirable to have 30 or more participating laboratories. Proficiency testing programs typically have a larger number of participants than an interlaboratory test. This results in a wider variation of test conditions, so a proficiency test can provide more information regarding the precision of test results that may be expected when a test method is used in the general testing community. In this standard, the median is used as the consensus value. The measure of variablility is the interquartile range. In this standard, the interquartile range is defined as the difference between the upper hinge and the lower hinge (this is slightly different than the standard definition of the interquartile range as the difference between the 75th percentile and the 25th percentile). The lower hinge is the median of the points less than or equal to the median and the upper hinge is the median of the points greater than or equal to the median. The inner fence is the value equal to the upper (lower) hinge of the data set plus (upper) or minus (lower) 1.5 times the interquartile range. The outer fence is the value equal to the upper (lower) hinge of the data set plus (upper) or minus (lower) 3.0 times the interquartile range. A test result that is between the lower inner fence and upper inner fence is labeled as "typical". A test result that is between the inner and outer fence values is labeled as "unusual". A test result that is beyond the outer fence values is labeled as "extremely unusual". These statistics are used because they are both simple and robust. Note that the above values are used in generating a box plot. Description of One-Sample Proficiency Test: The data consists of: 1. A response variable containing measurements on a sample 2. A lab-id variable For a one-sample proficiency analysis, each lab reports a single test result. This E 2489 one-sample proficiency analysis generates the three tables documented in the above document: 1. The test results sorted by lab-id. The purpose of this table is to make it easy to identify the results for a given laboratory. 2. The test results sorted in descending order with the median and lower and upper hinges marked. Each lab's result is categorized as "extremely unusual", "unusual", or typical. The purpose of this table is to show the range and distribution of the test results. 3. The test results sorted in descending order (as in table 2). However, the data are divided into bins. The purpose of this table is to show the range and distribution of the test results. In addition to the tables, the standard also recommends complementing the tables with a dot plot. These are also known as dot diagrams or strip plots. In Dataplot, these are referred to as strip plots. Enter the command HELP STRIP PLOT for details on generating these plots in Dataplot. The first program example below demonstrates this plot. A strip plot is an alternative to a histogram for displaying univariate data. The x-axis contains the value of the test result and the y-axis is simply a constant value. If two or more test results have the same value, the points are stacked vertically. You can draw the points are drawn as filled circles. Alternatively, you can draw the points drawn as the lab-id (this is useful for identifying outlying labs). You can also generate the strip plot with the data divided into bins (you can specify the bin width and the starting and ending bin limits). In this form, the vertical axis will represent the number of occurences. This form of the strip plot is essentially a histogram. Although the E 2489 - 06 standard does not explicitly talk about box plots, these can also be a useful complement to the tables since the box plot is a graphical representation of table 2. Description of Two-Sample Proficiency Test: The data consists of: 1. A response variable containing measurements on the first sample 2. A response variable containing measurements on the second sample 3. A lab-id variable For a two-sample proficiency analysis, each lab reports exactly two test results (i.e., a single measurement on each sample). The random error quantity is defined as (X - Y) - (X[med] -Y[med]) where X and Y denote the test results for sample one and sample two, respectively, and X[med] and Y[med] denote the medians of sample one and sample two, respectively. This E 2489 two-sample proficiency analysis generates the three tables documented in the above document: 1. The test results for both samples sorted by lab-id. The purpose of this table is to make it easy to identify the results for a given laboratory. 2. The test results sorted in descending order of the random error quantity with the median and lower and upper hinges marked. The random error quantity for each lab's result is categorized as "extremely unusual", "unusual", or "typical". The purpose of this table is to show the range and distribution of the random error quantities. 3. The test results sorted in descending order of sample two with the median and lower and upper hinges marked. The test results for each sample are categorized as "extremely unusual", "unusual", or "typical". The purpose of this table is to show the range and distribution of the test results for each sample. The standard also recommends complementing the tables with a Youden plot. This is demonstrated in the second program example below. Syntax 1: ONE SAMPLE PROFICIENCY TEST <y> <labid> <SUBSET/EXCEPT/FOR qualification> where <y> is a response variable; <labid> is a lab id variable; and where the <SUBSET/EXCEPT/FOR qualification> is optional. Syntax 2: TWO SAMPLE PROFICIENCY TEST <y1> <y2> <labid> <SUBSET/EXCEPT/FOR qualification> where <y1> is the first response variable; <y2> is the second response variable; <labid> is a lab id variable; and where the <SUBSET/EXCEPT/FOR qualification> is optional. ONE SAMPLE PROFICIENCY TEST Y LABID TWO SAMPLE PROFICIENCY TEST Y1 Y2 LABID You can use the CAPTURE HTML command to generate these tables in HTML format. You can use the CAPTURE LATEX command to generate these tables in Latex format. You can use the CAPTURE RTF command to generate these tables in Rich Text Format (RTF). Related Commands: E691 INTERLAB = Perform an E691 interlaboratory analysis. STRIP PLOT = Generate a strip plot. BOX PLOT = Generate a box plot. YOUDEN PLOT = Generate a Youden plot. CAPTURE HTML = Generate output in HTML format. CAPTURE LATEX = Generate output in Latex format. CAPTURE RTF = Generate output in RTF format. "Standard Practice for Statistical Analysis of One-Sample and Two-Sample Proficiency Testing Programs", ASTM International, 100 Barr Harbor Drive, PO BOX C700, West Conshohoceken, PA 19428-2959, Implementation Date: Program 1: . Read the data SKIP 25 READ E2489A.DAT LABID Y . Generate the tables to the screen . Now generate the tables in RTF format (for import into Word) . Generate the strip plot for the raw (unbinned) data Y1LABEL Number of Occurences X1LABEL Data Values TITLE Dot Diagram for Original Data YLIMITS 0 2 MAJOR YTIC MARK NUMBER 3 MINOR YTIC MARK NUMBER 0 SET STRIP PLOT INCREMENT 0.1 IF SYMBL = CIRCLE CHARACTER CIRCLE ALL CHARACTER FILL ON ALL CHARACTER HW 1 0.75 ALL END OF IF IF SYMBL = LABID END OF IF Y1TIC MARKS ON . Generate the strip plot for the binned data CLASS LOWER 0.5 CLASS UPPER 5 CLASS WIDTH 0.1 FRAME CORNER COORDINATES 15 40 85 70 Y1LABEL Number of Occurences X1LABEL Data Values TITLE Dot Diagram for Binned Data LET MAXFREQ = MAXIMUM Y2 LET NUMTIC = MAXFREQ + 1 MINOR YTIC MARK NUMBER 0 Y1TIC OFFSET 1 1 CHARACTER HW 1 0.75 ALL STRIP PLOT Y2 X2 . Now generate a box plot Y1LABEL Test Results TITLE Box Plot for Proficiency Data XLIMITS 0 2 X1TIC MARKS ON View the output from this command. Program 2: . Read the data SKIP 25 READ E2489B.DAT Y1 Y2 LABID . Generate the tables to the screen TWO SAMPLE PROFICIENCY TEST Y1 Y2 LABID . Now generate the tables in RTF format (for import into Word) TWO SAMPLE PROFICIENCY TEST Y1 Y2 LABID . Generate the Youden plot LET STRING SYMBL = LABID IF SYMBL = CIRCLE CHARACTER CIRCLE ALL CHARACTER FILL ON ALL CHARACTER HW 1 0.75 ALL END OF IF IF SYMBL = BOX CHARACTER BOX ALL CHARACTER FILL ON ALL CHARACTER HW 1 0.75 ALL END OF IF IF SYMBL = DIAMOND CHARACTER DIAMOND ALL CHARACTER FILL ON ALL CHARACTER HW 1 0.75 ALL END OF IF IF SYMBL = LABID LET LABIDSRT = LABID LET YSORT = SORTC Y LABIDSRT CHARACTER HW 2 1.50 ALL END OF IF TITLE OFFSET 2 X1LABEL Test Results for Sample One Y1LABEL Test Results for Sample Two TITLE Youden Plot of Test Results LIMITS 0 5 TIC MARK OFFSET 0 0.5 YOUDEN PLOT Y2 Y1 LABID LET AX1 = PROBEVAL LET AX2 = PROBEVAL LET AY1 = PROBEVAL LET AY2 = PROBEVAL DRAWDATA AX1 MEDX AX2 MEDX DRAWDATA MEDY AY1 MEDY AY2 DRAWDATA AX1 AY1 AX2 AY2 View the output from this command. Date created: 1/14/2009 Last updated: 1/14/2009 Please email comments on this WWW page to alan.heckert@nist.gov.
{"url":"http://itl.nist.gov/div898/software/dataplot/refman1/auxillar/proficie.htm","timestamp":"2014-04-20T21:21:46Z","content_type":null,"content_length":"19371","record_id":"<urn:uuid:90d0acbc-8801-4b0d-9f53-8a5b234b2451>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Long Island City Geometry Tutor Find a Long Island City Geometry Tutor Hello,I am an experienced math tutor looking to tutor math and calculus courses to students ranging from elementary school through 1st year university calculus level. I have an honors B.Sc degree in pure math, and I have 2 years of experience as a teaching assistant, teaching 1st and 2nd year unive... 15 Subjects: including geometry, calculus, algebra 1, algebra 2 ...I try to teach not just the terms or the steps involved in introductory logic, but to help students gain a conceptual understanding--a knowledge of both the "how" and the "why"--so they are better prepared for more advanced logic. I have 3 years of experience tutoring students for the NYS Englis... 34 Subjects: including geometry, English, GRE, writing ...In order to succeed in geometry, your child must develop a clear plan for approaching proofs, as well as learn how to think spatially. Michal can help develop that thought process. With the movement from the simplicity of four functions into the idea of properties of those functions, many children might feel lost. 8 Subjects: including geometry, calculus, algebra 1, algebra 2 ...But years later, is it hard now? After elementary school, these are routine math problems most people can easily accomplish. The math is the same, but it is easier to do now. 12 Subjects: including geometry, physics, MCAT, trigonometry ...In order for me to help someone learn the subject, I first have to understand what way they learn. Everyone learns differently; whether it be visually, by listening or being hands on. So using different methods and techniques are important in the process of learning. 11 Subjects: including geometry, algebra 1, trigonometry, elementary (k-6th)
{"url":"http://www.purplemath.com/long_island_city_ny_geometry_tutors.php","timestamp":"2014-04-16T07:45:18Z","content_type":null,"content_length":"24210","record_id":"<urn:uuid:b9049a45-d4f5-454f-8430-d6a05b7bf6cf>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Computational and Analytical Mathematics The Australian Mathematical Science Institute and the IRMACS Centre Present: A Workshop on Computational and Analytical Mathematics - Video Recordings While the free energy and spontaneous magnetisation of the two-dimensional Ising model have been known for more than 60 years, the susceptibility is still not completely known. Some 10 years ago we discovered and implemented a polynomial time algorithm for the series expansion of the susceptibility for the square lattice Ising model. This has... Nonlinear optimization is a key instrument in modern control engineering. In this talk we describe recent progress in the area of feedback control design based on the use of smooth and non-smooth optimization techniques. This talk concerns with the study of new classes of nonlinear and nonconvex optimization problems of the so-called infinite programming that are generally defined on infinite-dimensional spaces of decision variables and contain infinitely many of equality and inequality constraints with arbitrary (may not be compact) index sets. These problems... (Fr\'echet) smooth. Under some mild assumptions, it is shown that the infimal/supremal convolution of a fairly general function...
{"url":"http://conferences.irmacs.sfu.ca/jonfest2011/videos?page=4","timestamp":"2014-04-17T12:29:29Z","content_type":null,"content_length":"13788","record_id":"<urn:uuid:5ec39524-5b9a-4e1c-905f-24d5984bae5e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Existence of fine moduli space for curves and elliptic curves up vote 7 down vote favorite 1. For the moduli problem of a curve of genus $g$ with $n$ marked points, how large an $n$ is needed to ensure the existence of a fine moduli space? For this question, terminology is that of Mumford's GIT. 2. For the following three moduli problems, how big an $N$ is required for existence of a fine moduli space? The terminology is from the exposes of Deligne-Rapoport and Katz-Mazur, or Shimura. The first is in French, the second is too big, and the third is using old language and never mentions the modern terminology of universal elliptic curve, etc.. Therefore it is not possible for me to dig up the information myself. i) Elliptic curves equipped with a cyclic subgroup of order $N$ -- this moduli problem corresponds to the modular group $\Gamma_0(N)$. ii) Elliptic curves equipped with a point of order $N$ -- this moduli problem corresponds to the modular group $\Gamma_1(N)$. ii) Elliptic curves equipped with a symplectic pairing on $N$-torsion points -- this moduli problem corresponds to the modular group $\Gamma (N)$. References other than the above, will be appreciated. arithmetic-geometry ag.algebraic-geometry elliptic-curves moduli-spaces 2 To get a $Gamma_0(N)$-structure, the subgroup of order N needs to be cyclic (in a sense that is precisely explained in Katz-Mazur, chapter 3). – S. Carnahan♦ Jan 10 '10 at 2:28 Yes, of course. Thanks for pointing out. I have added. – Anweshi Jan 10 '10 at 2:29 If your two questions had mated, you would have asked what value of N guarantees the existence of a fine moduli space of genus-g curves endowed with various N-level structures on the Jacobian... – JSE Jan 10 '10 at 2:43 add comment 3 Answers active oldest votes If you want to work over a base ring such as $\mathbf{Z}[1/n]$ rather than over $\mathbf{Q}$ or $\mathbf{C}$ then the relevant numerical condition is that the part of $N$ coprime to $n$ not be "too small" in the $\Gamma_1$ and full level cases. For an extreme example, if $N$ is a $p$-power and you work over $\mathbf{Z}_{(p)}$ then you'll always have problems in characteristic $p$ at the supersingular points. up vote 14 On the other hand, if you're willing to go beyond schemes and work with algebraic spaces or Deligne-Mumford or Artin stacks then these issues go away (at the expense of more technical down vote background) in the sense that one has a reasonable "moduli space" over $\mathbf{Z}$ with nice regularity properties for all $N$ (even incorporating degenerations in the sense of generalized elliptic curves with level structure). It has better properties than a coarse moduli space (aside from perhaps not being a scheme...). add comment The first is unrepresentable for arbitrary large $N$ (it depends on the residue class of $N$ mod 12), the second is representatble for $N \geq 4$ (if you are considering $Y_1(N)$) or $N \ geq 5$ (if you are considering $X_1(N)$, i.e. including the cusps), the third is representable for $N \geq 3$. The references you mentioned are the standard ones. Probably Silverman discusses these in his books somewhere too (maybe the 2nd). If you look in Gross's Duke paper on companion forms (A tameness criterion ... ) you will find a summary of the story for $X_1(N)$. In the $\Gamma_0(N)$ case, Mazur has a careful discussion in the beginning of section 2 of his Eisenstein ideal up vote 13 paper. Both Gross and Mazur refer back to Deligne--Rapoport for proofs. down vote It is also just a matter of computing the torsion in each of the $\Gamma$'s (plus epsilon more if you want to understand representability at the cusps), which is an exercise. (Although you have to do a little work to see why this is the necessary computation.) 1 Regarding the answer to 2.i), I think the answer should be no for all N because a pair (E,C) with E an elliptic curve and C a cyclic subgroup always has at least the automorphism -1, no matter what N is. Maybe we are not all thinking about the same moduli problem? – Bjorn Poonen Jan 10 '10 at 7:06 You're right; I was rather just thinking about when $\Gamma_0(N)/\langle \pm 1 \rangle$ is torsion free. – Emerton Jan 10 '10 at 10:29 1 So this is another example of a moduli problem having no fine solution because of existence of automorphims. .. – Anweshi Jan 10 '10 at 14:17 @Emerton. I am unable to accept this answer because the question was in two parts, and the other part was answered by Jordan Ellenberg. See meta here .. tea.mathoverflow.net/discussion/ 178 – Anweshi Jan 22 '10 at 19:12 Dear Anweshi, I think I can speak for Jordan as well as myself in telling you not to worry about it. – Emerton Jan 22 '10 at 19:37 add comment Here is a thought on the first question. What you need to know (at least to get an algebraic space; I'll let others be more careful than I if you want a scheme) is how large n must be to ensure that an automorphism of a smooth genus g curve X which fixes n points must be the identity. Let G be the cyclic group generated by this automorphism: then the map X -> X/G is totally ramified at your n fixed points. So by Riemann-Hurwitz, g(X) [NO, 2g(X)-2, THANKS, BJORN) is at least -2|G| + n(|G|-1). If G is nontrivial, in other words, g is at least n-4 [NO, 2g+2, up vote THANKS, BJORN]. So I think g+5 [NO, 2g+3, THANKS, BJORN] marked points should be enough. That this is necessary can be seen by taking g=2; on M_{2,6} you'll have a bunch of loci with an extra 6 down involution, parametrizing curves whose marked points are precisely the Weierstrass points. [NO MORE LATE-NIGHT RIEMANN-HURWITZ: THANKS TO BJORN FOR CORRECTING THE ERRORS] Thanks, this was useful. However a solution to the scheme situation also will be appreciated(perhaps from the next person). – Anweshi Jan 10 '10 at 2:39 4 @JSE: I think you meant to have 2g(X)-2 on the LHS of your Riemann-Hurwitz, in which case you need n greater than 2g+2 (the number of fixed points of the hyperelliptic involution on a hyperelliptic curve of genus g). As for algebraic space vs. scheme, it's going to be a scheme since if you include level structure what you have is quasi-projective. – Bjorn Poonen Jan 10 '10 at 6:59 The second correction could use a little more correcting: maybe (n-2)/2. Also, the html "strike" tag can be useful for demarcating deprecated parts in a readable way. – S. Carnahan♦ Jan 10 '10 at 19:26 @JSE. I am unable to accept this answer because the question was in two parts, and the other part was answered by Emerton. See meta here .. tea.mathoverflow.net/discussion/178 – Anweshi Jan 22 '10 at 19:12 3 @JSE: There should be a law against texting while applying Riemann-Hurwitz. – Bjorn Poonen Feb 13 '10 at 6:18 add comment Not the answer you're looking for? Browse other questions tagged arithmetic-geometry ag.algebraic-geometry elliptic-curves moduli-spaces or ask your own question.
{"url":"http://mathoverflow.net/questions/11253/existence-of-fine-moduli-space-for-curves-and-elliptic-curves?sort=newest","timestamp":"2014-04-17T13:13:10Z","content_type":null,"content_length":"76286","record_id":"<urn:uuid:1ab955fe-073b-4a2f-9b7b-1669cf9dfe46>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project The Meixner Process This Demonstration shows a path of the (extended) Meixner process with four parameters and a cross-sectional ("marginal") density function of the process at a chosen moment in time. The kurtosis and skewness of the density at the given time are also displayed. The Meixner process is a pure-jump Lévy process with semi-heavy tails, which has been used successfully for stock price modelling and valuing derivative instruments. The Demonstration makes use of Mathematica 8's ability to generate random variates when an explicit formula for the probability density function is given. The Meixner process is a three-parameter pure jump Lévy process that was introduced in [1] and applied to finance in [2]. As with other similar processes, one can add a "drift" parameter, creating a four-parameter process particularly convenient for pricing derivative instruments. The process originated in the theory of orthogonal polynomials. It is a pure jump Lévy process (i.e. it has no continuous component) and was defined by explicitly giving its density function, which plays the central role in this Demonstration. [1] W. Schoutens and J. L. Teugels, "Lévy Processes, Polynomials and Martingales," Communications in Statistics: Stochastic Models , 1998 pp. 335–349. [2] W. Schoutens, Lévy Processes in Finance: Pricing Financial Derivatives , New York: John Wiley & Sons, 2003.
{"url":"http://demonstrations.wolfram.com/TheMeixnerProcess/","timestamp":"2014-04-20T03:28:44Z","content_type":null,"content_length":"42098","record_id":"<urn:uuid:3daf3eb1-e082-44b3-a6b3-da9300fc6690>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
hard interview questions Here's a list of good questions that my friends and I have compiled over the years. Solutions provided in separate nodes. 1. There is a village of wizards and a village of dwarves. Once a year, the wizards go over to the village of dwarves and line all the dwarves up in increasing height order, such that each dwarf can only see the dwarves smaller than himself. The wizards have an infinite supply of white and black hats. They place either a white or black hat on the head of each dwarf. Then, starting with the tallest dwarf (in the back of the line), they ask each what color hat he is wearing. If the dwarf answers incorrectly, the wizards kill him (the other dwarves can hear his answer, but can't tell if he was killed or not). What strategy can the dwarves use to minimize the number of dwarves that are killed? What is the most number of dwarves that will be killed using this optimal strategy? 2. Consider a two-player game played on a circular table of unspecified diameter. Each player has an infinite supply of quarters, and take turns placing a quarter on the table such that it is completely on the table and does not overlap with any other quarters already played. A player wins if he makes the last legal move. Which player (if any) has a strategy that will guarantee a win, and what is that strategy? (solution) 3. How do you reverse the order of the words (not the characters) in a string of length n in with constant extra space in linear time? (solution) 4. How do you rotate a string of length n by m characters with constant extra space in linear time (wrt n)? (solution) 5. Consider a rectangular cake with a rectangular section (of any size or orientation) removed from it. How do you divide the cake exactly in half with only one cut? (solution) 6. You have a bar of chocolate that consists of n x m square blocks. If you can only break one piece at a time, how many breaks are necessary to break the original n x m piece into n*m 1 x 1 pieces? How many are sufficient? (solution) 7. How do you quickly count the number of set bits in a 32-bit integer in linear time (with respect to the number of set bits)? In constant time? (solution) 8. Given an array of size N that contains values between 1 and N-1, find the duplicate element (assuming there is only one). If it contains values between 1 and N+1, how would you find the missing element (again assuming there is only one missing)? Do each in O(N). (solution) 9. Give a one-line C expression to test whether an unsigned int is a power of two. (solution) 10. How many points are there on the globe where by walking one mile south, one mile east and one mile north you reach the place where you started? (solution) 11. Given a singly linked list, determine whether it contains a loop or not. (solution) 12. Every day, Joe arrives at the train station from work at 6pm. His wife leaves home in her car to meet him there at exactly 6pm, and drives him home. One day, Joe gets to the station an hour early, and starts walking home, until his wife meets him on the road. They get home 20 minutes earlier than usual. How long was he walking? Distances are unspecified. Speeds are unspecified, but constant. (solution) 13. How do you divide a cake among n people, maximizing fairness? (solution) 14. In your cellar there are three light switches in the OFF position. Each switch controls one of three light bulbs on floor above. You may move any of the switches but you may only go upstairs to inspect the bulbs one time. How can you determine the switch for each bulb with one inspection? (solution) 15. Alice and Bob are on separate islands. Bob is sick, and Alice has the medicine. Eve has a boat and a chest that can be locked. She is willing to transport objects between Alice and Bob, but only in the chest, and if the chest is unlocked, she will steal whatever is inside. If both Alice and Bob have a padlock and a key such that their own key only opens their own lock, how can Alice send Bob the medicine so that Eve won't steal it? (solution) 16. Write some code to convert a positive integer into base minus 2. That is, whereas base 2 has a 1's place, a 2's place, a 4's place, etc., base minus 2 has a 1's place, a minus 2's place, a 4's place, a minus 8's place, ... (-2)^n. (solution) 17. A couple invites n-1 other couples to dinner. Once everyone arrives, each person shakes hands with everyone he doesn't know. Then, the host asks everyone how many hands they shook, and each person replies with a different number. Assuming that everyone knows his or her own spouse, how many hands did the hostess shake? (solution) 18. Two robots start at different places on the same linear track. What one program can you give to both robots to guarantee that they meet? The the program may consist only of the instructions move_left n, move_right n (where n is the number of spaces to move), if statements while loops, and the boolean values at_own_start and at_other_robots_start (note that you can't use other variables or counters). (solution) 19. Which offer is better and why? 1. You are to make a statement. If the statement is true, you get exactly $10. If the statement is false, you get either less than or more than $10 but not exactly $10. 2. You are to make a statement. Regardless of whether the statement is true or false, you get more than $10. You have two ropes and a box of matches. Each rope takes exactly one hour to burn, but they may not necessarily burn evenly -- i.e., the first half might burn in the first 10 minutes and the second in the remaining 50). How can you measure out 45 minutes by just using these two ropes? (solution) Consider three identical airplanes starting at the same airport. Each plane has a fuel tank that holds just enough fuel to allow the plane to travel half the distance around the world. These airplanes possess the special ability to transfer fuel between their tanks in mid-flight. Devise a scheme that will allow one airplane to travel all the way around the world, landing only at the original airport. (solution) You are at the bottom of the elevator shaft of a 100 story building. You see 21 wires labelled 1...21. The wires go up to the 100th floor where the ends are labelled A...U, but you don't know how they correspond to the ends at the bottom. You have a battery, a light bulb, and many small wires. How can you determine the pairing between the numbers and letters by only making one trip to the 100th floor and back down? (solution). A woman starts paddling upstream in a canoe, and after one mile, encounters a log floating with the current. She continues to paddle upstream for onehour, then turns around and paddles downstream, until she returns to the dock where she started. If the woman and the log reach the dock at exactly the same time, how fast was the current flowing? Assume all speeds are constant. (solution) Consider a centrifuge with 12 slots for test tubes. When you use a centrifuge, the tubes must be placed in the slots so that they are radially balanced (we can assume all tubes have the same mass). For example, for 3 tubes, you would place them in slots 4, 8 and 12. How can you place exactly 5 tubes in the centrifuge so that they are radially balanced? (solution) You are on a strict medical regimen that requires you to take two types of pills each day. You must take exactly one A pill and exactly one B pill at the same time. The pills are very expensive, and you don't want to waste any. So you open the bottle of A pills, and tap one out into your hand. Then you open the bottle of B pills and do the same thing -- but you make a mistake, and two B pills come out into your hand with the A pill. But the pills are all exactly identical. There is no way to tell A pills apart from B pills. How can you satisfy your regimen and take exactly one of each pill at the same time, without wasting any pills? (solution) Write an algorithm to find a given element in an n by n matrix where the rows and columns are monotonically increasing. (solution) General Alice and General Bob, commanders of the allied armies A and B, respectively, are camped in the mountains on either side of a valley. Alice and Bob would like to attack enemy army C, camped in the valley below. Army A by itself is unable to defeat army C, as is army B, but a coordinated attack by A and B at the same time will secure a victory for Alice and Bob. However, the only way Alice and Bob can communicate is by sending messengers through the valley, who may or may not get captured en route by the enemy army C. Is there an algorithm by which Alice and Bob can coordinate an attack on army C so as to secure their victory? (solution) Consider a circular race track with n gas stations spaced along it, each containing a fixed amount of gas. You are given an array containing the distances between consecutive gas stations and an array containing the amount of gas at each. Suppose the total amount of gas at all the gas stations is the same as the number of miles around the race track. Your car gets one mile to the gallon, but its gas tank has an unlimited capacity. Where do you start your car along the race track to guarantee that you get all the way around without running out of gas? Do this in O(n) time. (solution) Given an array of n integers, find all Pythagorean triples in the array, that is, three elements such that a^2 + b^2 = c^2. Do this in O(n^2) time. (solution) You are on a spaceship that has a computer with n processors. Suddenly, the spaceship gets hit with an alien laser beam, and some of the processors are damaged. However, you know that more than half of the processors are still good. You can ask one processor whether it thinks another processor is good or bad. A good processor will always tell the truth, but a bad one will always lie. A 'step' consists of asking one processor if it thinks another processor is good or bad. Find one good processor, only using n-2 steps. (solution) Given an array of n integers, where one element appears more than n/2 times, find that element in linear time and constant extra space. (solution) A spinning disc is painted black on one half and white on the other (i.e., the line forming the border between the black and white regions of the disc is a diameter of the disc). The disk is spinning on a turntable in an unknown direction at an unknown speed. You have special video cameras that can observe the color of a single point on the disc. How many cameras do you need to determine the direction the disc is spinning? (solution) Create an equilateral triangle using three toothpicks. Now add three more equilateral triangles of the same size as the original using only three more toothpicks. (solution) Feel free to /msg me with other cool interview questions or brainteasers you've heard... and don't bother sending the solutions, I like a good challenge.
{"url":"http://everything2.com/title/hard%2520interview%2520questions","timestamp":"2014-04-18T13:39:59Z","content_type":null,"content_length":"50534","record_id":"<urn:uuid:378b1d70-e005-4e12-995f-50bc0e93f2e8>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Stability experimental Maintainer Patrick Perry <patperry@stanford.edu> An overloaded interface to mutable banded matrices. For matrix types than can be used with this interface, see Data.Matrix.Banded.IO and Data.Matrix.Banded.ST. Many of these functions can also be used with the immutable type defined in Data.Matrix.Banded. • class (MatrixShaped a, HasVectorView a, HasMatrixStorage a, Elem e, BaseVector (VectorView a) e, BaseMatrix (MatrixStorage a) e) => BaseBanded a e where • class (BaseBanded a e, BLAS2 e, ReadTensor a (Int, Int) e m, MMatrix a e m, MMatrix (Herm a) e m, MMatrix (Tri a) e m, MSolve (Tri a) e m, ReadVector (VectorView a) e m, ReadMatrix (MatrixStorage a) e m) => ReadBanded a e m where • class (ReadBanded a e m, WriteTensor a (Int, Int) e m, WriteVector (VectorView a) e m, WriteMatrix (MatrixStorage a) e m) => WriteBanded a e m where • module Data.Matrix.Class • module Data.Matrix.Class.MMatrix • newBanded :: WriteBanded a e m => (Int, Int) -> (Int, Int) -> [((Int, Int), e)] -> m (a (n, p) e) • newListsBanded :: WriteBanded a e m => (Int, Int) -> (Int, Int) -> [[e]] -> m (a (n, p) e) • newZeroBanded :: WriteBanded a e m => (Int, Int) -> (Int, Int) -> m (a (n, p) e) • setZeroBanded :: WriteBanded a e m => a (n, p) e -> m () • newConstantBanded :: WriteBanded a e m => (Int, Int) -> (Int, Int) -> e -> m (a (n, p) e) • setConstantBanded :: WriteBanded a e m => e -> a (n, p) e -> m () • newCopyBanded :: (ReadBanded a e m, WriteBanded b e m) => a (n, p) e -> m (b (n, p) e) • copyBanded :: (WriteBanded b e m, ReadBanded a e m) => b (n, p) e -> a (n, p) e -> m () • rowViewBanded :: BaseBanded a e => a (n, p) e -> Int -> (Int, VectorView a k e, Int) • colViewBanded :: BaseBanded a e => a (n, p) e -> Int -> (Int, VectorView a k e, Int) • diagViewBanded :: BaseBanded a e => a (n, p) e -> Int -> VectorView a k e • getDiagBanded :: (ReadBanded a e m, WriteVector y e m) => a (n, p) e -> Int -> m (y k e) • module Data.Tensor.Class • module Data.Tensor.Class.MTensor Banded matrix type classes class (MatrixShaped a, HasVectorView a, HasMatrixStorage a, Elem e, BaseVector (VectorView a) e, BaseMatrix (MatrixStorage a) e) => BaseBanded a e whereSource Common functionality for all banded matrix types. numLower :: a (n, p) e -> IntSource Get the number of lower diagonals in the banded matrix. numUpper :: a (n, p) e -> IntSource Get the number of upper diagonals in the banded matrix bandwidths :: a (n, p) e -> (Int, Int)Source Get the range of valid diagonals in the banded matrix. bandwidthds a is equal to (numLower a, numUpper a). ldaBanded :: a (n, p) e -> IntSource Get the leading dimension of the underlying storage of the banded matrix. transEnumBanded :: a (n, p) e -> TransEnumSource Get the storage type of the banded matrix. isHermBanded :: a (n, p) e -> BoolSource Indicate whether or not the banded matrix storage is transposed and conjugated. coerceBanded :: a np e -> a np' eSource Cast the shape type of the banded matrix. maybeMatrixStorageFromBanded :: a (n, p) e -> Maybe (MatrixStorage a (k, p) e)Source Get a matrix with the underlying storage of the banded matrix. This will fail if the banded matrix is hermed. maybeBandedFromMatrixStorage :: (Int, Int) -> (Int, Int) -> MatrixStorage a (k, p) e -> Maybe (a (n, p) e)Source Given a shape and bandwidths, possibly view the elements stored in a dense matrix as a banded matrix. This will if the matrix storage is hermed. An error will be called if the number of rows in the matrix does not equal the desired number of diagonals or if the number of columns in the matrix does not equal the desired number of columns. viewVectorAsBanded :: (Int, Int) -> VectorView a k e -> a (n, p) eSource View a vector as a banded matrix of the given shape. The vector must have length equal to one of the specified dimensions. viewVectorAsDiagBanded :: VectorView a n e -> a (n, n) eSource View a vector as a diagonal banded matrix. maybeViewBandedAsVector :: a (n, p) e -> Maybe (VectorView a k e)Source If the banded matrix has only a single diagonal, return a view into that diagonal. Otherwise, return Nothing. unsafeBandedToIOBanded :: a (n, p) e -> IOBanded (n, p) eSource Unsafe cast from a matrix to an IOBanded. Elem e => BaseBanded IOBanded e Elem e => BaseBanded Banded e Elem e => BaseBanded (STBanded s) e class (BaseBanded a e, BLAS2 e, ReadTensor a (Int, Int) e m, MMatrix a e m, MMatrix (Herm a) e m, MMatrix (Tri a) e m, MSolve (Tri a) e m, ReadVector (VectorView a) e m, ReadMatrix (MatrixStorage a) e m) => ReadBanded a e m whereSource Banded matrices that can be read in a monad. unsafePerformIOWithBanded :: a (n, p) e -> (IOBanded (n, p) e -> IO r) -> m rSource Cast the banded matrix to an IOBanded, perform an IO action, and convert the IO action to an action in the monad m. This operation is very unsafe. freezeBanded :: a (n, p) e -> m (Banded (n, p) e)Source Convert a mutable banded matrix to an immutable one by taking a complete copy of it. unsafeFreezeBanded :: a (n, p) e -> m (Banded (n, p) e)Source BLAS3 e => ReadBanded IOBanded e IO BLAS3 e => ReadBanded Banded e IO BLAS3 e => ReadBanded Banded e (ST s) BLAS3 e => ReadBanded (STBanded s) e (ST s) class (ReadBanded a e m, WriteTensor a (Int, Int) e m, WriteVector (VectorView a) e m, WriteMatrix (MatrixStorage a) e m) => WriteBanded a e m whereSource Banded matrices that can be created or modified in a monad. newBanded_ :: (Int, Int) -> (Int, Int) -> m (a (n, p) e)Source Creates a new banded matrix of the given shape and bandwidths. The elements will be uninitialized. unsafeConvertIOBanded :: IO (IOBanded (n, p) e) -> m (a (n, p) e)Source Unsafely convert an IO action that creates an IOBanded into an action in m that creates a matrix. thawBanded :: Banded (n, p) e -> m (a (n, p) e)Source Convert an immutable banded matrix to a mutable one by taking a complete copy of it. unsafeThawBanded :: Banded (n, p) e -> m (a (n, p) e)Source BLAS3 e => WriteBanded IOBanded e IO Overloaded interface for matrices Creating banded matrices newBanded :: WriteBanded a e m => (Int, Int) -> (Int, Int) -> [((Int, Int), e)] -> m (a (n, p) e)Source Create a banded matrix with the given shape, bandwidths, and associations. The indices in the associations list must all fall in the bandwidth of the matrix. Unspecified elements will be set to zero. newListsBanded :: WriteBanded a e m => (Int, Int) -> (Int, Int) -> [[e]] -> m (a (n, p) e)Source Create a banded matrix of the given shape and bandwidths by specifying its diagonal elements. The lists must all have the same length, equal to the number of elements in the main diagonal of the matrix. The sub-diagonals are specified first, then the super-diagonals. In subdiagonal i, the first i elements of the list are ignored. Special banded matrices newZeroBanded :: WriteBanded a e m => (Int, Int) -> (Int, Int) -> m (a (n, p) e)Source Create a zero banded matrix with the specified shape and bandwidths. newConstantBanded :: WriteBanded a e m => (Int, Int) -> (Int, Int) -> e -> m (a (n, p) e)Source Create a constant banded matrix of the specified shape and bandwidths. Copying banded matrices newCopyBanded :: (ReadBanded a e m, WriteBanded b e m) => a (n, p) e -> m (b (n, p) e)Source Create a new banded matrix by taking a copy of another one. copyBanded :: (WriteBanded b e m, ReadBanded a e m) => b (n, p) e -> a (n, p) e -> m ()Source Copy the elements of one banded matrix into another. The two matrices must have the same shape and badwidths. Conversions between banded matrices and vectors Row and column views rowViewBanded :: BaseBanded a e => a (n, p) e -> Int -> (Int, VectorView a k e, Int)Source Get a view into the partial row of the banded matrix, along with the number of zeros to pad before and after the view. colViewBanded :: BaseBanded a e => a (n, p) e -> Int -> (Int, VectorView a k e, Int)Source Get a view into the partial column of the banded matrix, along with the number of zeros to pad before and after the view. diagViewBanded :: BaseBanded a e => a (n, p) e -> Int -> VectorView a k eSource Get a view of a diagonal of the banded matrix. This will fail if the index is outside of the bandwidth. Getting diagonals getDiagBanded :: (ReadBanded a e m, WriteVector y e m) => a (n, p) e -> Int -> m (y k e)Source Get a copy of the given diagonal of a banded matrix. Overloaded interface for reading and writing banded matrix elements Conversions between mutable and immutable banded matrices Conversions from IOBandeds
{"url":"http://hackage.haskell.org/package/blas-0.7.6/docs/Data-Matrix-Banded-Class.html","timestamp":"2014-04-20T13:26:37Z","content_type":null,"content_length":"39715","record_id":"<urn:uuid:de6193c1-9d17-461e-a8be-e4f41e6364be>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Thoughts on graduate school: an addendum Thoughts on graduate school: an addendum May 18, 2009 Posted by Ben Webster in math life. Of course, this process could go on endlessly, but I think there was an important point that Noah didn’t emphasize enough: talk to people. There are a few categories of talking that deserve special attention. • You should make a point of going to conferences whenever possible (it can be extremely easy to get travel money for conferences as a grad student), even if they’re not exactly your field. If you have something to speak about, and can get a speaking spot, even better. If you’re wondering how one goes to conferences, there’s a simple algorithm. 1. read the AMS math calendar 2. request funding for any ones that sound interesting 3. rinse and repeat. • You should do whatever you can, non-annoyingly, to cultivate relationships with mathematicians, especially ones who are older. They can give you valuable advice, serve as good references, and can be good collaborators. I feel like it can’t be emphasized enough: mathematics is a social activity. You’ll never learn it properly from books and papers, and you can’t rely on your advisor to tell you all the things you need to know. Rather, you have to talk to the people around you, and make sure you have people around you to talk to. Of course, different levels of talking are good for different people. I’m a pretty sociable guy, and that shows in my mathematical work (it’s been almost 3 years since I’ve written a solo paper and don’t have any on the horizon), but even if you don’t want to collaborate with people, you really do need to talk to them about math. While the AMS math calendar is a good place to start finding out about conferences, students should also be warned it’s far from complete, and its coverage varies a lot from one field to another. Some fields have active (or not-so-active) mailing lists; you should also find out if your field and adjacent fields have them, and get on them. (All of this falls under the heading of things you can’t necessarily rely on your advisor to tell you.) This topic of mailing lists is one that is presumably relevant to a number of readers of this blog. So I will ask, is there any reasonably complete list of such mailing lists out there somewhere on the internet? (And if not, then we should probably try to create one, for some appropriate definition of ‘we’.) A previous post (on “Secret listservs”) discussed exactly this topic. (maybe some more computer capable member of this blog can be in a link) [...] Update 2: excellent advice for the graduate students in math is available at the Secret Blogging Seminar, here and here. [...] Sorry comments are closed for this entry Recent Comments Erka on Course on categorical act… Qiaochu Yuan on The many principles of conserv… David Roberts on Australian Research Council jo… David Roberts on Australian Research Council jo… Elsevier maths journ… on Mathematics Literature Project…
{"url":"http://sbseminar.wordpress.com/2009/05/18/thoughts-on-graduate-school-an-addendum/","timestamp":"2014-04-19T04:44:15Z","content_type":null,"content_length":"69304","record_id":"<urn:uuid:c9fba5c7-0e2f-4f6a-a463-ed79bbcfa25f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
Six Sigma and the X-Y Matrix of Project Management X-Y matrix project management tool (or cause and effect matrix) is used mainly in six sigma DMAIC project. This is the ideal tool for prioritizing the input process parameters (X’s). The prioritization of X’s is important for performing process FMEA. • Procedure for Making an X-Y Matrix □ List out the CTQ (Y’s) for the targeted process. □ Make a table with numbers of rows and columns. □ Write down the CTQ against individual column. □ Write the weightage for each CTQ. □ Find out input process parameters (X’s) by brainstorming and write them against individual rows. □ Put suitable numbers in the intersection of rows and columns to show the relationship between X’s and Y’s. □ Finally calculate the weighted sum against each X’s. For clarity please read the example in the next paragraph. • An Example Let’s take the example of the newspaper printing process. After transforming the VOC we find the CTQ (Y’s) as below: □ Clearly readable print □ Good quality photo □ Harmless to health Upon brainstorming, the input process parameters (X’s) have been found as below: □ Good quality ink □ Less vibration during operation of printing press □ Paper quality For the above sets of X’s and Y’s the X-Y matrix table will look like the example below: Wherever there is a strong relation between X’s and Y’s, put 9. For weak relations put 3 or 1. Keep the intersection field blank if there is significant relation. Weighted sum for “Good quality ink” is calculated as 15*9 + 10*9 + 10*1 = 235. The calculation is similar for the rest of the X’s. The input parameters with a higher weighted sum should be selected for further FMEA. • Conclusion A X-Y matrix project management tool is useful for prioritizing the input parameters (X’s). CTQ (Y’s) of the six sigma project must be kept ready before preparing the cause and effect matrix. The X-Y matrix should be prepared by brainstorming of cross functional team members.The success of FMEA also depends upon the result of X-Y matrix.
{"url":"http://www.brighthubpm.com/six-sigma/42068-six-sigma-x-y-matrix-explained/","timestamp":"2014-04-21T01:59:23Z","content_type":null,"content_length":"39894","record_id":"<urn:uuid:99634fe5-c0e9-44b5-93cf-e05832ff67fa>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
Default values for the y and z-coordinate of Points, when the x-coordinate, or the x and y-coordinates only are specified. Both are 0 by default. These values only used in the constructor and setting function taking one required real value (for the x-coordinate), and two optional real values (for the y and z-coordinates). They are not used when a Point is declared using the default constructor with no arguments. In this case, the x, y, and z-coordinates will all be 0. See Point Reference; Constructors and Setting Functions. Point A(1); -| A: (1, 0, 0); CURR_Y = 5; -| A: (2, 5, 0); CURR_Z = 12; Point B(3); -| B: (3, 5, 12); Point C; -| C: (0, 0, 0);
{"url":"http://ftp.gwdg.de/pub/gnu2/3dldf/3DLDF/Point-Data-Members.html","timestamp":"2014-04-18T10:35:20Z","content_type":null,"content_length":"11182","record_id":"<urn:uuid:467b8573-7d9d-4baa-a454-7992d1bcbce9>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with work problem June 3rd 2008, 11:18 PM Help with work problem The region R bounded by the y-axis, the lines y=4 and y=0, and the portion of the (x^2) +4(y-4)^2=100 between the points (6,0) and (10,4) is revolved around the y-axis to form a container that is full of water. (See picture below.) How do I set this up? Using the disk method? June 4th 2008, 12:18 PM Yes - you each disc is centered on the y-axis, and you have the radius of each disc: R^2 = 100- 4(y-4)^2. The incremental volume of each disc is therefore: $<br /> <br /> dV = \pi (100-4(y-4)^2) dy<br />$ Integrate this from y = 0 to y =4.
{"url":"http://mathhelpforum.com/calculus/40568-help-work-problem-print.html","timestamp":"2014-04-20T04:48:19Z","content_type":null,"content_length":"4151","record_id":"<urn:uuid:60675fb5-33d3-4df1-ba99-cc455a2a05ec>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Twenty Objects Trick 1. Place 20 objects on a table. 2. Remove any number of items from 1 to 10. 3. The items that remain will be a two-digit number. 4. Find the sum of these two digits and remove that many more items from the table. 5. Now remove some of the remaining objects, and tell me how many you removed.
{"url":"http://www.pleacher.com/mp/puzzles/tricks/twenty2.html","timestamp":"2014-04-19T19:43:35Z","content_type":null,"content_length":"4153","record_id":"<urn:uuid:08fcd318-8519-4f7d-be1c-08bc57cbb2ec>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Course Content and Outcome Course Content and Outcome Guide for ELT 204 Course Number: ELT 204 Course Title: Adjustable Speed Drives Credit Hours: Lecture Hours: Lecture/Lab Hours: Lab Hours: Special Fee: Course Description Covers theory, operation, installation, and maintenance of adjustable speed motor drives. Introduces drive applications and selection for industrial, utility, and commercial structures. This class can be used towards Continuing Education Units for Oregon State electrical licensing purposes. Prerequisites: Placement in MTH 20 or higher; (WR 80 or ESOL 252) and (RD 80 or ESOL 250) or equivalent placement test scores. Audit available. Addendum to Course Description Students will be able to better understand the basics of adjustable speed drives and utilize a practical approach to troubleshoot and repair adjustable speed drives. Intended Outcomes for the course • Identify the concept of Force, Inertia, Speed & Torque. • Identify the difference between work & power. • Identify the construction of a squirrel cage motor. • Identify the nameplate information of an AC motor, & how it applies to an AC drive. • Apply understanding of the operation of a three-phase rotating magnetic field. • Calculate synchronous speed, slip, & rotor speed. • Identify the difference between volts/HZ., torque, and current. • Identify the basic construction & operation of a PWM (Pulse width modulation) type AC drive. • Identify the characteristics of constant torque, constant HP, and variable torque applications. • Apply understanding of direct-current motors and their use in a variety of industries. • Apply understanding of several methods of repeated closure of a circuit. • Identify the installation of motor drives and how to calculate the size required. Course Activities and Design A lecture-laboratory course in which the student may come from a wide variety of occupations. The emphasis of the course will depend on the needs of the students. Laboratory activities will utilize electrical test equipment to understand the operation of adjustable speed drives. Outcome Assessment Strategies Procedures will be discussed at the first class session and the instructor's grading policy will be referenced on the class syllabus. Assessment will be based on attendance, participation, homework, lab activities, assignments, short quizzes and written examination. Course Content (Themes, Concepts, Issues and Skills) • Identify the concept of Force, Inertia, Speed & Torque. • Identify the difference between work & power. • Identify the construction of a squirrel cage motor. • Identify the nameplate information of an AC motor, & how it applies to an AC drive. • Apply understanding of the operation of a three-phase rotating magnetic field. • Calculate synchronous speed, slip, & rotor speed. • Identify the difference between volts/HZ., torque, and current. • Identify the basic construction & operation of a PWM (Pulse width modulation) type AC drive. • Identify the characteristics of constant torque, constant HP, and variable torque applications. • Apply understanding of direct-current motors and their use in a variety of industries. • Apply understanding of several methods of repeated closure of a circuit. • Identify the installation of motor drives and how to calculate the size required.
{"url":"http://www.pcc.edu/ccog/default.cfm?fa=ccog&subject=ELT&course=204","timestamp":"2014-04-17T12:42:12Z","content_type":null,"content_length":"9653","record_id":"<urn:uuid:2b08ef9a-d396-4755-88c1-ab1dfb15ead7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Correlated slopes JM posted on Monday, September 07, 2009 - 2:35 pm Hi quick clarification question, if two slopes are both negative, and positively correlate in a longitudinal parallel-process model, what exactly does that mean....both slopes are negative but positively correlate? Thanks! Linda K. Muthen posted on Tuesday, September 08, 2009 - 9:59 am When one is lower the other is lower. When one is higher the other is higher. jpmv posted on Monday, April 18, 2011 - 2:08 pm I am interested in the correlation between the slopes of two variables. Because I was wondering whether the results would be the same when shared variance between the two variables were removed, I added the within time-correlations at each time point in the model specification. I have three questions hereby: 1. Was this the correct way to control for shared variance? 2. What are the pro's and contra's of controlling for shared variance? 3. The correlation between the slopes remains similar, but this result is no longer significant (despite the strong negative correlation). How should I interpret this? Thank you! Linda K. Muthen posted on Tuesday, April 19, 2011 - 9:32 am It soundslike you are talking about the residual covariance at the same time point between two outcomes that are part of two different growth model. Is this correct? Is this residual covariance what you mean by shared variance? jpmv posted on Wednesday, April 20, 2011 - 6:07 am Indeed, this is what I mean. Bengt O. Muthen posted on Thursday, April 21, 2011 - 8:06 am I think allowing for contemporaneous residual covariance between outcomes of two different growth processes is often necessary to capture the effects of left-out time-varying covariates that influence both processes. This then avoids channeling too much of the correlation between the two outcomes through the growth factors. That is, the correlation you get between the growth factors is more trustworthy when you include these contemporaneous residual covariances. Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=14&page=4651","timestamp":"2014-04-21T16:08:18Z","content_type":null,"content_length":"22679","record_id":"<urn:uuid:58daedb6-5878-467d-8108-baede954add7>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Comparing Ruby and C# Performance As I'm in the middle of learning Ruby and Ruby on Rails, I wanted to do a quick comparison of Ruby vs C#, knowing quite well that C# will outperform Ruby, but I wanted to get some idea by how much. This site indicates that Ruby is about 25 times slower on average, but I wanted to see for myself. As it turns out, Ashraff Ali Wahab had a couple days ago posted the article Eratosthenes/Sundaram/ Atkins Sieve Implementation in C#, and I figured this would be a quick way to write some tests. This is obviously not conclusive - it's a bit like comparing apples and oranges at the code level, especially since I tried to leverage the syntax of Ruby to my current level of understanding. Also, I decided to give IronRuby a try as well, however the tests were inconclusive as the program failed to complete. Source Code Please refer to Ashraff Ali Wahab article for the C# source code. Brute Force Algorithm def BruteForce(topCandidate) totalCount = 1 isPrime = true 3.step(topCandidate, 2) do |i| while j*j <= i && isPrime isPrime = false if i%j==0 j += 2 isPrime ? totalCount += 1 : isPrime = true Sieve of Eratosthenes Algorithm def SieveOfEratosthenes(topCandidate) myBA1 = Array.new(topCandidate + 1) {true} myBA1[0] = myBA1[1] = false thisFactor = 2 while thisFactor * thisFactor <= topCandidate do mark = thisFactor + thisFactor mark.step(topCandidate+1, thisFactor) {|n| myBA1[n] = false} thisFactor += 1 while !myBA1[thisFactor] do thisFactor += 1 Sieve of Sundaram Algorithm def SieveOfSundaram(topCandidate) k = topCandidate / 2 myBA1 = Array.new(k + 1) {true} myBA1[0] = myBA1[k] = false for i in 1..k do denominator = (i << 1) + 1 maxVal = (k - i) / denominator i.step(maxVal+1, 1) {|n| myBA1[i + n * denominator] = false} # this version takes .20 seconds longer to run 1M iterations! # for n in i..maxVal+1 do # myBA1[i + n * denominator] = false # end myBA1.count(true) + 1 def main max = 1000000 startTime = Time.now() primes = BruteForce(max) endTime = Time.now() elapsed = endTime - startTime printf("Elapsed time for Brute Force : %f Primes = %d\n", elapsed, primes) startTime = Time.now() primes = SieveOfEratosthenes(max) endTime = Time.now() elapsed = endTime - startTime printf("Elapsed time for Sieve of Eratosthenes: %f Primes = %d\n", elapsed, primes) startTime = Time.now() primes = SieveOfSundaram(max) endTime = Time.now() elapsed = endTime - startTime printf("Elapsed time for Sieve of Sundaram : %f Primes = %d\n", elapsed, primes) The Results As you can see from these screenshots: Ruby is: • about 5 times slower for the brute force algorithm • about 19 times slower for the Eratosthenes and Sundaram algorithms For my purposes, that's essentially inline with the shootout website I mentioned in the Introduction. Sadly, the IronRuby program did not complete: Dying on this line: But the brute force algorithm was consistently almost twice as slow. Running on a Virtual Box I'm also running Ubuntu on Virtual Box (2GB RAM, 3 processors) and was pleased with the results: Only about 3 times slower! While not conclusive, it was a useful exercise to go through. Note particularly the commented out Ruby code: i.step(maxVal+1, 1) {|n| myBA1[i + n * denominator] = false} # this version takes .20 seconds longer to run 1M iterations! # for n in i..maxVal+1 do # myBA1[i + n * denominator] = false # end The "for" loop version takes almost 50% longer! That is a significant and worthwhile discovery, and it essentially makes sense -- the step function is a library implementation (and I would assume therefore compiled) whereas the for loop I would imagine is constantly being interpreted. Still, it's a significant difference, especially considering that the block {|n| myBA1[i + n * denominator] = false} theoretically is implemented as a function call. Also, it was disappointing that the IronRuby code failed. I was hoping that something this "simple" would not have issues. Lastly, please do not take this as a detraction to Ruby! This is an amazing language and for many purposes, performance is not the most important concern - interactions with a database and network latency (if you're thinking of Ruby on Rails) will often contribute more to the perception of performance than the language performance. Also, there appear to be some compilers available, for example Rubinius as well as The Ludicrous JIT Compiler. The former looked much too complicated to try, and the latter, Ludicrous, I did try but was not successful with the installation. Given that the creator claims "Though still in the experimental stage, its performance is roughly on par with YARV", it doesn't seem that helpful, given that: "Probably the most exciting and visible change in Ruby 1.9 is the addition of a bytecode interpreter for Ruby. The YARV (Yet Another Ruby VM) interpreter was integrated into the Ruby project, replacing the interpreter created by Matz (aka MRI, Matz's Ruby Interpreter)." (read here).
{"url":"http://www.codeproject.com/Articles/491362/Comparing-Ruby-and-Csharp-Performance","timestamp":"2014-04-20T02:19:19Z","content_type":null,"content_length":"73315","record_id":"<urn:uuid:70e0ec07-6822-4b27-8310-b7f029990424>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Castle Point, NJ Calculus Tutor Find a Castle Point, NJ Calculus Tutor ...I have a degree in physics and a minor in mathematics. I'm also currently working on a masters in applied mathematics & statistics. In several courses, such as Ordinary Differential Equations (ODE) and Partial Differential Equations (PDE), we make heavy use of programs such as Mathematica and Maple. 83 Subjects: including calculus, chemistry, physics, statistics ...At first, chemistry can be really tough because it seems like there is always an exception to the rule, but once you begin to think like an atom, things get less electron cloudy. I can help you forgive your grievances with chemistry and let your pi-bonds be pi-bonds. At first, physics didn't make any sense to me either. 9 Subjects: including calculus, chemistry, physics, biology ...I also received credit for VEE Economics and VEE Corporate Finance (part of the SOA Actuary track). I am highly proficient in probability and statistics, which are tested in great detail in actuarial science. I also have tutored in material covered on exam P/1 and exam FM/2. Samuel I am highly proficient in linear algebra. 21 Subjects: including calculus, geometry, statistics, algebra 1 ...For students whose goal is to learn particular subjects, I make sure that the student understands the basics prior to delving into the details. In a nutshell, I provide tutoring based on the student's need. Thank you for your time reading this profile! 15 Subjects: including calculus, chemistry, geometry, statistics ...He is also well versed in the Regents tests, SHSAT, ISEE, SSAT, and the SAT Physics subject test. On the side, Ellery spends his time writing for and playing with his jazz sextet.I am currently an adjunct with New York University's physics department where I teach laboratories to undergraduates.... 8 Subjects: including calculus, physics, algebra 1, algebra 2 Related Castle Point, NJ Tutors Castle Point, NJ Accounting Tutors Castle Point, NJ ACT Tutors Castle Point, NJ Algebra Tutors Castle Point, NJ Algebra 2 Tutors Castle Point, NJ Calculus Tutors Castle Point, NJ Geometry Tutors Castle Point, NJ Math Tutors Castle Point, NJ Prealgebra Tutors Castle Point, NJ Precalculus Tutors Castle Point, NJ SAT Tutors Castle Point, NJ SAT Math Tutors Castle Point, NJ Science Tutors Castle Point, NJ Statistics Tutors Castle Point, NJ Trigonometry Tutors Nearby Cities With calculus Tutor Allwood, NJ calculus Tutors Ampere, NJ calculus Tutors Bayway, NJ calculus Tutors Beechhurst, NY calculus Tutors Bellerose Manor, NY calculus Tutors Doddtown, NJ calculus Tutors Dundee, NJ calculus Tutors Five Corners, NJ calculus Tutors Fort George, NY calculus Tutors Greenville, NJ calculus Tutors Highbridge, NY calculus Tutors Hoboken, NJ calculus Tutors Linden Hill, NY calculus Tutors Manhattanville, NY calculus Tutors Pamrapo, NJ calculus Tutors
{"url":"http://www.purplemath.com/Castle_Point_NJ_Calculus_tutors.php","timestamp":"2014-04-21T13:06:04Z","content_type":null,"content_length":"24339","record_id":"<urn:uuid:98ce239f-d4f9-4e6b-9c0a-e1238e8b567e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Forest City, WA Math Tutor Find a Forest City, WA Math Tutor ...I have tutored the ISEE and SSAT for about nine years, at each of the different levels. I work with students to familiarize them with the test format and test management strategies. We also work on content areas, reviewing math facts, learning vocabulary, and practicing critical reading and essay writing. 32 Subjects: including prealgebra, LSAT, algebra 1, algebra 2 ...I graduated high school with straight A's and took math classes up to and including Calculus I. I have a Bachelor of Science in International Economics and Management (3.9) and a Master of Science in Organizational Management (3.75). While pursuing these degrees I have completed many math course... 7 Subjects: including calculus, linear algebra, algebra 1, algebra 2 ...I studied computer science in college, with additional coursework in biology, physics, chemistry, anatomy and physiology, and linguistics. After graduation I worked as a technical writer and computer programmer for eight years. I enjoy writing, editing, and illustrating, and I have a deep understanding of computers, how they work, and how to make them work. 18 Subjects: including algebra 1, algebra 2, biology, chemistry ...We cover more material faster, it's much more convenient for our schedules, and I can email you PDFs of all of the problems that we did. We also video record our session so you can watch them again and again for free. My lessons are structured and thorough. 15 Subjects: including SAT math, GRE, ACT Math, prealgebra ...I am good at teaching students how to work with fractions. I have a Bachelor's Degree in Mechanical Engineering. I am very patient and can explain how to solve story problems. 5 Subjects: including precalculus, algebra 1, algebra 2, geometry Related Forest City, WA Tutors Forest City, WA Accounting Tutors Forest City, WA ACT Tutors Forest City, WA Algebra Tutors Forest City, WA Algebra 2 Tutors Forest City, WA Calculus Tutors Forest City, WA Geometry Tutors Forest City, WA Math Tutors Forest City, WA Prealgebra Tutors Forest City, WA Precalculus Tutors Forest City, WA SAT Tutors Forest City, WA SAT Math Tutors Forest City, WA Science Tutors Forest City, WA Statistics Tutors Forest City, WA Trigonometry Tutors Nearby Cities With Math Tutor Annapolis, WA Math Tutors Colby, WA Math Tutors Colchester, WA Math Tutors East Port Orchard, WA Math Tutors Fernwood, WA Math Tutors Horseshoe Lake, WA Math Tutors Lake Holiday, WA Math Tutors Long Lake, WA Math Tutors Orchard Heights, WA Math Tutors Overlook, WA Math Tutors Parkwood, WA Math Tutors Retsil Math Tutors South Park Village, WA Math Tutors View Park, WA Math Tutors Waterman, WA Math Tutors
{"url":"http://www.purplemath.com/forest_city_wa_math_tutors.php","timestamp":"2014-04-20T14:07:12Z","content_type":null,"content_length":"24048","record_id":"<urn:uuid:bb87946f-aa4c-473b-95e2-c656bc92fea9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
This will be an introductory talk, whereby I will speak on the mathematics of the modern theory of cosmology. This theory is based on Einsteinšs gravitational field equations for a perfect fluid, together with astronomical observations. Both of these topics will be considered, and I will also discuss the very interesting "standard" cosmological model. Towards the end of my talk, I will explain the recent modification of this theory, due to myself and Blake Temple, (University of California at Davis). The standard model, predicts a Universe of infinite mass and extent at each instant after the Big-Bang. We propose a modification of the standard model by one based on the mathematical theory of shock waves, which avoids this defect. In order to get the shock wave to lie beyond one Hubble length at present time, (this is needed to agree with astronomical observations), we prove that the expansion must take place inside a black hole (really a "white hole"). There are other interesting and unexpected features of our model, which will be discussed. I will also explain the undefined notions mentioned above.
{"url":"http://www.newton.ac.uk/Abstracts/Smoller.html","timestamp":"2014-04-19T15:08:43Z","content_type":null,"content_length":"2796","record_id":"<urn:uuid:997db1bc-46a9-463b-86fa-588485aa9703>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix approximation up vote 2 down vote favorite Let A be an $m\times n$ matrix and $k$ be an integer. Assume that $A$ is non-negative. We want to find a scalar $\epsilon$ and an $n\times n$ matrix $B$ such that $A\leq A(\epsilon I + B)$ (where $\ leq$ is an element-wise comparison). The goal is to minimize $\epsilon$ and we have the following restrictions on $B$: 1) $B$ is non-negative. 2) Each column of $B$ has $L_1$ norm at most 1. 3) There are at most $k$ rows of $B$ that are non-zero (i.e., at least $n-k$ rows are zero vectors). In case it helps, we may assume that $n>>k>>m$. My goal is to get an algorithm for computing $B$ to minimize $\epsilon$ (either exactly or approximately) and my general question is whether you know of anything related. (I'm not familiar with this area. It's not even clear to me if the problem is NP-hard or not.) Another thing is whether it is possible to bound $\epsilon$ in terms of $k$, $m$ and $n$? I suppose $A\leq A(\epsilon I + B)$ means element-wise inequality? Otherwise, I can't make sense of the question. (But then, why didn't you write condition (1) as $B\ge0$?) I guess the main difficulty stems from requirement (3), which seems to give the problem a rather combinatorial flavour. Without that, it looks like a standard linear programming problem. – Harald Hanche-Olsen Mar 30 '10 at 15:54 Do you assume that the elements of $A$ are nonnegative? If not, it may happen that there is no $\epsilon$ at all. – Sergei Ivanov Mar 30 '10 at 16:50 @Harald: a statement like $A \le B$ for matrices $A$ and $B$ often means that the matrix $A-B$ is non-negative definite. – Tom LaGatta Mar 30 '10 at 22:32 Yes, I meant element-wise inequality and assume that A is nonnegative. I will clarify these points on the problem statement. – Danu Mar 31 '10 at 4:03 add comment 1 Answer active oldest votes I'll address the last question (about an a priori bound for $\epsilon$). If $n\gg k\gg m$, the worst-case bound for $\epsilon$ is between $c(m)\cdot k^{-2/(m-1)}$ and $C(m)\cdot k^{-1/(m-1)}$ (probably near the former but I haven't checked this carefully). Note that the bound does not depend on $n$. Proof. The columns of $A$ form a set $S$ of cardinality at most $n$ in $\mathbb R^m$. For a given $\epsilon$, a suitable $B$ exists if and only if there is a subset $T\subset S$ of cardinality at most $m$ such that the convex hull $conv(T)$ majorizes the set $(1-\epsilon)S$ in the following sense: for every $v\in S$, there is a point in $conv(T)$ which is component-wise greater than $(1-\epsilon)v$. And this majorization is implied by the following: $conv(sym(T))$ contains the set $(1-\epsilon)S$, or equivalently, the set $(1-\epsilon)conv (sym(S))$, where by $sym(X)$ denotes the minimal origin-symmetric set containing $X$, that is, $sym(X)=X\cup -X$. Consider the polytope $P=conv(sym(S))$. We want to find a subset of its vertices of cardinality at most $k$, such that their convex hull approximates $P$ up to $(1-\epsilon)$-rescaling. This problem is invariant under linear transformations, and we may assume that $P$ has nonempty interior. Then Fritz John's theorem asserts that there is a linear transformation of $\ mathbb R^m$ which transforms $P$ to a body contained in the unit ball and containing the ball of radius $1/\sqrt m$. For such a set, $(1-\epsilon)$-scaling approximation follows from $(\ epsilon/\sqrt m)$-approximation in the sense of Hausdorff distance. So it suffices to choose $T$ to be an $(\epsilon/\sqrt m)$-net in $S$. Then a standard packing argument gives the above upper bound for $\epsilon$. On the other hand, if $S$ is contained in the unit sphere and separated away from the coordinate hyperplanes, you must choose $T$ to be a $\sqrt\epsilon$-net in $S$. This gives the lower up vote 3 bound; the "worst case" is a uniformly packed set of $n=C(m)\cdot k$ points on the sphere. down vote accepted UPDATE. Fritz John theorem, also known as John Ellipsoid Theorem, says that for any origin-symmetric convex body $K\subset\mathbb R^m$, there is an ellipsoid $E$ (also centered at the origin) such that $E\subset K\subset\sqrt m E$. (There is a non-symmetric variant as well but the constant is worse.) The linear transformation that I used just sends $E$ to the unit ball. There are lecture notes about John ellipsoid here and probably in many other sources. Comparing scaling distance (see also Banach-Mazur distance) and Hausdorff distance between convex bodies is based on the following. The scaling distance is determined by the worst ratio of the support functions of the two bodies, and the Hausdorff distance is the maximum difference between the support functions. Once you captured the bodies between two balls, you can compare relative and absolute difference. This should be explained in any reasonable textbook in convex geometry; unfortunately I'm not an expert in textbooks, especially English-language By "packing argument" I mean variants of the following argument showing that for any $\epsilon$, any subset $S$ of the unit ball in $\mathbb R^m$ contains an $\epsilon$-net of cardinality at most $(1+2/\epsilon)^m$. Take a maximal $\epsilon$-separated subset $T$ of $S$, it is always an $\epsilon$-net. Since $T$ is $\epsilon$-separated, the balls of radius $\epsilon/2$ centered at the points of $T$ are disjoint, hence the sum of their volumes is no greater than the volume of the $(1+\epsilon/2)$-ball that contains them all. Writing the volume of an $r$-ball as $c(m)\cdot r^m$ yields the result. This argument gives a rough estimate $$ \epsilon \le (2\sqrt m+1) k^{-1/m} $$ in the original problem (up to errors in my quick computations). To improve the exponent one can consider the $(m-1)$-dimensional surface of $P$ rather that the whole ball. Thank you for you answer! Quick questions for now: 1. What is $c(m)$ and $C(m)$? Are they some constants that depend on $m$? 2. Could you recommend a place where I can find out more about Fritz John's theorem? The argument seems to be correct to me but I will have to digest and work out some parts more before asking you some questions. (I don't know anything about $\epsilon/\sqrt m$ net and what the "standard packing argument" actually is.) Hope you don't mind answering some questions after that. Thank you! – Danu Mar 31 '10 at 4:16 Just regarding Q2: typing "Fritz John theorem" turns up plenty of results. That theorem is a well-established part of convex geometry. It might help your question to get fuller answers if you say something about your mathematical background/experience, so that they don't talk past you or vice versa – Yemon Choi Mar 31 '10 at 4:30 1 Yes $C(m)$ and $c(m)$ are constants depending on $m$ that I did not bother to compute. The proof assumes some background in convex geometry. I'll add some explanations so you can dig it out. – Sergei Ivanov Mar 31 '10 at 9:04 Thank you for clarification. Now I understand the whole argument. So I guess if one can show that the size of $\epsilon$-net is (roughly) $(1+2/\epsilon)^{(m-1)/2}$ instead of $(1+2/\ epsilon)^{m-1}$ then one can show the upper bound $k^{-2/(m-1)}$ which matches the lower bound? (Note: I ignore the constants.) Is this possible? – Danu Mar 31 '10 at 15:00 No, you generally cannot find an $\epsilon$-net that small. But the subset does not have to be an $\epsilon$-net. On the sphere, being a $\sqrt\epsilon$-net is sufficient, and I believe there is something similar on any convex surface. – Sergei Ivanov Mar 31 '10 at 15:46 show 1 more comment Not the answer you're looking for? Browse other questions tagged algorithms linear-algebra geometry matrices approximation-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/19826/matrix-approximation?sort=votes","timestamp":"2014-04-20T13:48:37Z","content_type":null,"content_length":"66364","record_id":"<urn:uuid:73d5f738-4066-4bb9-a79b-41fab2507730>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Formal Verification, Theory, Techniques and Tools People involved • Nicolas Halbwachs • David Monniaux • Pascal Raymond • Matthieu Moy — APRON — ASOPT — openTLM Model-checking and abstract interpretation for safety properties Synchronous observers for the description of safety properties All our verification tools are based on the use of synchronous observers, to describe both the property to be checked and the assumptions on the program environment under which these properties are intended to hold: an observer of a safety property is a program, taking as inputs the inputs/outputs of the program under verification, and deciding (e.g., by emitting an alarm signal) at each instant whether the property is violated. Running in parallel with the program, an observer of the desired property, and an observer of the assumption made about the environment one has just to check that either the alarm signal is never emitted (property satisfied) or the alarm signal is emitted (assumption violated), which can be done by a simple traversal of the reachable states of the compound program. Apart from only needing to consider reachable states (instead of paths) this specification technique has several advantages: — observers may be written in the same language as the program under verification — observers are executable, which means they can be tested, or even kept in the actual implementation (redundancy, autotest). Safety versus liveness properties Using Lustre, one can only write safety properties, i.e., program invariants. Expressing general temporal properties (liveness) requires sophisticated formalisms such as temporal logics and Büchi automata. Such formalisms allow us to speak about the unbounded future (liveness properties), which is not the case of Lustre. The choice of restricting ourselves to such properties is motivated by the following arguments. — Safety properties are simple and natural to express, because they can always be written as "Always Prop" — They are compatible with abstractions: Abs(Prog) |= Abs(prop) => Prog |= prop ; and performing such abstractions is absolutely mandatory if one wants to deal with programs and properties involving numerical values (which is generally the case for Lustre programs). — 99 % of interesting properties are safety properties. — Even properties that look like liveness properties are actually safety properties. For instance, "the train will stop" is often useless. What we mean is "the train will stop before the wall", which can be rewritten as a safety property; this is often called a bounded liveness. Tools: model-checking, abstract interpretation, theorem-proving — Lesar is a symbolic, BDD-based, model-checker for Lustre. Lesar being a model-Checker, verification is performed on an abstract (finite) model of the program. Concretely, for purely logical programs the proof is complete, whereas in general (in particular when numerical values are involved) the proof is only partial. To get the tool see here. — NBac is a safety property verification tool, that analyzes synchronous and deterministic reactive systems containing combination of Boolean and numerical variables. NBac is based on the theory of abstract interpretation, which allows us to overcome the undecidability of the reachability/co-reachability problem for the class of programs treated by NBac. Sets of states are represented by values belonging to an abstract domain, and (fix-point) computations are performed. This leads to conservative results: if a state is shown unreachable (resp. not co-reachable), then it is for sure. More details here. — Gloups is an automatic generator of PVS proof obligations. The tool performs a reduction of the initial property expressed upon finite and infinite sequences into a set of scalar properties: our leading principle for this reduction is (continuous) induction. More precisely, properties are expressed as Lustre observers (programs), and then reduced into a set of scalar proof obligations which are discharged into the PVS theorem prover. An (interactive) proof of these obligations is a proof of the initial invariant. Those tools have been used with several industrial case studies from EADS (Ariane), Airbus, Schneider, etc. You can find more details here.
{"url":"http://www-verimag.imag.fr/Formal-Verification-Theory.html?lang=","timestamp":"2014-04-20T00:45:10Z","content_type":null,"content_length":"25394","record_id":"<urn:uuid:5dc5ac48-9481-4983-8164-d499a2e2ce27>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
Living computer solves Burnt Pancake Problem The latest news from academia, regulators research labs and other things of interest Posted: May 23, 2008 Living computer solves Burnt Pancake Problem (Nanowerk News) A research project by Davidson College scientists and collaborators at Missouri Western State University has constructed a basic "living computer" by genetically altering E. coli bacteria. The work demonstrates that computing in living cells is feasible, opening the door to a number of applications including data storage and as a tool for manipulating genes for genetic The burnt pancake problem involves a stack of pancakes of different sizes, each of which has a golden and a burnt side. The goal is to sort the stack so the largest pancake is on the bottom and all pancakes are golden side up. Each flip reverses the order and the orientation (i.e. which side of the pancake is facing up) of one or several consecutive pancakes. The aim is to stack them properly in the fewest number of flips. The Davidson/MWSU researchers used fragments of DNA as the pancakes. They added genes from a different type of bacterium to enable E. coli bacteria to flip the DNA 'pancakes'. They included components of a gene that made the bacteria resistant to an antibiotic, but only when the DNA fragments had been flipped into the correct order. The time required to reach the mathematical solution in the bugs reflects the minimum number of flips needed to solve the burnt pancake problem. As the number of pancakes increases, solving this problem quickly becomes very hard. There’s no equation that will give the correct answer; it is necessary to explore all the possible configurations of the stack of pancakes. For six pancakes, there are 46,080 configurations. For 12 pancakes, there are about 1.9 trillion. “These problems get so immense that even having a huge network of computers is not enough,” says Karmella Haynes, the lead researcher. "Because the number of bacteria in a colony grows exponentially, a single bacterium engineered to perform the flipping problem in its DNA will soon become several billion or trillion little bacterial computers. Each bacterium in the colony can then compute a separate flipping scenario. These 'bacterial computers' could act in parallel with each other, meaning that solutions could potentially be reached quicker than with conventional computers, using less space and at a lower cost." In addition to parallel computation, bacterial computing also has the potential to utilize repair mechanisms and, of course, can evolve after repeated use. The open access paper "Engineering bacteria to solve the Burnt Pancake Problem" (pdf download, 895 KB) is available at theJournal of Biological Engineering website.
{"url":"http://www.nanowerk.com/news/newsid=5813.php","timestamp":"2014-04-17T07:36:03Z","content_type":null,"content_length":"32698","record_id":"<urn:uuid:c5fbde79-acfb-4826-a6f1-eb8002c219a5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Three Mode (PID) Controller [ ppt ] Three Mode (PID) Controller times Topic under Control Mode Source: www.che.utexas.edu File size: 2.55 MB File type: ppt Last download on: Thu Mar 27, 2014 10:16:27 PM Short Desciption: Three Mode (PID) Controller . Proportional; Integral; Derivative; Proportional Control ... Proportional plus integral: No offset. Better dynamic response ... On-off Controllers Simple Cheap Used In residential heating and domestic refrigerators Limited use in process control due to continuous cycling of controlled variable excessive wear on control valve. Bookmark or share this info on Google+ and leave comment at below : People who downloaded this also viewed in this sites: PID Control System Analysis and Design PID Control System Analysis The wide application of PID control has stimulated and sus- ..... Because of the simplicity of PID control, parameters can be ... PI Control works PID Control Dynamics . At t a, the amount of error is positive and small -> PI control ... Control Mode . Proportional Gain (K c) Integral Time (t... Controller Design for Active Cavity Flow For C 0 we took a simple first order controller placing the poles as far left in the complex plane as possible. 3.3 PID Controller Design The... Leave comment : "Three Mode (PID) Controller" Posted by same here Sat Nov 12, 2011 10:25:19 PM Posted by 65 Mon Oct 10, 2011 05:59:02 AM Nice presentation, fresh Posted by 65 Mon Oct 10, 2011 05:59:00 AM Nice presentation, fresh Posted by 65 Mon Oct 10, 2011 05:58:57 AM Nice presentation, fresh Posted by 65 Mon Oct 10, 2011 05:58:39 AM Nice presentation, fresh Posted by 65 Mon Oct 10, 2011 05:58:39 AM Nice presentation, fresh Posted by 65 Mon Oct 10, 2011 05:58:39 AM Nice presentation, fresh Posted by 65 Mon Oct 10, 2011 05:58:38 AM Nice presentation, fresh Posted by 65 Mon Oct 10, 2011 05:58:38 AM Nice presentation, fresh Related Free Files Design and Performance of PID and Fuzzy... - Download: 53 times ADAPTIVE CONTROL Chapter 8... - Download: 46 times LINEAR CONTROL SYSTEMS... - Download: 99 times Design OfFuzzy Controllers... - Download: 43 times FUZZY LOGIC FOR ENGINEERING APPLICATIONS... - Download: 170 times PID Controller Design with Guaranteed... - Download: 45 times Proportional Integral Derivative PID... - Download: 334 times ADAPTIVE CONTROL chapter 0 -1 ... - Download: 76 times Fuzzy Logic Control... - Download: 50 times PID Control... - Download: 110 times
{"url":"http://controlmanuals.com/files/Automation/Control-Mode/Three-Mode-(PID)-Controller~ppt377.html","timestamp":"2014-04-18T23:16:09Z","content_type":null,"content_length":"43206","record_id":"<urn:uuid:6cc8cbfb-d974-4f9d-ae35-ae2c1f6d6dd0>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove every compact set is closed and bounded. December 17th 2009, 07:26 PM #1 Prove every compact set is closed and bounded. Does anyone have an elegant/easy to remember proof of this? I can prove bounded easily, and I have notes on why its closed, but they're kind of messy and confusing! Suppose (for a contradiction) that for all $y\in K$ there exists $\epsilon _y$ such that $B_{\epsilon _y} (y)$ contains only a finite number of terms of a given sequence $(x_n)\subset K$ then since $K$ is compact, the cover $\{ B_{\epsilon _y} (y) : y\in K \}$ has a finite subcover ie. there exists $y_0,...,y_m$ such that $K \subset \cup_{i=1}^{m} B_{\epsilon _{y_i}} (y_i)$ but this is clearly a contradiction since each of these balls contain only finitely many terms. So we conclude that for any given sequence $(x_n)\subset K$ there exists a $y\in K$ such that for all $\ epsilon >0$$B_{\epsilon } (y)$ contains infinitely many terms of said sequence ie. there exists a subsequence $(x_{n_k})$ converging to $y$. This in turn proves that $K$ is closed trivially. Suppose (for a contradiction) that for all $y\in K$ there exists $\epsilon _y$ such that $B_{\epsilon _y} (y)$ contains only a finite number of terms of a given sequence $(x_n)\subset K$ then since $K$ is compact, the cover $\{ B_{\epsilon _y} (y) : y\in K \}$ has a finite subcover ie. there exists $y_0,...,y_m$ such that $K \subset \cup_{i=1}^{m} B_{\epsilon _{y_i}} (y_i)$ but this is clearly a contradiction since each of these balls contain only finitely many terms. So we conclude that for any given sequence $(x_n)\subset K$ there exists a $y\in K$ such that for all $\ epsilon >0$$B_{\epsilon } (y)$ contains infinitely many terms of said sequence ie. there exists a subsequence $(x_{n_k})$ converging to $y$. This in turn proves that $K$ is closed trivially. thanks for your response. that clears some of it up... but could you clarify what the contradiction is exactly? i seem to be missing it By how we defined them, each $B_{\epsilon _y} (y)$ contains only a finite number of terms of a sequence (a countable infinte set) so when we get our finite subcover we can get at most a finite number of terms of the sequence in the union of a finite number of balls, but these union contains $K$ which contains the sequence. I assume that we are in a metric space. Here are traditional proofs for both properties. Suppose that $x$ is a limit point of $K$ but $xotin K$. $\left( {\forall y \in K} \right)$ this is true $r_y = \frac{{d(x,y)}}{4} > 0$. The collection $\left\{ {B(y;r_y )} \right\}_{y \in K}$ covers $K$. So finite subcollection $K \subset \bigcup\limits_{j = 1}^n {B(y_j ;r_{y_j } )}$ also covers $K$. But note that $x \in \bigcap\limits_{j = 1}^n {B(x ;r_{y_j } )}$. That is a open set that contains $x$ and no other point of $K$. Contradiction. For bounded, there is finite collection $\bigcup\limits_{j = 1}^n {B(y_j ;1)}$ covering $K$. Let $M = \max \left\{ {d(y_k ,y_j )} \right\} + 2$. It is easy to show that $M$ is a bound for $K$. Last edited by Plato; December 18th 2009 at 09:29 AM. Here's what I think is the simplest proof that any compact set is bounded. (Assuming that A is in a metric space, of course.) Let p be any point in the compact set, A. Let $B_p(n)$ be the open ball centered at p with radius n. Certainly every point in A has some distance from p and there exist an integer larger than that distance. That is, the set of all such open balls is an open cover of A. Since A is compact, there is a finite subcover of A, so there is a largest "N". Show that, if x and y are any points in A, d(x,y)< 2N. To show that any compact set, A, is closed, show that its complement is open. (Again, in a metric space.) Let p be a point in the complement of A. For any q in A, let B(q) be the open ball, of radius 1/2 the distance from p to q, centered on q. Let C(q) be the open ball, of radius 1/2 the distance from p to q, centered on p (note that C(q), though centered on p, is still "indexed" by q). The set of all open balls, B(q), for all q in A, is an open cover for A. Since A is compact, there exist a finite subcover, $\{B(q_1), B(q_2), \cdot\cdot\cdot, B(q_n)\}$. Look at the corresponding collection of open sets $\{C(q_1), C(q_2), \cdot\cdot\cdot, C(q_n)\}$. Since p is in each of them it is in there intersection. Further, since this is a finite collection, its intersection is an open set. Finally, since every member of A is in one of the sets $B(q_i)$, it is not in the corresponding $C(q_i)$ and so not in there intersection. That is, the intersection of the all the $C(q_i)$ is an open set, containing p, which contains no member of A. That means that p is an interior point of the complement of A and, since p could be any member of the complement of A, the complement of A is open and A itself is closed. Last edited by Plato; December 19th 2009 at 05:07 AM. Thanks! That is super clear... At first, I thought Rudin made a typo on the index There are so many experts here! I love that one! It's so cool! December 17th 2009, 08:09 PM #2 Super Member Apr 2009 December 17th 2009, 08:18 PM #3 December 17th 2009, 08:28 PM #4 Super Member Apr 2009 December 18th 2009, 07:20 AM #5 December 19th 2009, 03:46 AM #6 MHF Contributor Apr 2005 July 13th 2010, 03:12 PM #7 Jul 2010 July 13th 2010, 08:55 PM #8 Feb 2010
{"url":"http://mathhelpforum.com/differential-geometry/121036-prove-every-compact-set-closed-bounded.html","timestamp":"2014-04-20T15:01:45Z","content_type":null,"content_length":"66258","record_id":"<urn:uuid:8c0d2fc9-bc9a-4440-a703-dca05249bec2>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
Design of Combined footing vs. Single Footing Member Login Come Join Us! Are you an Engineering professional? Join Eng-Tips now! • Talk With Other Members • Be Notified Of Responses To Your Posts • Keyword Search • One-Click Access To Your Favorite Forums • Automated Signatures On Your Posts • Best Of All, It's Free! *Eng-Tips's functionality depends on members receiving e-mail. By joining you are opting in to receive e-mail. Donate Today! Posting Guidelines Promoting, selling, recruiting, coursework and thesis posting is forbidden. Link To This Forum! Forum Search FAQs Links Jobs Whitepapers MVPs pob11646 (Structural) 21 Apr 09 5:11 I have got a general question about the design of a combined footing vs. a single footing. I have just finished a design of a single footing for a storage silo. It is founded on grade, with a soil bearing capacity of 3 ksf. My footing will be a 19 ft by 19 ft by 3 ft thick footing, reinforced top and bottom by #6 at 12" O.C. top & bottom each way. I analyzed for overturning moment, sliding for gravity loads, wind and seismic forces. Now, I have to enlarge my footing to take into consideration the construction of a similar footing for similar loads just adjacent to my current footing. I would not be constructing the whole combined footing all at once, because the while the first storage silo is for present use, the second one is for future. In effect, I would only be constructing, say 1-1/3, or even 1-1/4 of the combined footing for now. Would designing a combined footing for two of these storage silo structures be the best way to do it? I would design a combined footing for both of the structures, but only construct, say 1-1/3 of the combined footing, leaving the rest to be constructed when it is required. Is this the best approach? Intuitively, I would think that my previous design would also work, but I just need to double the size of the footing. Would this be correct? In fact, I would also intuitively think, without starting my design, that doubling my previous design would be conservative, and that my combined footing could actually be smaller in footprint and thickness? Would this also be accurate? What would be the best way to approach the design of a combined footing? What is the best way to allow for the construction of the rest of the footing? Mild steel dowels embedded in the first footing, and greased on the exposed end? Any other suggestions? Thanks. jheidt2543 (Civil/Environmental) 21 Apr 09 8:37 In my opinion, you would be better off to look at the two footings independantly and make sure the soil bearing capacity is not exceeded in the footing influence overlap area in the future when the two are finally constructed. The loading on a combined footing will be totally different since you will have to consider the load case where one silo is full and the other empty; introducing bending moments in the combined footing that an individual footing wouldn't have. kslee1000 (Civil/Environmental) 21 Apr 09 9:02 I agree with jheidt2543. In you stability analysis, do not forget to add a case for influences from future construction. pob11646 (Structural) 21 Apr 09 12:31 Thank you very much, jheidt2543 and kslee1000. I've got two questions, though. One is, what is the best way to evaluate the footing overlap influence area? Would this be a factor if I design a combined footing, or would it only be a factor if I design two separate footings. Why would two separate footings be better than a single combined footing? Is it because they are built at different times? Another question I've got is adding the case for influences from future construction. What are the major influences do I need to consider, and what is the best way to go about it? Thanks again. kslee1000 (Civil/Environmental) 21 Apr 09 13:48 For combined footing with columns to be placed in different stages, the controlling bearing pressure can be confusing, since for an unknown time period, the footing has only one column load, that is eccentric to the center of the footing, that are to be determined by two column loads. (Confused already? :) If use indivisual footing, the overlap pressure cones will increase soil bearing pressure, thus settlement. If the two footings' bottom are to be at the same elevation, I don't see much problem with bearing, ask your geotech engr. on settlement. If the latter is higher than the former, then you have to check both. When using this scheme, at least one side of the footing will be exposed for the construction of the latter, does it has adversed effects on the footing (especially lateral stability, OT)? Do I have adequate space in between the footings without over crowed the space (shoring, form works). Just a few things to think about. (2) BigH (Geotechnical) 21 Apr 09 22:08 Suggest you might be interested in the link: Also many texts cover the effects of overlap - see Tomlinson Foundation Design book for instance (with pretty pictures!) jheidt2543 (Civil/Environmental) 22 Apr 09 7:19 Take a look at the link BigH supplied. Figure 2 shows the overlap area I described (a picture is worth...). Thanks BigH for a really spot on link! That paper also shows the problem of uneven loading, which for a combined footing is magnified due to the filling and unloading of the adjacent silos, which causes big flucutations in bending BigH (Geotechnical) 22 Apr 09 7:27 Bozozuk also has a paper in the Canadian Agriculture Engineering Journal 1973 or so. It downloaded but wouldn't print - go figure. I knew about the original paper (he also has a classic on ground movements due to trees) - but I googled "silo foundation failures Bozozuk" and got the hit to the paper I noted above. Delijosi (Geotechnical) 24 Apr 09 11:42 Having read all the contributions above, I submit that a single foundation is an economic option for the structures. You have to check 3 conditions; 1. Condition of the two silos, filled to the brim. Check that there is adequate FoS against bearing failure and settlement within limit. 2. Using the dimensions of the foundation in condition 1, check for FoS against bearing failure for one fully filled tank without the other. 3. Same as above but with the tank on the other side fully filled without the first tank. If the 3 conditions are ok, there should not be any problem. jheidt2543 (Civil/Environmental) 24 Apr 09 14:31 I agree with the conditions you list that have to be satified (along with wind load), I think the point made was that a combined footing solution requires more engineering and more rebar in the construction because of the negative slab moments than two separate foundations. They both will work, it's just a matter of economics. aeoliantexan (Geotechnical) 24 Apr 09 23:36 You need to consider the settlements. If you build two silos close together on a common mat and load them simultanelously, they will tend to tilt towards each other. If you build one on the combined mat and load it before building the second, silo No. 1 will tilt away from silo No. 2; then when both are loaded, they will tilt towards each other. The relative movement at the tops will be greater in the second case, and that movement will affect anything that joins the two silos, such as conveyors and catwalks. Moment in the center of the mat will first be negative, then Unless the settlements are expected to be inconsequential, it would be best to support the silos on individual footings separated far enough apart to keep the tilting effect small. Your geotech can determine that minimum separation from settlement analyses. Of course, if the soil is uniform, there is in theory an ideal separation between the two silos on a common foundation that will result in uniform settlement of both, but only if they are loaded simultaneously the first time they are filled. kslee1000 (Civil/Environmental) 25 Apr 09 9:10 If you have deceided go with seperate footings on a staged construction, the investigation should look like: 1. Two silos with varies filling scheme and loading combinations to find max. stress anainst the allowable with a conservative FOS. Adjust footing sizes to achieve this. 2. Two silos fully loaded to find the overlap in stress bulbs, and estimate differential settlement. Adjust footing distance to eliminate, or minimize effect of overlapping. It may require several iterations to reach a safe design. If either criterion (bearing, settlement) couldn't be satisfied, suggest to consider pile/pier foundations. BAretired (Structural) 27 Apr 09 16:48 You said: You need to consider the settlements. If you build two silos close together on a common mat and load them simultanelously, they will tend to tilt towards each other. I don't think that is correct. The two silos would be loading the combined footing uniformly, so they should have no tendency to tilt towards each other. Back If the owner wants the future silo to be close to the present silo, a combined footing is necessary in order to avoid overlapping of stress bulbs, hence tilting silos. Forum My suggestion in that case, is to build only the footing required for the first silo but place additional reinforcement required for the future combined footing. This would stop at the common face with mechanical anchors provided at the bulkhead. That way, the first silo stresses the soil uniformly throughout its life. If and when the second silo is built, it would be best to pick a time when the first silo is as close to empty as possible. The reinforcement for the new silo would be fastened to the previously placed mechanical anchors and concrete would be cast against the face of the existing foundation. The surface would be intentionally roughened for bond. If, on the other hand, the owner does not require the two silos to be close together, the separate foundations are the simplest way to go. aeoliantexan (Geotechnical) 29 Apr 09 2:14 Your are correct that two adjacent identical silos on separate foundations will tend to tilt towards each other due to the overlapping of the stresses, just as the paper in BigH's link depicts. Joining the two foundations doesn't eliminate the overlap, it introduces a moment at the connection that resists the tilting. Because of this moment, the developed bearing pressure is reduced near the connection and increased at the outside ends. The bearing pressure is not uniform; the settlement is (if the mat is sufficiently rigid and strong). The distribution of bearing pressure across the combined mat is whatever is needed to make the mat settle uniformly. It will depend on the compressibility of the soil at various depths, so it is unique to the site. The moment at the center of the mat cannot be accurately predicted without considering the compressiblity of the soil. A model that represents the soil as a single layer of springs won't do the job; there are lots of layers. I believe that this phenomenon is not well appreciated and falls into the crack between the geotechnical engineers and the structural engineers. We're both lucky that safety factors are included in the structural design and tend to cover our ignorance. Trouble comes when we get outside normal practices. I believe that a proper FE model analysis can address this issue, but lack the skills to try. I'd be interested in hearing from someone who has tried. Perhaps we could start another thread. By the way, I have seen a mat foundation supporting two silos that cracked in the middle and allowed the silos to tilt towards each other. BAretired (Structural) 29 Apr 09 10:06 I heartily agree that "We're both lucky that safety factors are included in the structural design and tend to cover our ignorance". If two silos are placed symmetrically on a 19' x 38' x3' deep rectangular pad such that the resultant load is directly over the center of footing, then the pressure under that footing would be uniform and there would be a single bulb of pressure forming in the soil below the footing. This assumes that the two silos are filled simultaneously with silage of the same density. If one silo is emptied and the other remains full, the resultant load shifts towards the full silo. When the eccentricity of total load becomes 38/6 = 6'-4", the pressure varies from 2p to 0 where p is the average pressure. If the eccentricity exceeds 6'-4", the effective area of footing is reduced but the pressure distribution remains triangular. During all of this, the bulb of pressure changes shape considerably. The variations in pressure will result in differential settlement. If the footing remains rigid, both silos will tilt toward the loaded side. For that reason, it is advantageous to maintain the same level of silage in the two silos. So far as settlements are concerned, I agree that it depends on the soil properties. Soil is not a perfectly elastic material. I don't think a finite element analysis would shed much light on the subject unless you can input a realistic array of soil properties throughout the volume of the bulb of pressure. kslee1000 (Civil/Environmental) 29 Apr 09 10:46 From my limited understanding on soil behaviors, I think there are two ways to yield resulting uniform settlement. 1. Flexible mat. The resulting soil pressure is close to uniform, thus the settlement. However, I don't think it would work for high-localized concentrate load like the silos. 2. Rigid block. The settlement would be forced to be uniform, disregard of the soil pressure distributions, since the block can't deform (negligible, if any). However, tilting (uneven settlement) could be a problem. Piles may be one way out, if both do not work? Please comment. BAretired (Structural) 29 Apr 09 12:03 We have not been told the size of the silos, but I am assuming they are about 18' in diameter. They could be flat bottomed or hopper bottomed. In the first case, the load is uniformly spread over the silo area. In the second case, it is likely carried by columns arranged in a circular ring. Either way, with a thickness of three feet, the footing must be considered rigid and the pressure uniform or in the case of different silage heights, uniformly varying. The pressure for any combination of loads can be reasonably well predicted if the footing is deemed to be a rigid body. Settlement, on the other hand is not so easily predicted, particularly if the properties of the soil are variable. pob11646 indicated that the soil had an allowable bearing pressure of 3 ksf. If this is adequate for the loaded silos, there is no need to consider piles. Piles could be an option but the type of pile and the allowable load would have to come from geotechnical recommendations. kslee1000 (Civil/Environmental) 29 Apr 09 12:39 Yes, we can agree on the settlement needs to be work out by our pals at geotect department :) Please note my big "IF". Since I don't know how critical that would be - leteral movement when one full, one empty. Actually, I am quite interested in hearing more from our pals on this, also how rigid is rigid (relative to soil)- in geotechnical views. aeoliantexan (Geotechnical) 30 Apr 09 1:21 I'm delighted that you joined the conversation. Uniform, trapezoidal and triangular contact stress distributions are convenient for designing footings that at least satisfy statics, but they are far from what actually happens beneath the Geotechs are pretty well agreed that uniform stresses do not produce uniform settlements, due to geometric stress distribution. The thin floor of a steel storage tank applies a pretty uniform stress. Elastic analysis indicates that the vertical stress at most any depth directly beneath the edge of the tank is just about one-half the stress under the center. Numerous authors have reported that the settlement of a flat tank floor produces a bowl-shaped floor, with the edge settlement usually less than half the center settlement. It is also well established by elastic analysis, but much less widely recognized by practitioners, that rigid foundations do not generate uniform vertical stresses in the soil. Elastic analysis indicates that the vertical stress under the center of a rigid circular load on an elastic half space is roughly one-half the average stress, P/A. The stress at a radius of 95% of the footing radius is about 120% of the average, and the stress at the edge approaches, inconveniently, infinity. Obviously, the stress at the edge will be limited by the shear strength of the soil. If the footing is supporting a central column, the bending moment at the center must be considerably higher than the moment calculated using a uniform bearng pressure. It amazes me that the industry has not really addressed this issue, and I would like to hear others' insights on the subject. Numerous elastic equations and charts can be seen in "Elastic Solutions for Soil and Rock Mechanics" by Poulos and Davis, now available in pdf at As for the silos, if they are to go on a combined mat, I would recommend that it be reinforced conservatively. kslee1000 (Civil/Environmental) 30 Apr 09 9:35 Excellent explanation. It is always a pleasure to review some topics been put on shelf for a while, and refresh our mind. Thanks. BAretired (Structural) 30 Apr 09 9:53 I have to admit, that comes as news to me. I always assumed that the pressure under a footing carrying a column load at its centroid was safely taken as uniform pressure. If anything, I thought, the pressure would be larger directly under the column, sort of a bell shaped curve with the maximum ordinate in the middle. In the case at hand, we have a circular silo supported on a square footing or two silos supported on a rectangular footing. The pressure on the floor of each silo might be uniform or the entire load might be carried by a circular ring of columns. We don't know. It would appear that some conservatism is warranted in the design of the reinforcement.
{"url":"http://www.eng-tips.com/viewthread.cfm?qid=243203","timestamp":"2014-04-21T07:04:45Z","content_type":null,"content_length":"55100","record_id":"<urn:uuid:232f3eb3-b682-45c8-a20f-9339a40ed415>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
On Big Questions and a Century of Einstein I cannot imagine physics without Einstein. While it is certainly true that the subject has been around since well before Einstein burst on the scene 100 years ago"and there have been other great physicists both before and after"it is nonetheless Einstein's work that touches the heart and soul of what it means to me to be a physicist. Since this is the centennial year of Einstein's relativity and has been declared the Year of Physics by the United Nations, I find myself thinking more and more about Einstein lately and what his work has meant to my teaching and research. To me, there are basically two types of physicists. First there are those who like to take stuff apart and figure out how things work. Then there are those who like to ask the big questions: where did the universe come from, and why are we all here? Of course, most physicists"including Einstein"are a little of both. But for those who lean more toward the big questions (myself included) there is no greater role model than Albert Einstein. Perhaps the most astonishing thing about Einstein is that he was able to come up with the theory of relativity largely on his own and from outside the academic community. While he was aware of much of the work of his contemporaries and had all the benefits of a university education, he had nonetheless fallen into obscurity by his mid-20s, taking a job as a patent clerk in Bern, Switzerland. Somehow Einstein thrived, and in 1905 he published five papers that forever changed physics. By far the most important of his papers that year were the two dealing with the special theory of relativity. The implications of this theory are mind boggling: the passage of time and spatial distance are all relative, and mass and energy become equivalent as stated in the famous E=mc^2. According to relativity, how one person ages compared to another depends on how fast they are moving relative to each other. This means that if I were to go off for one day in a relativistic rocket at, say, 99.9999999 percent the speed of light, when I returned I would be one day older, but my children would have aged by over 60 years! Of course, in our slowpoke existence we do not have to worry about such scenarios. Even in the fastest jet planes, the relative time shift due to relativity is only a fraction of a microsecond for a one-day trip. This is far too little for anyone to notice"though it has been measured with precise atomic clocks. Indeed, there have been numerous high-precision tests of relativity over the years, and much of my research or the past seven or eight years has been concerned with looking for better ways to test relativity. In the end, as bizarre as relativity may seem, its main predictions do appear to be correct. Time and space do not behave as we naïvely assume based on ordinary experience. However, the relative nature of space and time is just the beginning of the story. Einstein's 1905 theory of special relativity concerns only steady (or non-accelerated) motion. In the years 1907-1915, Einstein generalized the theory to include the effects of acceleration"the resulting theory is known as general relativity. Since gravity causes objects to fall with an acceleration, what Einstein ultimately had to do when he developed general relativity was to invent a new theory of gravity. In general relativity, massive objects like planets, stars, and galaxies cause the space and time around them to curve. This distortion of space and time can become so extreme that in objects called black holes, not even light can escape. Perhaps most intriguing of all, one can model the whole universe in the context of general relativity. The solutions describe a dynamical universe that can expand, contract, or even accelerate. When combined with experimental observations, we find that the observable universe appears to have a beginning. Projecting back about 14 billion years, there appears to be a moment of creation or Big Bang in which all matter and energy erupted from a single point of extremely high density and hot temperature. As a teacher at Colby, I never tire of retelling the story of Einstein's discoveries and how they have reshaped our understanding of the universe. I see the initial looks of total disbelief on students' faces as I tell them about the strange behavior of space in time in special relativity. In my upper-level course on general relativity, we work our way slowly through the mathematical intricacies of describing warped space and time and how very recent discoveries have altered our understanding of the evolution and makeup of the universe. As much as I admire Einstein's relativity, it is still just the latest installment of an ongoing (and probably never-ending) effort to understand all of physics at the most fundamental level. Indeed, much of the current research in theoretical physics is devoted to finding a quantum theory of gravity that will supercede Einstein's general relativity. Ultimately, though, when a deeper, more fundamental theory is uncovered I suspect it will be the result of a huge collaborative effort. It is hard for me to imagine that there will ever again be a single person emerging from near total obscurity who will single-handedly change our view of the entire universe. For this reason, I believe Einstein will always remain a unique figure in physics, one who will inspire and amaze physics students for years to come. Robert Bluhm is the Sunrise Professor of Physics. He has been at Colby since 1990. His research interests include theoretical particle physics, atomic physics, gravity, and cosmology.
{"url":"https://www.colby.edu/colby.mag/issues/32/article/392/on-big-questions-and-a-century-of-einstein/","timestamp":"2014-04-16T13:23:54Z","content_type":null,"content_length":"42588","record_id":"<urn:uuid:252d00d0-14a2-4851-91f3-b2e7f74d9d07>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
Wheaton, IL Science Tutor Find a Wheaton, IL Science Tutor ...I am aspiring to be a physician one day. I enjoy trying new things and being invested in the other people's lives. I started Stop Diabetes Loyola, a non-profit organization designed to empower students and parents to make healthier lifestyle choices. 13 Subjects: including biology, chemistry, English, ESL/ESOL ...I consistently monitor progress and adjust lessons to meet the specific needs of each individual student. Thank you for considering my services. I look forward to helping you succeed in mathematics.I have a teaching certificate in mathematics issued by the South Carolina State Department of Education. 12 Subjects: including ACT Science, calculus, geometry, algebra 1 In my years of working in church ministry and teaching at the College of DuPage, I have had the opportunity to teach people of all ages with diverse backgrounds and learning styles. I consider myself to be a gifted writer, and my goal is to make all the material I teach applicable and practical in ... 11 Subjects: including philosophy, writing, public speaking, social studies ...Writing is the art learned after learning to read well. When you write something you are expressing your thoughts, so as to communicate with the reader. So a lot of your emotion should flow into that thought and make the reader feel it. 8 Subjects: including biology, chemistry, reading, zoology I have a Ph.D in Organic/Medicinal Chemistry from the University of Wisconsin-Madison and 30 years of experience in the fields of organic, medicinal, and pharmaceutical chemistry (drug discovery and development). I tutor chemistry/organic chemistry to college level students (undergraduate, pre-med, ... 2 Subjects: including chemistry, organic chemistry Related Wheaton, IL Tutors Wheaton, IL Accounting Tutors Wheaton, IL ACT Tutors Wheaton, IL Algebra Tutors Wheaton, IL Algebra 2 Tutors Wheaton, IL Calculus Tutors Wheaton, IL Geometry Tutors Wheaton, IL Math Tutors Wheaton, IL Prealgebra Tutors Wheaton, IL Precalculus Tutors Wheaton, IL SAT Tutors Wheaton, IL SAT Math Tutors Wheaton, IL Science Tutors Wheaton, IL Statistics Tutors Wheaton, IL Trigonometry Tutors
{"url":"http://www.purplemath.com/Wheaton_IL_Science_tutors.php","timestamp":"2014-04-16T04:26:12Z","content_type":null,"content_length":"23839","record_id":"<urn:uuid:342c6854-8265-485d-9bef-727f1dbc30f3>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00197-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: January 2005 [00576] [Date Index] [Thread Index] [Author Index] Re: Re: simplifying inside sum, Mathematica 5.1 • To: mathgroup at smc.vnet.net • Subject: [mg53790] Re: [mg53749] Re: simplifying inside sum, Mathematica 5.1 • From: DrBob <drbob at bigfoot.com> • Date: Thu, 27 Jan 2005 05:41:31 -0500 (EST) • References: <ct4h70$av2$1@smc.vnet.net> <ct56p1$eca$1@smc.vnet.net> <200501260936.EAA00194@smc.vnet.net> <opsk7uawdtiz9bcq@monster.ma.dl.cox.net> <41F7DE08.7090001@cs.berkeley.edu> • Reply-to: drbob at bigfoot.com • Sender: owner-wri-mathgroup at wolfram.com Exactly. We get farther by working WITH Mathematica, not against it. Paying attention to examples like these is a part of that, of course; it never hurts to know where the gaps and pitfalls may lurk. On Wed, 26 Jan 2005 22:36:18 +0000, Andrzej Kozlowski <akoz at mimuw.edu.pl> wrote: > On 26 Jan 2005, at 18:14, Richard Fateman wrote: >> As for Andrzej's comment, that this does the job... >> Block[{Power,Infinity}, >> 0^(i_) := KroneckerDelta[i, 0]; Sum[a[i]*x^i, {i, 0, Infinity}]/. x >> -> 0] >> Here are some comments: >> 1. There is no need for Infinity to be bound inside the Block. > Indeed, I did not check that. I had reasons to think it was needed. >> 4. Your solution gives the wrong answer for >> Sum[a[i]*x^i, {i, -1, Infinity}] > Since any sum can be split into a finite sum over the negative indices > and an infinite sum over indices >=0 and since finite sums are handled > correctly this is essentially a cosmetic issue. In fact it is easy to > modify Sum to automatically split all sums in this way, and to use the > Block trick for the infinite part. But I don't think this is important > enough to bother. >> It also doesn't work for >> Sum[a[i]*x^(i^2), {i, -1, Infinity}] >> This latter problem suggests an inadequacy in the treatment of the >> simplification of Sum[KroneckerDelta[...]....] > Well, yes. One can always find ways to trip up Mathematica (and all > other CAS) in this sort of thing. It's a bit like playing chess with a > computer program; however strong it is if you get to know it well > enough you will find ways to beat it (assuming of course you are a good > chess player and understand computers). But the difference is that CAS > is not meant to be your opponent and trying to trip it up (which is > also what most of Maxim's examples involve) is a pointless exercise, > which may amuse people who like such things but has nothing to do with > any serious work. > Andrzej Kozlowski DrBob at bigfoot.com • References:
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Jan/msg00576.html","timestamp":"2014-04-16T19:10:14Z","content_type":null,"content_length":"37331","record_id":"<urn:uuid:102500df-0aa2-4048-a27d-8dee14b0505f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
The Elements: Books I-XIII (Barnes & Noble Library of Essential Reading) Euclid's Elements is a fundamental landmark of mathematical achievement. Firstly, it is a compendium of the principal mathematical work undertaken in classical Greece, for which in many cases no other source survives. Secondly, it is a model of organizational clarity which has had a deep influence on the way almost all subsequent mathematical research has been conducted. Thirdly, it is the most successful textbook ever written, only seriously challenged as an account of elementary geometry in the nineteenth century, more than two thousand years after its first publication. Read More Show Less Product Details • ISBN-13: 9780760763124 • Publisher: Sterling • Publication date: 3/16/2006 • Edition description: Complete and Unabridged • Product dimensions: 6.00 (w) x 9.00 (h) x 2.40 (d) Euclid's Elements is a fundamental landmark of mathematical achievement. Firstly, it is a compendium of the principal mathematical work undertaken in classical Greece, for which in many cases no other source survives. Secondly, it is a model of organizational clarity which has had a deep influence on the way almost all subsequent mathematical research has been conducted. Thirdly, it is the most successful textbook ever written, only seriously challenged as an account of elementary geometry in the nineteenth ... See more details below Euclid's Elements is without question a true masterpiece of Western civilization. It is one of the most widely disseminated and most influential books of all time. A fundamental landmark of mathematical achievement, the Elements is profoundly important for several distinct reasons. Firstly, it is a compendium of the principal mathematical work undertaken in classical Greece, for which in many cases no other source survives. Secondly, it is a model of organizational clarity which has had a deep influence on the way almost all subsequent mathematical research has been conducted. Thirdly, it is the most successful textbook ever written, only seriously challenged as an account of elementary geometry in the nineteenth century, more than two thousand years after its first Homer, the greatest Greek poet, became so obscure that skeptics were able to speculate that he never existed; Euclid, the greatest Greek scientist, is comparably enigmatic. We do not know where Euclid was born. Some medieval writers referred to him as Euclid of Megara, but it is now apparent that this is a confusion with a different Euclid, a student of Socrates (469-399 BC) who was a century older than the author of the Elements. We do not know with any certainty when Euclid was born. Our best source, Proclus (AD 410-485), argues that Euclid lived some time between the death of Plato (427-347 BC) and the birth of Archimedes (287-212 BC). However, it is clear that even this is conjectural. Euclid probably learned mathematics at Plato's Academy in Athens. Some commentators have detected platonist echoes in Euclid's writing, but it is a fairly safe bet that he studied in Athens, since we know of nowhere else he could have pursued his subject to comparable depth. We can be reasonably confident that Euclid taught at Alexandria in Egypt, most likely during the reign of Ptolemy I (306-283 BC). One of Ptolemy's first acts as ruler had been to establish a library, the celebrated Great Library of Alexandria, and a school of advanced study, patterned after those in Athens, and known as the Museum. Euclid appears to have been hired as one of the original faculty and to have remained there for the rest of his career. Little else is known, although a few anecdotes have been preserved. One has Euclid responding to a student who had asked what he would gain by the geometry he had learnt by directing his slave to "Give him threepence, since he must profit by what he learns." Another has him reproaching an impatient Ptolemy that there is "no royal road in geometry." However, both stories are of much later date and the latter at least is also told of other ancient mathematicians. The Greeks were not the first civilization to study geometry. They themselves attributed the invention of the discipline to Egyptian surveyors seeking to compensate landowners for the annual inundation of the Nile. Whatever the status of this story, it is true that the Egyptians did have practical surveying techniques at a very early date, as did ancient Babylonian, Chinese, and Indian civilizations. What influence these civilizations had on each other, or on the Greeks, is much harder to determine. The Babylonians left behind a rich legacy of mathematical cuneiform tablets, but they were far more interested in algebra than geometry. Documents from other early civilizations are either fragmentary or frustratingly hard to date. Despite some tantalizing passages, particularly in the Indian S´ulvasu¯tra tradition, all these sources lack any concept of rigorous proof. This specifically mathematical practice of deriving results with certainty from undisputed axioms, rather than by generalization from practical cases, appears to be a Greek innovation. Rigorous proof may have antedated Euclid by little more than a century: Traditional attributions of geometrical proofs to the early Greek philosophers Thales (c. 624-548 BC) and Pythagoras (585-497 BC) are hard to justify. However, both men left behind disciples, and it is amongst these less well-known figures that the concept of proof is likely to have originated. The next decisive step in mathematical history was the establishment of Plato's Academy in about 387 BC. Plato was fascinated by mathematics: Although he was not himself a mathematician, he made geometry central to the curriculum of his prototype university and recruited the greatest mathematicians of his day, including such luminaries as Eudoxus (408-355 BC) and Theaetetus (415-369 BC), much of whose work is preserved in Euclid's Elements. There are several tempting misconceptions about the nature of the Elements: that it was the first such work of its kind; that the results, or at least the proofs, were largely Euclid's own work; that it is exhaustive of Greek mathematics. None of these claims are true. Several earlier authors are credited with having written Elements before Euclid, the earliest known being Hippocrates of Chios (fl. c. 430 BC). The success of Euclid's Elements obliterated all these earlier works, which have been lost since antiquity: Then as now, obsolete textbooks fell swiftly from favor. However, it would appear that all such works sought to assemble the core subject matter of Greek mathematics into a logical sequence, beginning with first principles which have to be accepted without proof, and then deriving some of the most useful and generally applicable results in the field from these principles. Euclid's genius was not as an original mathematician, but as a brilliant expositor. His work brings together many of the most important mathematical results then known. Euclid never attributes these results to their original discoverers, although we are sometimes able to identify these mathematicians through discussions in earlier works, such as those of Aristotle (384-322 BC), and references in ancient commentaries on Euclid. In this manner much of Books V and VI may be attributed to Eudoxus and much of X and XIII to Theaetetus. However, Euclid was not attempting to anthologize all prior mathematics, only the 'elementary' results, those which were most important to the subject and essential to a thorough mathematical education. Material that was more advanced, such as the theory of conic sections, or more rudimentary, such as everyday methods of calculation, is excluded from the Elements. It is also somewhat misleading to say that the Elements is only concerned with geometry. Although the entire content of the book is set out geometrically, much of it is concerned with subject matter that we would now expect to be articulated by other means. Specifically, Books I through IV are concerned with plane geometry. Book I sets out the underlying principles of this discipline: It begins with definitions of the principal terms used in geometry, thereby setting boundaries to the subject. Next Euclid states five postulates and five common notions. The latter are to be understood as subject neutral claims, common to any discipline, whereas the former constitute axioms from which the rest of geometry may proceed. The remainder of the book sets out the principal results in the geometry of triangles. In Book II Euclid extends his treatment to rectangles, in Book III circles, and in Book IV polygons. Book V introduces a theory of proportion, which we would find more familiar in an algebraic format. Book VI applies this theory to specifically geometrical questions. Books VII, VIII, and IX are given over to results in arithmetic, that is elementary number theory. Book X confronts what had been for the Greeks the vexing topic of irrational numbers and incommensurable magnitudes. Pythagoras had taught that all magnitude could be expressed as ratios of whole numbers; the discovery by later members of his school that this was not so had been profoundly unsettling to the foundations of their mathematics and the quasi-religious beliefs from which they were derived. Finally, Books XI to XIII are concerned with solid geometry and culminate in a demonstration that the 'Platonic' solids, regular polyhedra having regular polygons for faces, are exactly five in Euclid is known to have written several other works, four of which survive: the Data, the Division of Figures, the Phaenomena, and the Optics. The Data forms a sort of companion to the first six books of the Elements, consisting of worked problems in which various magnitudes are given and others are to be found. The Division of Figures exists only in an Arabic translation. It shows how to divide figures of plane geometry, such as triangles, circles, and quadrilaterals, into parts whose areas stand in a given ratio to each other. The Phaenomena is a text on astronomy, which includes some work on spherical geometry. The Optics is mostly concerned with perspective: It shows Euclid's allegiance to Plato's theory of vision, in which we see by way of rays emitted from our eyes, rather than incident upon them. Euclid is still able to derive sound conclusions, since he grasps the underlying point that light travels in straight lines. Lost works attributed to Euclid include the Conics, the Porisms, and the Pseudaria. The Conics appears to have been an anthology of prior work on conic sections, which was subsumed into the first three books of the Conic Sections of Apollonius (c. 262-190 BC), a work which survives in a mixture of Greek and Arabic editions. The Porisms was a treatment of propositions of a somewhat elusive nature, midway between theorems and problems, in the sense that they require a construction, but of something that is already known to exist. The Pseudaria was an anthology of fallacies in elementary geometry, intended as a teaching aid to the Elements. As we do with many ancient works, we rely on a precarious process of transmission for the text of the Elements that we read today. Paradoxically, the much earlier Babylonian mathematical sources survive in original editions, sometimes the authors' own manuscripts. However, the Babylonians wrote on clay tablets, which in hot, dry conditions are virtually indestructible, whereas the Greeks wrote on much less durable papyrus or parchment. A few scraps of Euclid on papyrus, written perhaps four hundred years after the book's original publication, have been recovered from the Egyptian desert, but the earliest complete manuscript, now in the Bodleian Library at Oxford, dates from AD 888. This is closer to our own time than it is to Euclid's. The relationship between these early texts and a modern edition such as this one is somewhat circuitous. It is believed that a Latin edition of the Elements may have circulated in the later Roman Empire, but all traces of it have long since disappeared. The first language into which we know Euclid was translated is Arabic. The earliest such translation appeared in the reign of Ha¯ru¯n al-Rashi¯d (786-809), the Caliph of Baghdad familiar from so many of the tales of The Thousand and One Nights. Although this edition is lost, several subsequent Arabic translations survive, which have been of some use in standardizing the text, on the hypothesis that the translators were working from good quality Greek manuscripts. All three of the earliest surviving Latin editions, by Athelhard of Bath (fl. c. 1120), Gherard of Cremona (1114-1187), and Johannes Campanus (fl. 1261-1281), are primarily translations from the Arabic. The third of these became the first edition of Euclid to appear in print, in Venice in 1482. Meanwhile, Greek manuscripts had been preserved, mostly in the monastic libraries of the Byzantine Empire, and began to trickle into western Europe, perhaps as early as the fourteenth century. The search for a sound Greek text is further complicated by the poor quality of some of these early manuscripts, which after all are at the very least copies of copies of copies. Moreover, the copyists may have had a poor grasp of the subject matter, or have been working from inferior originals. In particular, the earliest printed Greek edition of 1533, the so-called editio princeps, upon which many subsequent translations were based, was derived from especially weak manuscripts. More importantly, this and all editions of the Elements until the nineteenth century were ultimately derived from a version prepared by Theon of Alexandria in about AD 400. This edition contained many emendations, most not clearly marked, where Theon had sought to improve upon Euclid, sometimes successfully, sometimes not. Only in the early nineteenth century did the French scholar François Peyrard (1760-1822) identify a tenth-century manuscript, now known as P, from the Vatican Library as a copy of an earlier edition from which Theon's changes were absent. The precise connection between P and the so-called Theonine editions is complicated by textual evidence that suggests that Theon was working from an even earlier and more accurate edition. (After all, he had at his disposal the resources of the Library of Alexandria, where the finest editions still extant in the fourth century were likely to be located.) Some early nineteenth century editions of the Elements, including Peyrard's own, made minor use of P, but modern Euclid scholarship really began in 1883-8 with the publication of a Greek text prepared by the Danish philologist Johan Ludvig Heiberg (1854-1928). He based his text primarily on P and, by comparing it with many other manuscripts, was able to reconstruct an edition similar to that which circulated prior to Theon's editorial intervention. All important modern editions of the Elements are derived from Heiberg's text. The first English edition of the Elements appeared in 1570 in a translation by Sir Henry Billingsley (c. 1545-1606), who subsequently became Lord Mayor of London. This substantial volume contains numerous annotations, and an extensive preface by John Dee (1527-1608), better known for his interest in magic, who may have had a hand in the translation as well. Of the many subsequent English editions, the most influential was first published in 1756 by Robert Simson (1687-1768). However, the 1908 translation by Thomas Heath (1861-1940) contained in this book was the first to employ Heiberg's text and eclipses all its predecessors. Heath was one of the last great academic amateurs: Despite being a world class expert in the history of mathematics, he spent his entire working life in the British Civil Service, rising to the senior rank of Permanent Secretary to the Treasury, a post he held throughout the First World War. He was knighted twice, in 1909 and 1916, for his administrative work, and awarded fellowships of the Royal Society and the British Academy for his academic achievements. Besides his edition of Euclid's Elements, his publications included editions of the surviving works of Archimedes and Apollonius, and influential histories of Greek mathematics and astronomy. In 1925 Heath produced a second edition, 'revised with additions', of the Elements. Most of the text was reproduced photographically and the changes are comparatively minor: The 'Addenda and Corrigenda' were incorporated into the main text, and two short essays were added on Pythagoras and on the traditional names associated with the proposition I.5 ('Pons Asinorum' or 'Asses' Bridge' and 'Elefuga' or 'the flight of the miserable') and Pythagoras's Theorem I.47 ('the Franciscan's cowl', 'Dulcarnon', 'the bride's chair', and 'the theorem of the bride'). However, the most conspicuous asset of Heath's edition, the enormous volume of annotation explaining Euclid's work and linking it to that of subsequent mathematicians, was largely unchanged. Heath's methods have been brought up to date in the most recent new edition, Bernard Vitrac's extensively annotated French translation, published in four volumes between 1990 and 2001. Unfortunately for the non-francophone, this work is unavailable in English. Important developments in geometry since Euclid's day include the splitting off of trigonometry, algebra and number theory as separate disciplines, the invention of analytic or coordinate geometry and the eventual heresy of non-Euclidean geometry. As we observed above, all of Euclid's results are arrived at geometrically, although many of them would seem more natural to us in a different mathematical idiom. For example, Propositions II.12 and II.13 are more familiar as the laws of cosines for obtuse and acute angles. However, the systematic study of trigonometry did not begin until more than a century after the Elements, with the work of Hipparchus of Nicaea (c. 180-125 BC). Similarly, Greek interest in algebraic and number theoretic problems remained essentially wedded to geometry until the publication of the Arithmetica of Diophantus (fl. c. AD 250), which may have been influenced by much earlier Babylonian work, probably unknown to Euclid. Euclid's practice of solving algebraic problems with geometry was eventually inverted, with the application of a much more sophisticated algebra to geometry, most notably in the work of René Descartes (1596-1650), who pioneered the use of algebraic equations as representations of geometrical curves. The story of non-Euclidean geometry is one of frequent ineffectual anticipation: Numerous mathematicians stumbled on the idea, but failed either to realize its importance or to attract any attention to their discovery. The earliest was Geralmo Saccheri (1667-1733) in his 1733 book Euclides ab Omni Naevo Vindicatus: 'Euclid Cleared of Every Flaw'-an ironic choice of title given the nature of the discoveries it contains. He had intended to resolve the perceived awkwardness of the fifth or parallel postulate, by deriving it from the others, an exercise attempted since antiquity. His chosen method was innovative: An application of the rule of logical inference known to the medievals as consequentia mirabilis: that any proposition entailed by its own negation must be true. Hence he sought to show that, even if we assume the parallel postulate is false, it will still be possible to derive the postulate, thereby demonstrating that the postulate must be true after all. Saccheri's demonstration doesn't work, although he seems not to have realized this, precisely because the systems arising from the denial of the parallel postulate are internally consistent alternative geometries. Saccheri considered three possibilities, that the angles in a triangle add to two right angles (180°), to more than that, or to less than that, which give rise respectively to Euclidean geometry, elliptic or Riemannian geometry, and hyperbolic or Lobachevskian geometry. Saccheri's accomplishment was repeated by Johann Heinrich Lambert (1728-1777) in his Die Theorie der Parallellinein, posthumously published in 1786. Lambert came closer than Saccheri to realizing that he had found a new form of geometry, but not quite close enough. The first mathematician who did realize the magnitude of this discovery was Carl Friedrich Gauss (1777-1855), perhaps the greatest mathematician of his generation. However, Gauss did not publish his results on non-Euclidean geometry, which only became widely known after his death. In the meantime, hyperbolic geometry had been independently and simultaneously rediscovered by two comparatively obscure figures, the Russian Nicolai Ivanovich Lobachevsky (1793-1856) and the Hungarian Janos Bolyai (1802-1860). Both recognized their discovery for what it was in papers coincidentally each published in 1829. However, the material remained obscure and little read; Gauss praised both works privately, but declined to do so in print, perhaps conscious of how controversial such a departure from Euclid would appear. Georg Riemann (1826-1866) developed an elliptic geometry in his seminal habilitation lecture of 1854, which was, however, not published until after his death. Finally, Hermann von Helmholtz (1821-1894) independently arrived at and extended Riemann's ideas in papers published from 1868 onwards. Hence it was only in the 1870s, nearly a century and a half after the earliest work in the field that non-Euclidean geometry became widely known amongst mathematicians. Riemann had observed that since Euclidean geometry is not unique, whether we live in a Euclidean or non-Euclidean world is a question for physicists, not mathematicians. Famously, it was settled in favor of non-Euclidean geometry with the confirmation of Albert Einstein's (1879-1955) theory of Non-Euclidean geometry establishes the logical independence of Euclid's controversial fifth postulate. A more modest arbitrariness attaches to his first three postulates, which constrain the admissible methods of proof to straight edge and compass constructions. Crucially, neither device can be used to transfer a magnitude from one part of a construction to another: The straight edge is not a ruler, and the compass collapses when lifted from the page. These constraints make it impossible to solve three famous ancient geometrical puzzles: The trisection of an arbitrary angle, the doubling of a cube, and the squaring of a circle. This impossibility may have been suspected by the Greeks, but it only received a rigorous demonstration in the work of Gauss and Evariste Galois For much of its history Euclid's Elements was a paragon of mathematical rigor. All other fields of mathematics lagged behind until modern times: This includes fields such as logic and arithmetic, which would now be considered more foundational than geometry. As recently as the eighteenth century, many mathematicians sought to ground all mathematics in geometry. However by the nineteenth century it was clear that Euclidean geometry could be given a more rigorous foundation than that provided by Euclid. This work was pioneered by Moritz Pasch (1843-1930) and perfected by David Hilbert (1862-1943) in his 1899 Grundlagen der Geometrie. Hilbert's axiomatization achieves a far higher standard of rigor than Euclid's, with the mutual consistency and independency of the axioms explicitly proven, and the empirical specifics of the subject matter wholly abstracted away. As Hilbert is said to have remarked, "One must at all times be able to replace 'points, lines, planes' by 'tables, chairs, beer mugs'." Hilbert's work may be understood as the elimination of diagrams from geometry: An important criticism of the Elements was that the diagrams often form essential parts of the proofs. However, diagrams remain essential for the effective study of geometry. Greek geometers are often represented scratching diagrams in sand with a stick, and they also used them freely in their texts, as had the Egyptians before them. Over the centuries a variety of proposals have been made to facilitate understanding by improving geometrical diagrams. Components that are variously given, unknown, or constructed have been distinguished in various ways, often by the thickness of the lines, but in at least one edition by color. One of the most helpful innovations of all is a product of the Internet. David Joyce of Clark University, Massachusetts, has made interactive versions of all Euclid's diagrams available online in a form that allows one to manipulate the variable parts and instantly see the changes to the whole. The formal methods of proof characteristic of geometry and logic both originated in ancient Greece at much the same time, although probably independently of each other. (Euclid often provides separate proofs for propositions which in logic would follow immediately from propositions he has already established.) Hilbert's work finally demonstrated the superiority of logic to geometry as a foundation for mathematics, but the disciplines also rivaled each other as models of reasoning. In particular, the account of geometrical 'analysis and synthesis' provides a valuable insight into the heuristics, or problem-solving techniques, in use amongst the Greeks. Analysis describes the method of working backwards from the sought conclusion to known propositions, whereas synthesis proceeds in the opposite direction. This has been a direct inspiration for modern studies of mathematical problem solving, such as that of George Pólya (1887-1985). This work in turn has been a source for research into automated theorem proving: the programming of computers to prove mathematical theorems. One of the earliest successes of this approach was the independent discovery by a computer program called the Geometry Machine, developed in the 1950s by Herbert Gelernter of IBM, of a more natural proof of Euclid's I.4. Unbeknownst to Gelernter (or his program), this proof was first discovered by Pappus (fl. c. 300 AD), to whom we owe the most detailed account of analysis and synthesis to have survived. In 1879 the Oxford mathematician Charles Dodgson (1832-1898), better known as Lewis Carroll, published a book with the tantalizing title Euclid and his Modern Rivals. However, the rivals he had in mind were not the pioneers of non-Euclidean geometry, a subject of which English mathematicians were only just beginning to take notice, but rather the authors of competing geometry textbooks. Sadly we can only speculate what the author of Through the Looking Glass might have done with such rich material. However, Carroll's determination to defend the superiority of Euclid for teaching purposes was not quite the quixotic or reactionary enterprise that it might at first appear. Indeed he makes short work of the vast majority of the books he considers, showing up fallacies and confusions in their attempts to improve upon Euclid, not much different from those in Theon's work 1500 years earlier. Two crucial exceptions to Carroll's critique are the books by the American Benjamin Peirce (1809-1880) and the Frenchman Adrien Marie Legendre (1752-1833), both of which are dismissed not for their errors, but for their supposed unsuitability for beginning students. There is something in this: As the introduction to the modern edition of Carroll's book observes, "there is much to be said for his standpoint that the degree of rigor in Euclid's Elements is just right for high school." However, Legendre's 1794 book Éléments de Géométrie represents an important redevelopment of the teaching of geometry, simplifying Euclid's exposition by the use of trigonometrical and algebraic techniques. This work ran into many editions, and became particular influential in American geometry teaching. In the twentieth century, geometry teaching has been largely in retreat, occupying less and less of the mathematics syllabus in both schools and universities, while what is taught is more in the spirit of Legendre than of Euclid. However, there are some signs of a recent return to Euclid. J. L. Heilbron's richly illustrated Geometry Civilized "follows more or less in order the material in Books I-IV, and some of that in Book VI of the Elements." While this work is intended to be accessible to the high school student, as well as the general reader, Robin Hartshorne's Geometry: Euclid and Beyond offers a similarly structured approach to the subject for the undergraduate mathematician. Euclid's place in the present century seems as assured as it has been for the previous twenty-three centuries. Read More Show Less
{"url":"http://www.barnesandnoble.com/w/elements-books-i-xiii-euclid/1106233501?ean=9780760763124","timestamp":"2014-04-18T22:14:10Z","content_type":null,"content_length":"208701","record_id":"<urn:uuid:0b7ec0d2-bf91-4224-b9b9-e13e1b7203ba>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Generating Realistic Large Bayesian Networks by Tiling Ioannis Tsamardinos, Alexander Statnikov, Laura E. Brown, Constantin F. Aliferis In this paper we present an algorithm and software for generating arbitrarily large Bayesian Networks by tiling smaller real-world known networks. The algorithm preserves the structural and probabilistic properties of the tiles so that the distribution of the resulting tiled network resembles the real-world distribution of the original tiles. By generating networks of various sizes one can study the behavior of Bayesian Network learning algorithms as a function of the size of the networks only while the underlying probability distributions remain similar. We demonstrate through empirical evaluation examples how the networks produced by the algorithm enable researchers to conduct comparative evaluations of learning algorithms on large real-world Bayesian networks. Subjects: 12. Machine Learning and Discovery; 12.2 Scientific Discovery Submitted: Feb 10, 2006 This page is copyrighted by AAAI. All rights reserved. Your use of this site constitutes acceptance of all of AAAI's terms and conditions and privacy policy.
{"url":"http://aaai.org/Library/FLAIRS/2006/flairs06-116.php","timestamp":"2014-04-16T04:21:11Z","content_type":null,"content_length":"2854","record_id":"<urn:uuid:0f4b46cd-1f21-454e-a1c5-8d062c7b1470>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
mirror image problem June 27th 2010, 05:07 AM mirror image problem find the image of y=x^2+2x+1 in the line y=x-1 i know how to find image of point on line but how to find image of a curve on line is difficult June 27th 2010, 06:44 PM Now the sophisticated way to do this would be using linear algebra. But i'm too lazy, so i'm going to suggest something more elementary, that to me makes sense would work, and if anyone finds a problem with it, they can point it out. you want to reflect in the line y = x - 1. (This means also that x = y + 1), so make these substitutions: The curve $y = x^2 + 2x + 1 = (x + 1)^2$ becomes $x - 1 = [(y + 1) + 1]^2$ Now, solve for $y$. And note that this is not a function. You can graph it piece-wise though June 27th 2010, 09:13 PM why have we substituted the values of x and y
{"url":"http://mathhelpforum.com/geometry/149494-mirror-image-problem-print.html","timestamp":"2014-04-19T04:13:26Z","content_type":null,"content_length":"4697","record_id":"<urn:uuid:3a4a1215-2135-47a3-83fe-44738a5be521>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Help Simplifying an Equation I'm trying to figure out how they simplified this: $[a_1 + d]+[a_1+(n-2)d]$ to this: $2a_1 + (n-1)d$ I get how the two $a_1$s get to $2a_1$ but I can't figure out how they're getting the two $d$s two just one. Thanks in advance, maroonblazer wrote:I'm trying to figure out how they simplified this: $[a_1 + d]+[a_1+(n-2)d]$ to this: $2a_1 + (n-1)d$ What did you get when you multiplied out the second bit? You got to here: . . . . .$a_1\, +\, d\, +\, a_1\, +\, nd\, -\, 2d$ . . . . .$a_1\, +\, a_1\, +\, nd\, +\, 1d\, -\, 2d$ ...and then what? Re: Help Simplifying an Equation $2a_1+nd-1d = 2a_1+(n-1)d$ Thank you!
{"url":"http://www.purplemath.com/learning/viewtopic.php?p=6865","timestamp":"2014-04-17T15:34:44Z","content_type":null,"content_length":"20595","record_id":"<urn:uuid:cbf3d505-a0aa-4762-8885-26162b026590>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Jeremiah Flaga This is the problem specification of the Milking Cows Programming Problem From USACO. Milking Cows Three farmers rise at 5 am each morning and head for the barn to milk three cows. The first farmer begins milking his cow at time 300 (measured in seconds after 5 am) and ends at time 1000. The second farmer begins at time 700 and ends at time 1200. The third farmer begins at time 1500 and ends at time 2100. The longest continuous time during which at least one farmer was milking a cow was 900 seconds (from 300 to 1200). The longest time no milking was done, between the beginning and the ending of all milking, was 300 seconds (1500 minus 1200). Your job is to write a program that will examine a list of beginning and ending times for N (1 <= N <= 5000) farmers milking N cows and compute (in seconds): ● The longest time interval at least one cow was milked. ● The longest time interval (after milking starts) during which no cows were being milked. PROGRAM NAME: milk2 │Line 1: │The single integer │ │Lines 2..N+1:│Two non-negative integers less than 1000000, the starting and ending time in seconds after 0500 │ SAMPLE INPUT (file milk2.in) A single line with two integers that represent the longest continuous time of milking and the longest idle time. SAMPLE OUTPUT (file milk2.out) You can view my solution (source code) to this problem at
{"url":"http://jeremiahflaga.blogspot.com/2011/09/milking-cows-programming-problem-from.html","timestamp":"2014-04-16T04:36:59Z","content_type":null,"content_length":"81278","record_id":"<urn:uuid:a595d1de-f8cd-4537-9636-c2dddd3df271>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Cool Math Games and Activities- XGerms Review There are times when I want my kids to work on their school work independently because I need to take care of some other business. When they’re in the first few grades of elementary school, that can be pretty hard for them to do. Other times, I notice that my kids need to practice their basic math facts so that they can move on to more complicated problems. I compiled a list of cool math games and activities that reinforce important math facts that I’ve used over the years. I find that having a few different methods at my disposal increases my kids’ interest and willingness to practice. In this post, I’ll tell you about XGerms by k12. In this game, the kids answer addition, subtraction, multiplication, division or a combination of each of these the types of questions. For every question they get right, they capture a germ in the germ lab. The goal is to capture a bunch of little germs. Once those germs are captured, kids answer a few questions so they can capture the big germ. Getting Started Your child will be able to set up their avatar. As you can see from the graphic below, there are many options for the avatar’s appearance. They also get to choose their name. This makes it possible for multiple children to play xgerms and keep track of their progress on the same computer- though not at the simultaneously. Select a Germ The next step is to select a germ to capture. This only needs to be done the first time the math game is played. After that, they won’t need to do this step. Capturing smaller germs First, the kids must solve problems to capture the smaller germs that surround, and protect, the big germ. Their goal is to solve the questions correctly so they don’t waste goop. Their second goal is to solve math problems as quickly as possible. There is a timer above the math question. You’ll notice that there is a bar with red, yellow, and green (left to right). First the green will disappear, then the yellow, then the red. Once the red runs out, you’re out of time. Capturing the Big Germ This task isn’t as easy as capturing the smaller germs. 5 math problems solved correctly must be answered. You’ll know how you’re progressing by looking at the hearts. Red hearts mean the germ has full strength. No red hearts means that germ is tired and ready to be captured. The XGrid One way for parents to keep track of progress is to look at the x-grid. The more green dots on the chart, the more proficient the student is with their math facts. Yellow mean that they know the fact, but they could answer more quickly. Red means that the student either ran out of time, answered incorrectly, or took a long time to answer the question. As long as their avatar information isn’t deleted from the computer, the student can work on xgerms until they have filled out the entire xgrid, no matter how many days it takes. If your child uses different computers to access xgerms, their progress will not transfer from one computer to another. How to Delete XGerms from your computer or device Technically, XGerms isn’t on your computer. But, if you want all everything associated with XGerms removed from your computer you could erase your cookies as well as click remove player on this page. How to access Xgerms XGerms is available for any device equipped with Flash. This means that computers and mobile devices (not iphone) can run this program. Many people access these games through the K12 curriculum. However, people who do not have K12 can access it for free on their XGerms Multiplication. Why I recommend Xgerms Kids need to memorize their addition, subtraction, multiplication, and division facts. This is a fun way, especially for budding scientists, to practice. It beats doing timed worksheets and flash cards, though we still do those, because kids can do them independently while parents make lunch, take care of other kids, etc. My kids look forward to playing this math game, which k12 recommends practicing between 10-20 minutes per day. Some of my friends’ kids would play it much longer and every day of the week if given the opportunity. I still recommend using other methods to practice so that more connections and pathways are created in the brain’s neuro-circuitry and so that they aren’t glued to the computer or mobile device too long. When math facts are memorized, higher level math levels won’t be difficult later on. Practice! Practice! Practice! If you are looking for online and offline games that help kids practice and/or learn math skills, there are many online resources for parents. I think we’re blessed to live in a time when we have so much information at our finger tips. If you have more ideas, please share! I’m anxious to hear about them. Then, please share this site with your friends. Updated Links: 1) Race Car Fast Facts 2) Baseball Fast Facts 3) xGerms Addition 4) xGerms Multiplication 5) xGerms Subtraction 6) xGerms Division 7) xGerms Math Mashup Cool Math Games and Activities- XGerms Review — 13 Comments 1. The link you posted for the multiplication demo site is not working. It basically takes me to the K12 page and says access denied. Stinks because we used to use K12, but we moved and we can’t do it where we live now. My kids loved this game! □ WOW! You’re right! The link is dead. My kids were using it not too long ago, but took a break from it when we had Christmas break. They’ll be upset, too. I looked all over the place, including K12′s Think Tank blog and had no luck there. I’ll repost a link if I am so lucky as to find a functioning site for non-K12 users ☆ I went through my browser’s history and found another link that takes you to XGerms Multiplication. I added it to the post above. 2. Hmm… I use for “xGerms Math Mash-Up”. Does this work? □ Yes, the link you shared worked for me. Thank you for sharing the link. I was looking for valid links since my old links were dead. I appreciate it very much! □ Yes, it did work. 3. what web sight is it on? □ http://k12.http.internapcdn.net/k12_vitalstream_com/CURRICULUM/381894/CURRENT_RELEASE/Comp_Fluency_xGerms.html This link worked as of 4/13/13 4. We loved the xgerms and the links aren’t working again:-(any chance you can share again? We enjoyed k12 and xgerms multiplication was my daughters all time favorite game. □ Kristen, Thank you for your comments. I am happy you like the x-germs games. I tried using the links in the “updated links” section just above the “pin it” button” and they worked for me. I’d love to know if they work for you or not. Thank you, 5. i love the game □ Thanks, Tomas! Do you play XGerms or is it for another member of your family?
{"url":"http://homeschoolingintheburbs.info/cool-math-games-and-activities-xgerms","timestamp":"2014-04-16T22:30:57Z","content_type":null,"content_length":"83544","record_id":"<urn:uuid:1c72bcd2-9377-4abd-a5da-daa3065592de>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Good Approximations Copyright © University of Cambridge. All rights reserved. 'Good Approximations' printed from http://nrich.maths.org/ Why do this problem? For a better understanding of rational and irrational numbers. Possible approach Use this problem as part of a lesson series on number to include some or all of: • proof root 2 is irrational • converting periodic decimals to rational numbers • proof that every rational number has a periodic decimal expansion • the rational numbers are countable (see Route to Infinity ) • the irrational numbers are uncountable (see the article Infinity is not a number ). Key question Why are the finite continued fractions which follow a regular pattern called 'convergents'?
{"url":"http://nrich.maths.org/319/note?nomenu=1","timestamp":"2014-04-17T18:44:39Z","content_type":null,"content_length":"3897","record_id":"<urn:uuid:f3072c6a-601e-4245-b06d-0cd378d41c3c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
Villanova Algebra 2 Tutor Find a Villanova Algebra 2 Tutor ...I truly enjoy helping students achieve their goals. Thanks for visiting my page, and best of luck!Scored 780/800 on SAT Math in high school and 800/800 on January 26, 2013 test. Routinely score 800/800 on practice tests. 19 Subjects: including algebra 2, calculus, statistics, geometry ...I am finding more and more as I get older that critical thinking is rarely taught and greatly needed. I feel that getting experience teaching students one on one is the best way for me to have an immediate impact. This will especially help to personalize the teaching experience and is an effective way to create a trusting relationship. 16 Subjects: including algebra 2, Spanish, calculus, physics ...I graduated summa cum laude, with a BS in mathematics, a BA in humanities, and a BAH in honors. I also minored in classics, philosophy, history, and theology. During my undergraduate career, I tutored mathematics at the Villanova Mathematics Learning and Resource Center (MLRC), primarily in Calculus I, II, and III, Differential Equations, and Linear Algebra. 26 Subjects: including algebra 2, English, writing, reading ...My name is Blaise. I have five years classroom experience, and have been tutoring on the side since college. I was an employee of West Chester's tutoring center and achieved Master - Level 3 certification from the College Reading and Learning Association. 8 Subjects: including algebra 2, calculus, geometry, algebra 1 ...Around that time, I struck out on my own, and provided in home math tutoring to middle school, high school, and college students for several years until 2008. Today, even though my career has taken me away from tutoring full-time, I continue to tutor math because it is one of my favorite subject... 12 Subjects: including algebra 2, calculus, writing, geometry
{"url":"http://www.purplemath.com/villanova_pa_algebra_2_tutors.php","timestamp":"2014-04-18T16:11:45Z","content_type":null,"content_length":"23956","record_id":"<urn:uuid:efae009a-ad1f-4686-b029-cc464c19d24d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Redmond, WA Prealgebra Tutor Find a Redmond, WA Prealgebra Tutor ...I started playing volleyball on a team in junior high school. I played through high school. I play on the beach every so often and enjoy the sport. 39 Subjects: including prealgebra, reading, writing, English ...If you don't understand something or can't solve an algebra problem, I can simplify it until you get it and solve it all by yourself. I enjoy tutoring Algebra 2, trying to make it interesting and easy to learn. With my years of tutoring experience, I've helped many students improve their math grades. 13 Subjects: including prealgebra, geometry, Chinese, algebra 1 ...I'm also nearly fluent in Spanish, and would be happy to converse with students taking Spanish classes. I like to communicate plainly and simply, and have always enjoyed presenting material in a way that I find easy to understand, and like to approach the subject matter so that it becomes engagi... 39 Subjects: including prealgebra, English, Spanish, reading ...In addition, I taught biology at St. Louis College of Pharmacy. There was a genetics component to the course there. 6 Subjects: including prealgebra, chemistry, biology, algebra 1 ...In my methods I avoid lecturing. Students get plenty of that in the classroom and since they need a tutor, lecturing proves to be ineffective. Instead, I teach them to ask questions and look for answers. 20 Subjects: including prealgebra, reading, calculus, geometry Related Redmond, WA Tutors Redmond, WA Accounting Tutors Redmond, WA ACT Tutors Redmond, WA Algebra Tutors Redmond, WA Algebra 2 Tutors Redmond, WA Calculus Tutors Redmond, WA Geometry Tutors Redmond, WA Math Tutors Redmond, WA Prealgebra Tutors Redmond, WA Precalculus Tutors Redmond, WA SAT Tutors Redmond, WA SAT Math Tutors Redmond, WA Science Tutors Redmond, WA Statistics Tutors Redmond, WA Trigonometry Tutors
{"url":"http://www.purplemath.com/Redmond_WA_prealgebra_tutors.php","timestamp":"2014-04-16T22:43:47Z","content_type":null,"content_length":"23628","record_id":"<urn:uuid:3d27e825-41b6-4a44-8c6a-cb1470910601>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Fast summation of erfc functions Direct computation of the weighted sum of N complementary error functions at M points scales as O(MN). The following code computes the same extremely fast to epsilon-precision in O(M + N). For example for N = M = 51,200 points while the direct evaluation takes around 17.26 hours the fast evaluation requires only 4.29 seconds with an error of around 1e-10. The code is written in C++ with a MATLAB wrapper. I have provided the compiled dll files for windows platform. For other OS you will have to recompile the C++ files. Read the technical report before proceeding. I would be interested to know if you used it any application. There is also a bit slower MATLAB version of the code. Download: [ C++ source code along with dlls ] Fast weighted summation of erfc functions. Vikas C. Raykar, R. Duraiswami, and B. Krishnapuram, CS-TR-4848, Department of computer science, University of Maryland, CollegePark. [abstract] [TR] [slides] [bib] We have embedded the above fast summation in an optimization algorithm for learning ranking functions. A fast algorithm for learning large scale preference relations. Vikas C. Raykar, Ramani Duraiswami, and Balaji Krishnapuram, In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, San Juan, Peurto Rico, March 2007, pp. 385-392. [paper] [slides] [bib] [ More details can be found in CS-TR-4848 ] [oral presentation] [code] Copyright Information The code was written by Vikas C. Raykar and is copyrighted under the Lesser GPL: Copyright (C) 2007 Vikas C. Raykar This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; version 2.1 or later. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. The author may be contacted via email at: vikas (at) umiacs (.) umd (.) edu.
{"url":"http://www.umiacs.umd.edu/labs/cvl/pirl/vikas/Software/fast_erfc_summation/fast_erfc_summation.htm","timestamp":"2014-04-20T06:50:58Z","content_type":null,"content_length":"7734","record_id":"<urn:uuid:7a30f9a9-91b2-411d-870c-86ec49b4de03>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Gradients and the sort =[ December 3rd 2007, 08:51 PM #1 Sep 2007 Gradients and the sort =[ Heya! =] I need help with a fairly easy maths assignment. I can't find my textbook, so I'm not sure how the formulas went. But I think it was a really easy process. =] Basically, we have to find the gradients of these measurements that we took. If you guys can guide me with one of the questions, then I should be able to figure out the rest for myself. =] Okay, so for one measurement, I took a measurement of 13cm vertically. And a measurement of 100cm horizontally. I should be able to find the gradient from those two, right? =[ (This is a picture of how the thing we measured looked like - Click I -think- it went something like, y-step over x-step. So that gives me a gradient of 0.13? Is that right? (It's not isn't it. =[ ) Thanks if you reply! =] Heya! =] I need help with a fairly easy maths assignment. I can't find my textbook, so I'm not sure how the formulas went. But I think it was a really easy process. =] Basically, we have to find the gradients of these measurements that we took. If you guys can guide me with one of the questions, then I should be able to figure out the rest for myself. =] Okay, so for one measurement, I took a measurement of 13cm vertically. And a measurement of 100cm horizontally. I should be able to find the gradient from those two, right? =[ (This is a picture of how the thing we measured looked like - Click I -think- it went something like, y-step over x-step. So that gives me a gradient of 0.13? Is that right? (It's not isn't it. =[ ) Thanks if you reply! =] $m = \frac{13 - 0}{0 - 100} = -0,13$ December 4th 2007, 12:01 AM #2 December 4th 2007, 12:50 AM #3 Sep 2007 December 4th 2007, 01:00 AM #4 December 4th 2007, 01:17 AM #5 Sep 2007 December 4th 2007, 01:19 AM #6
{"url":"http://mathhelpforum.com/math-topics/24101-gradients-sort.html","timestamp":"2014-04-19T23:39:52Z","content_type":null,"content_length":"48087","record_id":"<urn:uuid:ce02d6e2-221c-40fc-906f-e1d9c38c4b67>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00601-ip-10-147-4-33.ec2.internal.warc.gz"}