content
stringlengths
86
994k
meta
stringlengths
288
619
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: I'm at the moment studying complex numbers and having some troubles converting polar form to a+bi form. The problem is that I can't find the numerical representation of ex. sin 2pi/3 which is sqrt(3) /2. Is there any general way to figure this out or do I have to memorize the most common ones and hope I don't need more on future tests? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5076f2d2e4b02f109be39f0b","timestamp":"2014-04-17T07:13:49Z","content_type":null,"content_length":"42349","record_id":"<urn:uuid:db7f12a0-5f3b-494a-907c-06ef95543ce0>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
OpenX Source The acronym eCPM means ‘effective cost per mille’. It is the outcome of a calculation of the ad revenue generated by a banner or campagne, divided by the number of ad impressions of that banner or campaign expressed in units of 1,000. The ‘M’ for mille in the name comes from the Latin meaning 1,000. The formula to calculate eCPM is not all the complicated, once you realize the components that go into the computation. Calculating eCPM for a CPM campaign When the campaign has a rate expressed in CPM (cost per mille), there is not a lot that needs to be done. The CPM rate is by definition identical to the eCPM value of that campaign. Calculating eCPM for a CPC campaign For campaigns with a CPC rate, the calculation works like this: • you will need to lookup the number of ad impressions over a given period of time • you will also need to lookup the number of clicks on these ads over the same period of time • and finally you will need to know the CPC rate for the campaign Once you know this input data, the formula is: • multiply the number of clicks by the CPC rate to come up with total revenue • divide the number of impressions by 1,000, giving you the number of blocks of 1,000 impressions delivered • divide the total revenue by the number of blocks of 1,000 impressions to come up with the eCPM value Let’s use the following fictional numbers for an example: • a campaign with a CPC pricing was displayed 2 million times in a period of 1 day • in that same day, the ad server counted 5,000 clicks on the banners of the campaign • the CPC rate for the campaign was set to US$ 0.50 We can now calculate the eCPM of this campaign with a CPC rate: • total revenue was 5,000 clicks times $ 0.50 equals $ 2,500 • 2 million impressions equals 2,000 blocks of 1,000 impressions • the eCPM is $ 2,500 divided by 2,000, equals an eCPM of $ 1.25 In reality, you will probably never see nice round numbers like these, but if you use the formula explained above, it is simple to compute the eCPM value of a CPC campaign for any given time frame. Calculating eCPM for a CPA campaign For campaigns with a CPA rate (cost per sale or cost per lead), the formula is rather similar: • You will need to lookup the number of ad impressions over a given period of time • you will also need to lookup the number of sales or leads (OpenX calls these ‘conversions’) over the same period of time • and finally you will need to know the CPA rate for the campaign Once you know this input data, the formula is: • multiply the number of conversions by the CPA rate to come up with the total revenue • divide the number of impressions by 1,000, giving you the number of blocks of 1,000 impressions delivered • divide the total revenue by the number of blocks of 1,000 impressions to come up with the eCPM value Let’s use the following fictional numbers as an example: • a campaign with CPA pricing was displayed 2 million times in a period of 1 day • in the same day, the ad server measured 100 sales related to the banners that were displayed and clicked • the CPA rate for the campaign was set to $ 30 per sale We can now calculate the eCPM of this campaign with a CPA rate: • total revenue was 100 sales times $ 30 equals $ 3,000 • 2 million impressions equals 2,000 blocks of 1,000 impressions • the eCPM is $3,000 divided by 2,000, equals an eCPM of $ 1.50 Like in our previous example, it is highly unlikely that there are nice round numbers like this, but the formula is always the same. OpenX Source will automatically calculate the eCPM for you Fortunately, you do not have to calculate the eCPM outcomes yourselves. In the statistics screens of your OpenX Source ad server system, you can enable an extra column that will show this value automatically at all times. Is there a difference between eCPM and eRPM? The terms eCPM and eRPM mean exactly the same, the only difference is the context they are used for. When a publishers discusses an advertising campaign with an advertiser, the advertiser thinks of the total amount on the invoice as a cost. Therefore the advertiser will think in terms of eCPM. For the publisher sending the invoice, the same amount is a source of income or revenue. Therefore, the publisher could think in terms of eRPM. But essentially they are talking about the same numbers. Since sales people usually try to talk the same language as their customers, publisher usually just talk about eCPM when they discuss things with their advertisers, and that’s why everyone in the ad industry started using eCPM as the main term.
{"url":"http://www.openxconsultant.com/blog/2012/02/what-is-ecpm-and-how-is-it-calculated/","timestamp":"2014-04-18T21:38:41Z","content_type":null,"content_length":"27175","record_id":"<urn:uuid:402ac55a-36be-4cf0-85e2-ab2e51385f59>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Levels of Routines Next: Data Types and Precision Up: Structure of LAPACK Previous: Structure of LAPACK &nbsp Contents &nbsp Index Levels of Routines The subroutines in LAPACK are classified as follows: • driver routines, each of which solves a complete problem, for example solving a system of linear equations, or computing the eigenvalues of a real symmetric matrix. Users are recommended to use a driver routine if there is one that meets their requirements. They are listed in Section 2.3. • computational routines, each of which performs a distinct computational task, for example an LU factorization, or the reduction of a real symmetric matrix to tridiagonal form. Each driver routine calls a sequence of computational routines. Users (especially software developers) may need to call computational routines directly to perform tasks, or sequences of tasks, that cannot conveniently be performed by the driver routines. They are listed in Section 2.4. • auxiliary routines, which in turn can be classified as follows: □ routines that perform subtasks of block algorithms -- in particular, routines that implement unblocked versions of the algorithms; □ routines that perform some commonly required low-level computations, for example scaling a matrix, computing a matrix-norm, or generating an elementary Householder matrix; some of these may be of interest to numerical analysts or software developers and could be considered for future additions to the BLAS; □ a few extensions to the BLAS, such as routines for applying complex plane rotations, or matrix-vector operations involving complex symmetric matrices (the BLAS themselves are not strictly speaking part of LAPACK). Both driver routines and computational routines are fully described in this Users' Guide, but not the auxiliary routines. A list of the auxiliary routines, with brief descriptions of their functions, is given in Appendix B. Next: Data Types and Precision Up: Structure of LAPACK Previous: Structure of LAPACK &nbsp Contents &nbsp Index Susan Blackford
{"url":"http://www.netlib.org/lapack/lug/node22.html","timestamp":"2014-04-19T09:34:36Z","content_type":null,"content_length":"5990","record_id":"<urn:uuid:16489721-60b2-4226-a891-d19350fd8574>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
City Block Distance <Previous | Next | Content> It is also known as Manhattan distance, boxcar distance, absolute value distance. It represents distance between points in a city road grid. It examines the absolute differences between coordinates of a pair of objects. For example: cost time weight incentive Object A 0 3 4 5 Object B 7 6 3 -1 Point A has coordinate (0, 3, 4, 5) and point B has coordinate (7, 6, 3, -1). The City Block Distance between point A and B is Try the interactive program below. Suppose you walk to the North 3 meters and then to the East 4 meters, what is the city block distance to the origin location? Put A = (0, 0) to indicate the origin and B = (3, 4) as the current location, the City Block distance represents the length of walking distance. Input coordinate values of Object-A and Object-B (the coordinate are numbers only), then press "Get City Block Distance" button. The program will directly calculate when you type the input. City block distance is a special case of Minkowski distance with The pattern of City block distance in 2-dimension is diamond. When the sink is on the center, it forms concentric diamonds around the center. <Previous | Next | Content> Rate this tutorial This tutorial is copyrighted. Preferable reference for this tutorial is Teknomo, Kardi. Similarity Measurement. http:\\people.revoledu.com\kardi\ tutorial\Similarity\
{"url":"http://people.revoledu.com/kardi/tutorial/Similarity/CityBlockDistance.html","timestamp":"2014-04-20T15:51:36Z","content_type":null,"content_length":"19650","record_id":"<urn:uuid:33e29b89-400f-4b02-89fd-1aa810de8040>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
periodic functions May 26th 2010, 07:17 PM #1 Junior Member May 2010 Can anyone solve this??? i am so lost! Question 1 a) State the amplitude (A) the period (T) and the frequency (F) for the periodic function - 120sin(100 pi t)V. b) Through how many degrees does a wheel rotate in 10 ms if its angular velocity is 8 rad/s. c) Sketch one cycle of the given function showing the values of each variable where the curve crosses the axis. V=50sin(4.2*10^2t) Can anyone solve this??? i am so lost! Question 1 a) State the amplitude (A) the period (T) and the frequency (F) for the periodic function - 120sin(100 pi t)V. b) Through how many degrees does a wheel rotate in 10 ms if its angular velocity is 8 rad/s. c) Sketch one cycle of the given function showing the values of each variable where the curve crosses the axis. V=50sin(4.2*10^2t) Could you show us what you attempted so far or what part is confusing you? The 1st two problems are just using the formulas in your book. I am studying by correspondence and there is basicall next to no information in the materials i have given that really explains this to me. Here's what i can gather 1a) A=120 T= 100*pi b) Is the formula Angle=Angular Velocity*time.......I'm not given a unit for time though c) i have no idea???? Any help would be very much apprecviated I am studying by correspondence and there is basicall next to no information in the materials i have given that really explains this to me. Here's what i can gather 1a) A=120 T= 100*pi b) Is the formula Angle=Angular Velocity*time.......I'm not given a unit for time though c) i have no idea???? Any help would be very much apprecviated So you have the amp right but your peroid should be $\frac{2\pi}{100\pi} = \frac{1}{50}$ the freq is the reciprocal of the period That makes more sense. Thanks heaps. Could you provide any insight into (b) or (c)? much appreicated May 26th 2010, 07:37 PM #2 May 26th 2010, 07:46 PM #3 Junior Member May 2010 May 26th 2010, 07:58 PM #4 May 26th 2010, 08:31 PM #5 Junior Member May 2010
{"url":"http://mathhelpforum.com/trigonometry/146595-periodic-functions.html","timestamp":"2014-04-17T01:52:50Z","content_type":null,"content_length":"41222","record_id":"<urn:uuid:99e03179-99f0-4543-b739-84e9fe30ed2e>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Shipman's field question JoeShipman@aol.com JoeShipman at aol.com Thu Oct 26 10:23:32 EDT 2006 -----Original Message----- >From: marker at math.uic.edu >To say a bit more about your original question. >Tom Scanlon (building on work of Pop, Poonen and others) has >recently proved that if K is a finitely generated field, there is a >sentence describing K up to isomorphism among the finitely generated >fields. In particular, this proves Pop's conjecture that elementarily >equivalent finitely generated fields are isomorphic. That's great, just the result I need. Can there be two nonisomorphic countable fields which have the same finitely generated subfields? (This question is slightly imprecise, because an easy way out would be if they have the same isomorphism classes of finitely generated subfields but different numbers of some given isomorphism class, if there is such an easy way out modify my question in the obvious way). Is the answer different for char 0 and positive characteristic? (In positive characteristic I can see non-separability creating difficulties....) -- JS -------------- next part -------------- An HTML attachment was scrubbed... URL: /pipermail/fom/attachments/20061026/8505034b/attachment.html More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2006-October/011026.html","timestamp":"2014-04-19T04:23:05Z","content_type":null,"content_length":"3549","record_id":"<urn:uuid:48e392ae-5d4f-4c24-954c-5a7a808cb202>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by ruby on Tuesday, July 22, 2008 at 6:29am. How would you reword the terms of this equation: (1/2)M Vo^2 = M g H Where H is the maximum height above the thrower's hand. H = VoČ/(2 g) = 11.5 m Could you substitute the letters H & M with Y, Yo, V, Vo, G, T, or use any of these four newton's equtions: V = Vo + gt Y = Yo + Vot + 1/2gt^2 Vo^2 + 2g(Y-Yo)-Y Average velocity = V + Vo/ 2 Thanx for alll the help. =) • physics - drwls, Tuesday, July 22, 2008 at 6:34am No eq • physics - drwls, Tuesday, July 22, 2008 at 6:35am No equation was provided • sorry! - ruby, Tuesday, July 22, 2008 at 6:39am For this problem: A person throws a ball upward with an initial velocity of 15 m per second. In order to calculate how high the ball goes, drwls gave me this formula below: It travels upwards until the vertical velocity component becomes zero. The calculaitons are below: (1/2)M Vo^2 = M g H Where H is the maximum height above the thrower's hand. H = Vo^2/(2g) = 11.5 m Could you substitue H and M for V,Vo,T,Y,Yo,g,a, and could you possible use and of the four newton's equations listed below, to rephrase this? 1.) V = Vo +gt 2.) Y = Yo + Vot + 1/2gt^2 3.) Vo^2 + 2g(Y - Yo)-Y 4.) Average V = V + Vo/ 2 • physics - Dr Russ, Tuesday, July 22, 2008 at 7:05am For a body already moving upwards with kinetic energy 1/2mu^2 (u=your Vo)the kinetic energy at a point on the upward journey is 1/2mv^2=1/2mu^2-mgh cancelling m 1/2v^2 = 1/2u^2-gh v^2=u^2-2gh (more familiarly when falling v^2=u^2+2gh) So for your problem v^2+2gh=u^2 and v=0 at the top so or h=u^2/2g (=Vo^2/2g as you have) does this help? • physics - drwls, Tuesday, July 22, 2008 at 7:54am I don't know what you mean by "rephrasing" the simple equation H = Vo^2/(2g) that provides the answer. H is the height the ball rises Vo is the initial velocity that it is thrown upwards g is the acceleration of gravity (9.8 m/s^2) If you are looking for another way of deriving the equation starting from Newton's Laws, that can be done, but you will still get the same answer. By the way, one of your statements (3) Vo^2 + 2g(Y - Yo)-Y is not an equation. A correct equation would be V^2 = Vo^2 - 2 g (Y - Yo) if Y is measured positive upwards. This equation leads directly to the one I provided earlier, since Y-Yo = H and V = 0 at the highest point of the trajectory • physics - ruby, Tuesday, July 22, 2008 at 7:41pm Yes! thank you both! this really helps1 □ physics - ruby, Tuesday, July 22, 2008 at 7:42pm !* =) Related Questions Physics Please - You want to show that V^2 - Vo^2 = 2 a (X - Xo) first rewrite ... math - a rock is thrown straight up in the air from an initial height of ho, in... Physics/Science #3 - A ball is thrown vertically downward from the top of a 35.9... science - A startled armadillo leaps upward, rising .544 m in eh first .200s a) ... physics - You are investing an accident in which a vehicle traveling with a ... Physics PLEASE - Of course you have to rearrange it X -Xo = (V-Vo)(V+Vo)/2a (X-... MATH - The height h in feet of an object after t seconds is given by the ... Precal. - If object is thrown upward with an intitial velocity of 15 ft per ... Physics (algebra bit) - rearanging V^2 = Vo^2 + 2 a (X - Xo) for V gives me V... maths(urgent....please...) - A ball is hit at an angle of 36 degrees to the ...
{"url":"http://www.jiskha.com/display.cgi?id=1216722573","timestamp":"2014-04-17T16:24:51Z","content_type":null,"content_length":"11387","record_id":"<urn:uuid:d5df9ccd-1bd0-49ff-9f47-66de29c9285e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
|X| is a random variable Well, in this case let see that for the sample space S={1,2,3}, X is a random variable such that X(w) = {0.2, 0.4, 0.4}, for every w from S. In this particular example, |X| = X. And, I presume you meant to say that: X(1) = 0.2 X(2) = 0.4 X(3) = 0.4 For |X|, say |X|^-1, inverse image of |X|, how can we map back an outcome to the sample space? I'm not even sure what you're asking here. Maybe explicitly typing things would help. I'm going to define f(x) := |x| for clarity.. X is a random variable. It maps events to probabilities. f is a function. It maps outcomes to outcomes. f is a function. It maps random variables to random variables. (Yes, this is an abuse of notation -- we can "lift" ordinary functions of the outcomes to become functions on random variables. But since they're so closely related, we typically use the same notation for both the ordinary function and its "lift") f(X) is, therefore, a random variable. It maps events to probabilities. So it doesn't even make sense to ask what the "inverse" of f(X) would do to an outcome. Now, the ordinary absolute value function f does map outcomes to outcomes, and it makes sense to ask about its "inverse", or more rigorously, to ask about the inverse image of a set of outcomes. By definition: f^{-1}(E) = \{ x \in S \, | \, f(x) \in E \} so, if S is the set {1, 2, 3}, then f^(-1) (E) = E.
{"url":"http://www.physicsforums.com/showthread.php?p=2325564","timestamp":"2014-04-21T04:37:51Z","content_type":null,"content_length":"63163","record_id":"<urn:uuid:6f2f5a97-3719-4f61-9104-413c413b4f41>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Digital Event Timing Patent application title: Digital Event Timing Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP Methods, computer-readable mediums, and a circuit are provided. In one embodiment, a method is provided which obtains a digital sample. The method calculates a second derivative of the digital sample and thereafter determines when the second derivative passed through a zero crossing point. A master clock value and the second derivative value before and after the second derivate passes through zero are used to calculate a clock fraction and add the clock fraction to the master clock value. Thereafter, an event start signal is triggered to initiates signal processing. A method of determining the start time of a gamma interaction in a nuclear imaging detector, comprising the steps of: obtaining a digital sample of an energy signal from said nuclear imaging detector; calculating a second derivative of said digital sample; determining when said second derivative passed through a zero crossing point; retrieving a master clock value and said second derivative value before and after said second derivate passes through the zero crossing point; calculating a clock fraction and adding said clock fraction to said master clock value; and triggering an event start signal that initiates signal processing of signals from said nuclear imaging detector. The method of claim 1, further comprising the step of determining whether said digital sample exceeds a first predetermined threshold prior to calculating said second derivative. The method of claim 1, further comprising the steps of: calculating a first derivative of said digital sample; and determining whether said first derivative is positive prior to calculating said second derivative. The method of claim 1, wherein said obtaining step comprises the step of summing a preselected number of successive outputs of an analog-to-digital converter coupled to outputs of photosensors of said nuclear imaging detector. The method of claim 1, wherein said calculating of said clock fraction comprises: (SD_P)/(SD_P-SD_N), wherein SD_P represents said second derivative of said clock value of an event when said clock value is positive and SD_N represents said clock value of said second derivative at a next occurring event. The method of claim 1, wherein said clock fraction represents a start of a clock cycle to before and after said second derivative passes through the zero crossing point. A computer-readable medium having stored thereon a plurality of instructions, the plurality of instructions, when executed by a processor, cause the processor to perform the steps comprising of: obtaining a digital sample of an energy signal from a nuclear imaging detector; calculating a second derivative of said digital sample; determining whether said second derivative has passed through a zero crossing point; retrieving a master clock value, and said second derivative before and after said second derivative have passed through the zero crossing point; calculating a clock fraction and adding said clock fraction to said master clock value; and triggering an event start signal that initiates signal processing of signals from said nuclear imaging detector. The computer-readable medium of claim 7, further comprising: determining whether a value of said sample is greater than a first preselected value, when said second derivative has passed through the zero crossing point; determining whether an event signal was previously generated when a value of said sample was greater than a second preselected value, and triggering output of a pulse pile-up detection signal when results of said last two determining steps are affirmative. The computer-readable medium of claim 7, further comprising; determining whether said digital sample exceeds a first predetermined threshold prior to calculating said second derivative. The computer-readable medium of claim 7, further comprising: calculating a first derivative of said digital sample; and determining whether said first derivative is positive prior to calculating said second derivative. The computer-readable medium of claim 7, further comprising: summing a preselected number of successive outputs of an analog-to-digital converter coupled to outputs of photosensors of said nuclear imaging detector. The computer-readable medium of claim 7, wherein said calculating of said clock fraction comprises: (SD_P)/(SD_P-SD_N), wherein SD_P represents said second derivative of said clock value of an event when said clock value is positive and SD_N represents said clock value of said second derivative at a next occurring event. The computer-readable medium of claim 7, wherein said clock fraction represents a start of a clock cycle to before and after said second derivative passes through the zero crossing point. A circuit for determining the start time of a gamma interaction in a nuclear imaging detector, comprising: a first circuit adapted to obtain a digital sample of an energy signal from a nuclear imaging detector; a second circuit adapted to calculate a second derivative of said digital sample; a third circuit adapted to determine when said second derivative has passed through a zero crossing point; a fourth circuit adapted to retrieve a master clock value and said second derivative value before and after said second derivative passes through the zero crossing point, calculate a clock fraction and add said clock fraction to said master clock value; and a fifth circuit adapted to trigger, upon said determination that said second derivative has passed through the zero crossing point, an event start signal that initiates signal processing of signals from said nuclear imaging detector. The circuit of claim 14, further comprising: a sixth circuit adapted to determine, subsequent to said second derivative having returned to the zero crossing point, whether a value of said sample is greater than a first preselected value; a seventh circuit adapted to determine subsequent to said second derivative having returned to the zero crossing point, whether an event signal was previously generated when a value of said sample was greater than a second preselected value; and an eighth circuit adapted to trigger output of a pulse pile-up detection signal when results of said last two determinations are affirmative. The circuit of claim 14, further comprising: a ninth circuit adapted to determine whether said digital sample exceeds a first predetermined threshold prior to calculating said second derivative. The circuit of claim 14, wherein said calculating of said clock fraction comprises: (SD_P)/(SD_P-SD_N), wherein SD_P represents said second derivative of said clock value of an event when said clock value is positive and SD_N represents said clock value of said second derivative at a next occurring event. The circuit of claim 14, wherein said clock fraction represents a start of a clock cycle to before and after said second derivative passes through the zero crossing point. The circuit of claim 14, further comprising: means for calculating a first derivative of said digital sample and determining whether said first derivative is positive prior to calculating said second The circuit of claim 14, wherein said means for obtaining comprises means for summing a preselected number of successive outputs of an analog-to-digital converter coupled to outputs of photosensors of said nuclear imaging detector. FIELD [0001] Embodiments herein generally relate to nuclear medicine, and systems for obtaining images of a patient's body organs of interest. In particular, the present invention relates to a novel procedure and system for detecting the occurrence of valid scintillation events. DESCRIPTION OF THE RELATED ART [0002] Nuclear medicine is a unique medical specialty wherein radiation is used to acquire images that show the function and anatomy of organs, bones or tissues of the body. Radiopharmaceuticals are introduced into the body, either by injection or ingestion, and are attracted to specific organs, bones or tissues of interest. Such radiopharmaceuticals produce gamma photon emissions that emanate from the body. One or more detectors are used to detect the emitted gamma photons, and the information collected from the detector(s) is processed to calculate the position of origin of the emitted photon from the source (i.e., the body organ or tissue under study). The accumulation of a large number of emitted gamma positions allows an image of the organ or tissue under study to be displayed. Emitted gamma photons are typically detected by placing a scintillator over the region of interest. Such scintillators are conventionally made of crystalline material such as Nal(Tl), which interacts with absorbed gamma photons to produce flashes of visible light. The light photons emitted from the scintillator crystal are in turn detected by photosensor devices that are optically coupled to the scintillator crystal, such as photomultiplier tubes. The photosensor devices convert the received light photons into electrical pulses whose magnitude corresponds to the amount of light photons impinging on the photosensitive area of the photosensor device. Not all gamma interactions in a scintillator crystal can be used to construct an image of the target object. Some of the interactions may be caused by gamma photons that were scattered or changed in direction of travel from their original trajectory. Thus, one conventional method that has been used to test the validity of a scintillation event is to compare the total energy of the scintillation event against an energy "window" or range of expected energies for valid (i.e., unscattered) events. In order to obtain the total energy of the event, light pulse detection voltage signals generated from each photosensor device, as a result of a single gamma interaction, must be accurately integrated from the start of each pulse, and then added together to form an energy signal associated with a particular event. Energy signals falling within the predetermined energy window are considered to correspond to valid events, while energy signals falling outside of the energy window are considered to correspond to scattered, or invalid events, and the associated event is consequently not used in the construction of the radiation image, but is discarded. Without accurate detection of the start of an event, the total energy value may not be accurate, which would cause the signal to fall outside of the energy window and thereby undesirably discard a useful valid event. Another instance of inaccurate information may arise when two gamma photons interact with the scintillation crystal within a time interval that is shorter than the time resolution of the system (in other words the amount of time required for a light event to decay sufficiently such that the system can process a subsequent light event as an independent event), such that light events from the two gamma interactions are said to "pile up," or be superposed on each other. The signal resulting from a pulse pile-up would be meaningless, as it would not be possible to know whether the pulse resulted from two valid events, two invalid events, or one valid event and one invalid event. Different solutions to the pulse pile-up problem are known in the prior art. One such solution involves the use of pile-up rejection circuitry, which either precludes the detector from processing any new pulses before processing has been completed on a prior pulse, or stops all processing when a pile-up condition has been identified. This technique addresses the problem of post-pulse pile-up, wherein a subsequent pulse occurs before processing of a pulse of interest is completed. Such rejection circuitry, however, may undesirably increase the "deadtime" of the imaging system, during which valid gamma events are being received but are not able to be processed, thereby undesirably increasing the amount of time needed to complete an imaging procedure. Another known technique addresses the problem of pre-pulse pile-up, wherein a pulse of interest is overlapped by the trailing edge or tail of a preceding pulse. This technique uses an approximation of the preceding pulse tail to correct the subsequent pulse of interest. Such approximation is less than optimal because it is not accurate over the entire possible range of pile-up conditions. Further, it requires knowledge as to the precise time of occurrence of the preceding pulse, which is difficult to obtain using analog signals. Additionally, this technique consumes a large amount of computational capacity. Yet another problem encountered in the conventional detection and processing of valid light events is the effect of signal noise on accurate event location processing. In particular, direct current (DC) drifts or other sources of noise may alter the signals from the photosensor devices significantly enough to cause the calculation of the spatial location of an event to be unacceptably A known prior art solution to this problem is disclosed in commonly assigned U.S. Pat. No. 5,847,395 (hereinafter referred to as the "'395 patent"), which is incorporated by reference herein in its entirety. The '395 patent discloses the use of a flash analog-to-digital converter (FADC) associated with each photosensor device (e.g., a photomultiplier tube (PMT)) and a data processor that integrates the FADC output signals, generates a fraction of a running sum of output signals, and subtracts the fraction from the integrated output signals to generate an adjustment signal to correct the output signals for baseline drifts. However, this solution does not address the pile-up problem as it is concerned with energy-independent locational computation. Therefore, there exists a need in the art for a solution that eliminates the effects of system and event-related noise as well as addresses the problem of pulse pile-up. SUMMARY [0011] These and other deficiencies of the prior art are addressed by embodiments of the present invention, which generally relates to nuclear medicine, and systems for obtaining images of a patient's body organs of interest. In one embodiment, a method is provided which obtains a digital sample. The method calculates a second derivative of the digital sample and thereafter determines when the second derivative passed through a zero crossing point. A master clock value and the second derivative value before and after the second derivate passes through the zero crossing point are used to calculate a clock fraction and add the clock fraction to the master clock value. Thereafter, an event start signal is triggered that initiates signal processing. According to another embodiment, a computer-readable medium having stored thereon a plurality of instructions, the plurality of instructions which, when executed by a processor, cause the processor to perform various steps is provided. In this embodiment, a digital sample of an energy signal from a nuclear imaging detector is obtained. A second derivative of the digital sample is calculated. After the second derivative has reached a maximum value a determination is made when the second derivative has passed through the zero crossing point. Thereafter, a master clock value, and the second derivative before and after the second derivative passed through the zero crossing point are retrieved. A clock fraction is calculated and added to the master clock value to obtain an event start and event start time for accurate association with data from a corresponding detector. According to yet another aspect of the invention, a circuit for determining the start time of a gamma interaction in a nuclear imaging detector, is described herein. The circuit includes a first circuit adapted to obtain a digital sample of an energy signal from a nuclear imaging detector. In the circuit, there are sub-circuits adapted to calculate a second derivative of the digital sample; to determine, after reaching a maximum value, when the second derivative has passed through a zero crossing point; retrieve a master clock value and the second derivative value before and after the second derivative passes through the zero crossing point, calculate a clock fraction and the clock fraction to the master clock value; and trigger, upon said determination that the second derivative has passed through the zero crossing point, an event start signal that initiates signal processing of signals from said nuclear imaging detector. Other embodiments are also provided herein which utilize the second derivative of a digital sample to calculate a fraction of the master clock time (and add the fraction to the master clock time) for a more accurate determination of an even. BRIEF DESCRIPTION OF THE DRAWINGS [0015] So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments. FIG. 1 depicts a flow diagram of an exemplary method according to an embodiment in accordance with this disclosure; FIG. 2 depicts a flow diagram of an exemplary method according to another embodiment in accordance with this disclosure; FIG. 3 depicts an embodiment of an exemplary logic circuit in accordance with this disclosure; FIG. 4 depicts an embodiment of a high-level block diagram in accordance with this disclosure; FIG. 5 depicts an embodiment of a exemplary event logic circuit in accordance with this disclosure; FIG. 6 depicts an embodiment of a high-level block diagram of a computer architecture used in accordance with aspects of the disclosure; and FIG. 7 depicts another flow diagram of an exemplary method according to another embodiment in accordance with this disclosure To facilitate understanding, identical reference numerals have been used, wherever possible, to designate identical elements that are common to the figures. DETAILED DESCRIPTION [0024] Aspects of this disclosure are described herein with respect to a pairing of events in PET systems. However, the description provided herein is not intended in any way to limit the invention to PET systems. Aspects of the material disclosed herein may be utilized in other imaging technologies (e.g., SPECT systems, etc.). In the current digital event detection used in the FORSIGHT detector (U.S. Pat. No. 7,115,880 B2) the point where the second derivative of the digitized energy signal of the event goes from positive to negative is used to determine the clock cycle in which the event started. This point is after the event start but is always a fixed number of cycles after the start so the real start can be determined by subtracting the fixed number from the determined cycle. The relative timing is all that is needed as events are compared to each other and the fixed delay cancels out. This produces a number between 0 and 1 that is added to the clock number when the second derivative was SD_value_clk_(N). This produce a fractional value between clock cycles for the event start time. Other methods of interrelation can be used to find the crossing point. As an example, if a 160 mhz is used to digitize the event energy signal, the starting point in the prior art can be determined to a timing accuracy of the clock period or 1/160 mhz or 6.25 ns. In accordance with embodiments disclosed herein, if the crossing point is calculated to 1/34 of a clock cycle the timing accuracy is 6.25 ns/64 or 0.098 ns. FIG. 1 depicts a method 100 for detecting the occurrence of a light event. The method involves the use of a digital energy signal E_SUM to detect the start of a light event in a scintillator. Such digital energy signal E_SUM is readily obtainable by connecting all of the outputs of the photosensor devices of the system to a summing amplifier, and feeding the output of the summing amplifier to a flash analog-to-digital converter (FADC) as disclosed in the aforementioned '395 patent. Accordingly, no further description of the E_SUM signal will be made, other than to note that in accordance with a preferred embodiment of the invention, the digital E_SUM signal outputted from the FADC is a 12-bit signal sampled at a rate of 160 MHz. In addition, this application incorporates by reference all of the material within U.S. Pat. No. 7,115,880 issued Oct. 3, 2006. Other FADC may be used dependent on the precession required such as a 14-bit signal sampled at a rate of 500 MHz. It is further noted that the method as shown in FIGS. 1 and 2 may be implemented in a number of different ways, such as by software, firmware, digital signal processing (DSP) or a hard-wired digital logic circuit as shown in FIG. 3, which is illustrated for purposes of explanation and exemplification only, and is not intended to restrict the scope of the present invention. Returning to FIG. 1, of method 100. At step 101, a sample E SMO of the instantaneous E_SUM signal is obtained. FIG. 3 depicts an exemplary sample E SMO (see output of 308). Illustratively, the sample E SMO, depicted in FIG. 3, is the sum of three successive output values of the FADC, which may be further processed (e.g., by averaging, filtering or the like). Alternatively, E SMO may be only the instantaneous output of the FADC. At step 103 of method 100, E SMO is compared with a preselected reference value, INDET LV (see FIG. 3), that is greater than the value of E_SUM from the photosensor devices when no light event is present, in order to distinguish the signal from the baseline of the photosensor devices. If E SMO is not greater than the reference value INDET LV, no event is considered to be present and processing returns to obtain the next E SMO sample. If, at step 103, E SMO is greater than the reference value, the method 100 proceeds towards step 105. At step 105 the first derivative of E SMO is calculated (E FD in FIG. 3). If the first derivative is positive (indicating that E SMO is rising) as determined at step 107, then processing advances to step 109 where the second derivative of E SMO is calculated (E SD in FIG. 3). If however at step 107, the first derivative is negative, the method 100 returns to step 101 to obtain the next sample E_SUM signal. At step 112, it is determined whether the second derivative E SD of sample E SMO has reached a maximum or peak value. This can be determined by comparing the instant second derivative value with the immediately preceding value, which can be stored in a buffer. Step 112 increases the likelihood that the second derivative E SD is large enough to be a result of an occurrence of an event and not a result due to noise. If an affirmative determination is made at step 112, the method 100 proceeds towards step 113. If however, a negative determination is made at step 112, the method 100 proceeds towards step 109. At step 113, it is determined whether the second derivative E SD has returned to zero. If at step 113, it is determined that the second derivative E SD has returned to zero then the method 100 proceeds towards step 115. If however it is determined at step 113 that the second derivative E SD has not returned to zero then the method 100 remains at step 113 until the E SD has returned to zero then the method 100 proceeds towards step 115. At step 115, an "event start" trigger signal is enabled (see "EVENT START" in FIG. 3), which accurately indicates the start time of a light event. The "event start" signal can be used to initiate further signal processing of the output signals from the photosensor devices for image construction. The method of digital detection of the start time of a light event as just described provides significantly better accuracy than the conventional analog method where an "event start" signal is simply triggered when the energy signal reaches a predetermined value, such as 40 mV. One of the features of method 100, is determining when the second derivative has changed to/from a positive value to a negative value. For example, when the second derivative is above the preset level, the second derivative is positive. Thereafter, a determination is made whether the second derivative passes through zero. By passing through zero, the second derivative has changed from a positive value to a negative value. As an analog-to-digital converter samples the E_SUM and the second derivative E SD is computed, one sample of the second derivative E SD would be positive and the next sample of the second derivative E SD would be negative. In short, method 100 determines the point when the second derivative E SD passes through zero to determine the start of an event. FIG. 2 is a flow diagram of a second preferred embodiment of the invention, wherein a method 200 is disclosed for detecting the occurrence of a light event even if there is a previous event still present in the E_SUM signal (in other words, in a pulse pile-up situation). At step 201, a sample E SMO of the instantaneous E_SUM signal is obtained. The sample E SMO according to the illustrative embodiment shown in FIG. 3 is the sum of three successive output values of the FADC, which may be further processed (e.g., by averaging, filtering or the like). Alternatively, E SMO may be only the instantaneous output of the FADC. At step 203, E SMO is compared with a preselected reference value, INDET LV (see FIG. 3), that is greater than the value of E_SUM from the photosensor devices when no light event is present, in order to distinguish the signal from the baseline of the photosensor devices. If E SMO is not greater than the reference value INDET LV, no event is considered to be present and processing returns to step 201 to obtain the next E SMO sample. If, at step 203, E SMO is greater than the reference value E SMO, then at step 205 the first derivative of E SMO is calculated (E FD in FIG. 3). If the first derivative is positive (indicating that E SMO is rising) as determined at step 207, then processing advances to step 209 where the second derivative of E SMO is calculated (E SD in FIG. 3). If at step 207, the first derivative is not positive, processing returns to step 201 to obtain the next sample E_SUM signal. At step 213, it is determined whether the second derivative E SD is above a preset threshold level EVENT LV (see FIG. 3) to avoid false triggering of the detector in response to noise. At step 213, if the second derivative E SD is not above the preset level, then processing returns to step 209. If, at step 213, E SD is above the preset level, the process proceeds towards step 215. At step 215 it is determined whether the second derivative has returned to zero. Once it has been determined that the second derivative E SD has returned to zero, the process proceeds towards step 217. If at step 215, the second derivative has not returned to zero, the process remains at step 215 until the second derivative E SD has returned to zero. At step 217, the master clock values for the second derivative value before and after the zero crossing are acquired. Thereafter, the method 200 proceeds towards step 219. At step 219, the information acquired in step 217 is used to calculate the clock fraction (i.e., the time it took from the start of the clock cycle to the point where the second derivative crossed through). The calculated clock fraction is added to the master clock time. Thereafter, the method 200 proceeds towards step 221. At step 221, output event start and event start timing. FIG. 3 is a general block diagram of a logic circuit according to one preferred implementation of the method according to the invention. As shown, the circuit is constructed of a logical connection of latch circuits, adders, subtractors, and comparators, which receive the input signal E_SUM and the various reference values. The combination of comparators, AND gate, and latch at the bottom of FIG. 3 is a circuit for determining whether the sample E SMO is outside of an acceptable energy range or window bounded by values EARLY LL and EARLY UL. If E SMO is outside of the energy window, then a DUMP signal is generated that causes the detector to discard the present E SMO value and to restart processing. In this embodiment of the invention the value of the second derivative on the clock cycle just before and just after the second derivative passes through zero is used to determine a fractional part of the clock cycle that the event occurred in. This is done by (SD_P)/(SD_P-SD_N) (Eq. 1) where "SD_P" represents the second derivative of the clock value of an event when the clock value is positive; and "SD_N" represents the clock value of the second derivative at the next occurring event (which will have a negative value). In positron emission tomography ("PET") imaging systems there is a ring of detectors surrounding a patient. In accordance with aspects of this disclosure, each detector has circuitry which has been re-configured to perform functions depicted in FIG. 3. There is a sync signal transmitted to each detector and synchronized with the master clock that is used to ensure that each master clock has the same time. The master clocks are run by energy ADC's that are generating a clock into the master counter 306. All of the Energy ADC's are clocked at the same time. As a result, greater accuracy is achieved when determining or analyzing an event from opposing detectors. Generally, the time of the event is the time from the master clock plus a fraction of the master clock time from the clock edge to the point where the second derivative passes through zero. The second derivative is used to determine the fraction of the master clock time. Equation 1 is used to calculate distances above and below zero line (i.e., a transition distance from a positive value to a negative value). Specifically, in FIG. 3, a sync align signal 302 and energy ADC Clk signal are transmitted towards a 160 Mhz to ADC clock converter 304. The clock converter 304 transmits a sync reset signal towards a master counter 306. The Master counter 306 receives the sync reset signal and the energy ADC Clk signal; and outputs a master time. Each detector 402 (depicted in FIG. 4 and further described below with respect to FIG. 4) contains circuitry for synchronization of time of all detectors. The master time is stored in a Capture Master Time latch 326. An Energy ADC signal is transmitted towards both E ADC 308 and Energy 316. The E ADC 308 "smoothes" the Energy ADC signal and transmits an Esmooth signal towards both INDET COMPARE 310 and E ADC FD and COMPARE 312. INDET COMPARE 310 receives both Esmooth signal and an energy threshold signal; determines whether Esmooth signal is above energy threshold signal; and transmits an INDET signal, towards an EVENT DETECTION ENABLE 314, indicative of whether the Esmooth signal is above the energy threshold signal. In addition to the Esmooth signal, E ADC FD and COMPARE 312 also receives a First Derivative THRESHOLD signal; determines whether the first derivative is positive; and generates an FD POSITIVE signal towards the EVENT DETECTION ENABLE 314. Upon receipt of both the INDET signal and the FD POSITIVE signal, the EVENT DETECTION ENABLE 314 transmits enable signal DET_ena. ENERGY smooth 316 generates and transmits an Esmooth_SD signal towards FIRST DERIVATIVE 318. ENERGY smooth 316 smoothes the Energy ADC signal. The degree of smoothing by the Energy smooth 316 is (e.g., from 1 point to 4 points) dependent upon the level of noise in the Energy ADC signal. The FIRST DERIVATIVE 318 provides a 1 point to 3 point smooth; and transmits an FD smooth signal towards SECOND DERIVATIVE 320. The SECOND DERIVATIVE 320 provides a 1 point to 4 point smooth; and transmits an SD smooth signal towards SD LEVEL COMPARE 322. In addition to the SD smooth signal, SD LEVEL COMPARE 322 also receives an SD THRESHOLD signal. The SD LEVEL COMPARE 322 compares the received signals and transmits an SD_LV enable signal. SD EVENT NEGATIVE DETECTION 324 receives enable signals DET_ena and SD_LV; and the SDsmo signal (an actual positive/negative value of the second derivative). When the SDsmo is negative, the SD NEGATIVE DETECTION 324 transmits an EVNTstart signal towards the Capture Master Time latch 326, a Latch SD First Negative Value 330, and Latch SD Last Positive Value 332. The SD NEGATIVE DETECTION 324 also transmits the EVNTstart signal (as a delay signal) towards a TIMING DATA register 328. The Capture Master Time latch 326 transmits the master time to upper bits of the TIMING DATA register 328. The TIMING DATA register 328 uses the master time, Delay, and a Fraction of the Master Clock (discussed further below) to generate and transmit a Coincident Timing Output. The Latch SD First Negative Value 330 receives the EVNTstart signal and the SDsmo signal to transmit an SD_N signal towards the Latch SD Last Positive Value 332. The Latch SD Last Positive Value 332 uses the SD_N signal and the EVNTstart signal to transmit an SD_P signal. A Subtractor 334 receives the SD_P and SD_N signals; subtracts the SD_N signal from the SD_P signal; and transmits the difference towards a Divider 336. The Divider 336 divides a received SD_P signal by the received difference between the signals (i.e., SD_P-SD_N) to calculate and transmit the Fraction of the Master Clock. As indicated above, the Fraction of the Master Clock is transmitted to towards the Timing Data register 328. FIG. 4 depicts an embodiment of a high-level block diagram in accordance with this disclosure. Illustratively, the high-level block diagram is high-level block diagram of a PET system 400. There are opposing detectors 402 on either side of a patient. Embodiments, as described above, look for events which occurred simultaneously in opposing detectors 402. Each of the detectors 402 detects a positron 414. As explained above, each detector 402 transmits data received to its respective detector processor 404 for processing; synchronizes its master clock; and transmits EVENT DATA with timing of an event (e.g., towards an Output Controller 406). The Output Controller 406 receives time sync data from an EVENT TIMING MATCH module 408 and EVENT DATA with its timing from the DATA PROCESSOR 404; transmits the received sync data towards the DATA PROCESSOR 404 and the received EVENT DATA with its timing towards the EVENT TIMING MATCH module 408. The EVENT TIMING MATCH module 408 receives control signals from an IMAGE GENERATION PROCESSOR 410; uses the EVENT DATA to match corresponding events; and transmits the paired EVENT DATA (i.e., event data from corresponding detectors) towards the IMAGE GENERATION PROCESSOR 410. The IMAGE GENERATION PROCESSOR 410 transmits an image generated from the data towards the Display 412. FIG. 5 depicts an embodiment of an exemplary event logic circuit 500 in accordance with this disclosure. The event logic circuit 500 includes a MASTER COUNTER 502, INTEGRATION TIMING 504, ENE INTEG 506, TOP INTEG 508, LEFT INTEG 510, PILEUP CORRECTION 512, POSITION CALCULATION 514, and DETECTOR CALIBRATION 516 modules. FIG. 6 depicts a high-level block diagram of a general-purpose computer architecture 600 for performing calculation of the time of an event. For example, the general-purpose computer 600 is suitable for use in performing the methods of FIGS. 1 and 2; and the logic circuit depicted in FIG. 3. The general-purpose computer of FIG. 6 includes a processor 610 as well as a memory 604 for storing control programs and the like. In various embodiments, memory 604 also includes programs (e.g., depicted as a "digital event calculator" 612 for calculating an occurrence of an event and an associated event time) for performing the embodiments described herein. The processor 610 cooperates with conventional support circuitry 608 such as power supplies, clock circuits, cache memory and the like as well as circuits that assist in executing the software routines 606 stored in the memory 604. As such, it is contemplated that some of the process steps discussed herein as software processes may be loaded from a storage device (e.g., an optical drive, floppy drive, disk drive, etc.) and implemented within the memory 604 and operated by the processor 610. Thus, various steps and methods of the present invention can be stored on a computer readable medium. The general-purpose computer 600 also contains input-output circuitry 602 that forms an interface between the various functional elements communicating with the general-purpose computer 600. Although FIG. 6 depicts a general-purpose computer 600 that is programmed to perform various control functions in accordance with the present invention, the term computer is not limited to just those integrated circuits referred to in the art as computers, but broadly refers to computers, processors, microcontrollers, microcomputers, programmable logic controllers, application specific integrated circuits, and other programmable circuits, and these terms are used interchangeably herein. In addition, although one general-purpose computer 600 is depicted, that depiction is for brevity on. It is appreciated that each of the methods described herein can be utilized in separate computers. FIG. 7 is a flow diagram of a second preferred embodiment of the invention, wherein a method 700 is disclosed for detecting the occurrence of a light event even if there is a previous event is still present in the E_SUM signal (in other words, in a pulse pile-up situation). At step 702, a sample E SMO of the instantaneous E_SUM signal is obtained. The sample E SMO according to the illustrative embodiment shown in FIG. 3 is the sum of three successive output values of the FADC, which may be further processed (e.g., by averaging, filtering or the like). Alternatively, E SMO may be only the instantaneous output of the FADC. Thereafter, the method 700 proceeds towards step 704. At step 704, the second derivative of E SMO is calculated. Thereafter, the method 700 proceeds towards step 706. At step 706, it is determined when the second derivative has returned to zero. Once it has been determined that the second derivative E SD has returned to zero, the process proceeds towards step 708. At step 708, the master clock values for the second derivative value before and after the zero crossing are acquired. Thereafter, the method 700 proceeds towards step 710. At step 710, the information acquired in step 708 is used to calculate the clock fraction (i.e., the time it took from the start of the clock cycle to the point where the second derivative crossed through). The calculated clock fraction is added to the master clock time. Thereafter, the method 700 proceeds towards step 712. At step 712, output event start and event start timing. The invention having been described, it will be apparent to those skilled in the art that the same may be varied in many ways without departing from the spirit and scope of the invention. In particular, while the invention has been described with reference to photomultiplier tube photosensor devices, the inventive concept does not depend upon the use of PMTs and any acceptable photosensor device may be used in place of a PMT. Further, any suitable gamma detector may be used in place of a scintillation crystal. Finally, the circuit of FIG. 3 is but one example of an implementation of the invention. As previously explained the digital event detection may be performed by a programmable computer loaded with a software program, firmware, ASIC chip, DSP chip or hardwired digital circuit. Any and all such modifications are intended to be included within the scope of the following claims. While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. Patent applications by Roger E. Arseneau, Buffalo Grove, IL US Patent applications by SIEMENS MEDICAL SOLUTIONS USA, INC. Patent applications in class Measured signal processing Patent applications in all subclasses Measured signal processing User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20120101779","timestamp":"2014-04-19T04:52:01Z","content_type":null,"content_length":"70052","record_id":"<urn:uuid:ad4d6de8-abfc-46bf-a029-2aacf76ded1c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Firestone Algebra 1 Tutor Find a Firestone Algebra 1 Tutor ...I have over 100 hours of training with this agency. The training taught me study skills for all types of subjects, tests, and learning styles. I taught the Biology section of the MCAT for a national test prep agency for over 1 year. 26 Subjects: including algebra 1, reading, geometry, biology ...This experience includes my past work with high school students to prepare for AP exams, tutoring sessions held through university tutoring centers, as well as my prior work as a teaching fellow for an advanced undergraduate chemistry course while in graduate school. My primary focus as a tutor ... 8 Subjects: including algebra 1, chemistry, calculus, geometry ...I am married to my wonderful wife Helen and together we raise three fine young children who attend school locally in the St Vrain Valley school district. I appreciate your contact, and I look forward to working with you, your student, and/or friends on improving math skills, understanding, and s... 8 Subjects: including algebra 1, geometry, GED, trigonometry ...I was a site leader on an Alternative Breaks trip through CU Boulder focusing on youth education with 5th graders and a public achievement coach at Centaurus High School working with sophomores on youth empowerment. These experiences have taught me how to help students apply their subjects to th... 39 Subjects: including algebra 1, chemistry, physics, reading ...My tutoring style is using hints, facts, and questions to guide student's thinking. Please do NOT expect me to provide answers directly as that only hinder's a learning process. Please only contact me if you interested in learning rather than ONLY needing help completing an assignment as I will not be doing any problems for you. 7 Subjects: including algebra 1, chemistry, algebra 2, ACT Math
{"url":"http://www.purplemath.com/Firestone_algebra_1_tutors.php","timestamp":"2014-04-20T04:29:57Z","content_type":null,"content_length":"23938","record_id":"<urn:uuid:fef44365-b34e-49c2-ad94-dccf8dd077f1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Alice Programming Tutorial, by Richard G Baldwin Relational expressions Relational operators are used to create relational expressions. Evaluation rules regarding relational expressions are given below. Assume that A and B are numeric variables or expressions that evaluate to numeric values: A < B evaluates to true if A is algebraically less than B, and false otherwise. A > B evaluates to true if A is algebraically greater than B, and false otherwise.
{"url":"http://www.dickbaldwin.com/alice/Slides/Alice0170ag.htm","timestamp":"2014-04-21T13:45:11Z","content_type":null,"content_length":"1363","record_id":"<urn:uuid:372509d9-11f1-40a1-ba49-e0b17562cbd4>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Conservation of Angular Momentum A collapsing Star [b]1. After a collapsing star decreases its radius to half its initial size, predict what will happen to its angular velocity(Assume uniform density at all times)Your answer describes the high angular speed of neutron stars. Find the change in the rotational kinetic energy of the star. Where does the energy come from? I=2/5MR^2 I think the energy comes from the force of the stars own gravity, I also thought that the angular velocity would increase but I don't know how to find the change in kinetic energy. Any clear explanation would be great, I think I am on the right track but need a little help. 3. The attempt at a solution
{"url":"http://www.physicsforums.com/showthread.php?t=195201","timestamp":"2014-04-19T04:37:38Z","content_type":null,"content_length":"23061","record_id":"<urn:uuid:581dbf5c-49cf-4174-8a43-7051c320f7e3>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Rahns, PA Algebra 1 Tutor Find a Rahns, PA Algebra 1 Tutor ...However, it was revived again in 1995 when I migrated to Freeport, Bahamas. During stay there, I also started a plumbing supply import-export business for contractors. This was purely by accident, as I traveled to the US monthly to secure fabrics, etc. for my business. 51 Subjects: including algebra 1, Spanish, English, reading ...Tutoring math is my profession! I have had 15 years of tutoring experience beginning immediately after I received my degree in Mathematics, having a 3.9 GPA. I started my career in my college's learning lab working with college students of all levels. 10 Subjects: including algebra 1, calculus, geometry, ASVAB ...It was amazing to see how excited these kids were to learn about science and now, reflecting on this experience, I see the impact I can have on people directly. My passion for math and science has given me more than expertise in those subjects, but also a strong analytical mind that is essential... 16 Subjects: including algebra 1, Spanish, calculus, physics ...Thank you for your time.In biology, I teach cell anatomy, cellular processes, and human anatomy. I enjoy teaching Spanish. I learned Spanish by watching television, and I am very fluent in speaking it. 10 Subjects: including algebra 1, chemistry, Spanish, biology ...I have a master's degree in educational leadership and a bachelor's degree in mathematics. Due to my years of experience, I have compiled quite a variety of strategies that make math easier for all students to understand. I truly love tutoring because I watch that light bulb "go on" in my students' heads. 13 Subjects: including algebra 1, geometry, ASVAB, GED Related Rahns, PA Tutors Rahns, PA Accounting Tutors Rahns, PA ACT Tutors Rahns, PA Algebra Tutors Rahns, PA Algebra 2 Tutors Rahns, PA Calculus Tutors Rahns, PA Geometry Tutors Rahns, PA Math Tutors Rahns, PA Prealgebra Tutors Rahns, PA Precalculus Tutors Rahns, PA SAT Tutors Rahns, PA SAT Math Tutors Rahns, PA Science Tutors Rahns, PA Statistics Tutors Rahns, PA Trigonometry Tutors Nearby Cities With algebra 1 Tutor Charlestown, PA algebra 1 Tutors Congo, PA algebra 1 Tutors Creamery algebra 1 Tutors Delphi, PA algebra 1 Tutors Eagleville, PA algebra 1 Tutors Englesville, PA algebra 1 Tutors Fagleysville, PA algebra 1 Tutors Gabelsville, PA algebra 1 Tutors Graterford, PA algebra 1 Tutors Gulph Mills, PA algebra 1 Tutors Linfield, PA algebra 1 Tutors Morysville, PA algebra 1 Tutors Trappe, PA algebra 1 Tutors Valley Forge algebra 1 Tutors Zieglersville, PA algebra 1 Tutors
{"url":"http://www.purplemath.com/Rahns_PA_algebra_1_tutors.php","timestamp":"2014-04-16T07:58:31Z","content_type":null,"content_length":"23929","record_id":"<urn:uuid:8a818d52-ae63-4503-8e48-dd4fc23a7dfb>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 577 1) Prepare 100mL of 10%w/w HCl from 37%concentrated HCl(density of 37%HCl =1.19g/ml) 2) Prepare 100mL of 10%w/v HCl from 37% concentrated HCl(density of 37%HCl=1.19g/ml) Can you please show the calculation steps for 1) & 2)? Is 10%w/v HCl the same as 10%w/w HCl? Thanks. English - thank you ms sue i will wait i will wait and thanks a lot English - please help i had posted 3 times and no please can you make correction in my short story. i am very bad at grammar and even my parents does not khnow english well. can you proof reed my short story. Love-Hate story Happy birthday Khushi , her family said as they came to her room at midnight. Thank y... eng lit please proof read my short story please and make some correction Love-Hate story Happy birthday Khushi , her family said as they came to her room at midnight. Thank you, Mom, Dad and my lovely sister , Khushi said with a huge smile. Khushi laid down on he... Oh, sorry. 25 g of Zinc and 30 g of sulfur. I already found that Zinc is the limiting reagent, but I'm stuck on the rest. Zinc and sulfur react to form zinc sulfide according to the equation Zn+S=>ZnS a) How many grams of ZnS will be formed? b) How many grams of the excess reactant will remain after the reaction is Physics help. If a curve with a radius of 94.5 m is perfectly banked for a car traveling 72.4 km/h, what must be the coefficient of static friction for a car not to skid when traveling at 96.4 km/h? The genes for hair color and eye color are linked r=40-2t shows the relationship between the average rate of speed, r, of a car in meters per second, and the time, t,in seconds, as the car slows down. College Physics A Fraunhofer diffration pattern is produced on a screen 210cm from a single slit. The distance from the center of the central maximum to the first-order maximum is 7560 times the wavelength (lambda). Calculate slit width. Answer in __m College Physics The lines in a grating are uniformly spaced at 1530 nm. Calculate the angular separation of the second order bright fringes between light of wavelength 600nm and 603.11nm. Answer in __mrad f someone scored two Standard Deviations above the Mean on a standardized test where the Mean = 100 and the Standard Deviation = 15, then that person s numerical score would be ? A. 125 B. 130 C. 160 D. 90th Percentile Chidren,Families,and Society Larry is a very aggressive four-year-old child.he watches many television programs that contain violence.it's most likely that Help needed! I'm comparing France and the USA. If France had the Paris vs. Versaille conflict with regards to the city capital. As much as capitals showed the unity amongst the people, it also produced conflicts between different sectors of society. What did the United Sta... How to prepare 100mL of 3% w/w Hydrogen Peroxide from 30% w/w Hydrogen Peroxide?Specific Gravity of 30%w/w Hydrogen Peroxide is 1.11g/mL. Can you please show the calculation steps? Thanks. okay. thank you :) Thank you. :) Though I was thinking, should it be "full potential"? or just the potential.. I mean, urban life is still moving forward so there's no way of telling that it has reached its full potential, right? :) Or maybe I'm just overthinking. Well, basically, here is what I've got so far.. I believe that the industrial revolution promoted urban life. To a certain point, people would a associate urban living to better living. Migration from rural areas to urban areas was popular back then. During the industrial ... Cities and urban areas were a source of economic opportunities in the post-Medieval Western world. How was the full potential of Western cities and urban areas in this regard was realized during the Industrial Revolution? Chilsren,Families,and Society 18.Which of the following statements implies rules of confidentality?A.You can tell the librarian that you heard that Joey's father was arrested for drug use.B.Parents must give permission to release school records to the child's therapist at the community clinic. C. y... How do I prepare 100mL of 10g/L HCl from concentrated HCl(37%HCl, density 1.19g/mL) Is it 2.28mL of conc HCl dilute to 100mL with water? Please show the calculation steps. t(s)=0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0 v(m/s)=0.0,4.6,8.9,13.4,18.0,22.9,27.6,32.0 find the area underneath the graph between t=2.0s and t=6.0s, i know i will have to break this down into two separate shapes, a rectangle and a triangle then add them together, i just don't kn... The product of two consecutive intergers is 56. Find the intergers. m_dot = 2000 kg/h q = 111600 kJ/h T_final = 95C C_p = 3.9 kJ/(kgC) equation: q = m_dot*c_p*(T_initial-T_final) plug in all the value and solve for T_initial *NOTE* remeber deta_T both in C and K is the same. So don't worry Can u help me find a picture of a sea sponge budding ? Aluminum metal generates H2 gas when dropped into 6.00 M HCl. Calculate the mass of H2 that will form from the complete reaction of 0.365 g Al with 8.40 mL of 6.00 M HCl. Determine the values of k for which the function f(x)= 4x^2-3x+2kx+1 has two zeros , check these values in the original equation Determine the values of k for which the function f(x)= 4x^2-3x+2kx+1 has two zeros , check these values in the original equation Factor completely x(x+1)^2-4(x+1) (2x+3)^2-9 solve the equation (1/4)^x = (1/8)^2x-1 solve the equation 3^-x+2=27^x-4 Why is a pizza cutter handle a lever? sorry i meant arteries name three major blood vessels that branch off the aortic arch and the body part the blood is carried to name two major artries that branch off the dorsal aorta and the organ they carry blood to for a fetal pig why is it difficult to find the urinary bladder by describing the specific location ? what structures in the umbilical cord remove waste materials from a fetal mammal? The fetal pig does not have functional lungs, since the lungs are not being used is there any need to transport blood tot he lungs before the fetus is born I did a pig dissection , but i am not to sure about these questions, any help would be great ! 1. The lungs of a fetal pig serve as a protection for what important body organ 2. The respiratory system is located on the ventral side of the pigs body, what 3 ways is the tract ph... in 2nd year Adam owed $ 977.53, in the 3rd year he owed $ 1036.18 and in the 4th year he owed $1098.35 how much was the loan originally, and determine the future value of the loan after 10 years Loan #1 Year Amount owed 1 $3796 2 $3942 3 $4088 Loan # 2 Year Amount owed 1 $977.53 2 $1036.18 3 1098.35 For loan #1 is simple interest. Loan #2 is compound interest How much was each loan originally Determine the future value of each loan after 10 years No one has answered t... Vlad purchased some furniture for his apartment. The total cost was $ 2943.37. He paid $ 850 down and financed the rest for 18 months. At the end of the finance period .Vlad owed 4 2147.28 . What annual interest rate, compounded monthly , was he being charged ? Round your answ... oh sorry 6%/a Sally invests some money at 65/a compounded annually. After 5 years . she takes the principal and interest and reinvests it all at 7.2%/a compounded quarterly for 6 more years. At the end of this time, her investment is worth $ 14 784.56 . How much did Sally originally invest? on the day Rachel was born, her grandparents deposited $ 500 into a savings account that earns 4.8%/a compounded monthly . They deposited the same amount on her 5th, 15th, and 10th birthdays. determine the balance in the account on Rachel's 18th birthday oh sorry 45%/a Bermie deposited $ 4000 into an account that pays 45/a compounded quarterly during the first year. The interest rate on this account is then increased by o.2% each year. Calculate the balance in Bernie's account after three years. Sara buys a washer and a dryer for $2112.She pays $500 and borrows the remaining amount. A year and a half later she pays off the loan, which compounded semi-annually, was Sara being charged. Thank you very much! I appreciate your help! :) A student rides on a train traveling at Vt = 36ikm/hr (with the respect to the ground). What are the speed and the velocity of the student with respect to the ground if he runs at 5.0im/s? -5.0 m/s? 5.0j m/s perpendicular to the direction of motion of the train? Tia is investing $ 2500 that she would like to grow to $ 6000 in 10 years, at what annual interest rate compounded quaterly must Tia invest her money Nazir saved $900 to buy a plasma tv, he borrowed the rest at an interest rate of 18%/a compounded monthy, 2 years later he paid $ 1420.50 for the principal and the interest, how much did the tv originally cost Dieter deposists $ 9000 inoan account that pays 10%/a compunded quarterly, after 3 threes the inerest rate changes to 9%/a compounded semi-annually, calculate the value of his investments 2 years after this change Sima invests $ 4240.00 in the first year, $ 4494.40 in the second year , and $ 4764.06 in the third year , how much did sima invest A simple fractal tree Grows in stages. A each new stage, two new line segments branch out from each segment at the top of the tree. The first five stages are shown. How many line segments need to be drawn to create stage 20? Stage 1- 1 Stage 2 - 3 Stage 3 - 7 Stage 4 -15 Stage... George invests $ 15 000 at 7.2%/a compounded monthly . How long will it take for his investment to grow to $ 34 000 Serena wanst to borrow $15 000 an pay it back in 10 years. Interest rates are so high, so the bank makes her 2 offers option 1 - borrow the money at 12%/a compounded quarterly for the full term option 2 -Borrow the money at 12%/a compunded quarterly for 5 years and then renego... How long would you have to invest $5300 at 7.2%/a simple interest to earn $1200 interest How much money must be invested at 6.3%/a simple interest to earn $250 in interest each month world religions I really need help with thses questions ! Any help would be great true or false- Moses was a wealthy egyptian what is Yavneh and why was it so important? find the area of the region enclosed by y=squareroot of x, the line tangent to y= squareroot of x at x=4 and the x-axis the 65th term in a geometric sequence is 8000, and the 69th term is 500. find the 64th term the 10th term of an arithmetic series is 34, and the sum of the first 20 terms is 710, determine the 25th term. in the video game gemetric constructors a number of the shapes have to be arranged into a predefined form , in level 1, you are given 3 min 20s to complete the task, at each level afterward, a fixed number of seconds are removed from the time until, at level 20 , 1 min 45s are... Jamal got a job working on a assembly line in a toy factory, on the 20th day of work, he assembled 137 toys, he noticed that since he started , every day he assembed 3 more toys then the day before , how many toys did jamal assemble all together during his first 20 days To Annie: Re: Penn Foster I"ve been asking for help to see if my anaswers were correct, the work was my own .I'm not the only student asking for alittle help. This isn't cheating!!!! Penn Foster, Language and Other Barries 6. to say that all African American males are fast runners is an example of A.prejudice B.sexism C.sterotype D.bias Penn Foster, Language and Other Barries D.keep the lines of communication open between teachers, students, and parents. Penn Foster, Language and Other Barries Would the answer to question 11.,be D ? Penn Foster, Language and Other Barries I think the answer is C Penn Foster, Language and Other Barries 11.As a teacher in an ESL classroom,it's within your responsibilities to A.meet parents one-on-one to discuss their child's progress B.provide language translation for the teacher when reequested C.determine when an ESL, student is ready to be mainstreamed into a &quot... Penn Foster, Language and Other Barries 11.As a teacher aide in an ESL classroom,it's within your responsibility to physical science pour a liter of water at 40 c into a liter of water at 20 and the final temperature of the two becomes about____________________c Language and Other Barries 8.In general,the best type of multicultural classroom materials are those that Early Child Educationand After-School Day Care Whi are pre-primer programs desinged for? calculate the distance it traveled before it is brought to rest from 44 km/h if the speed of a truck going south is reduced from 80 km/h to 44 km/h in a distance of 250 m. can anyone answer these questions i have to say all of them tomorrow in spanish. 1. ¿Cuántos estudiantes hay en la clase de español? 2. ¿Hay cuadernos en tu mochila? 3. ¿Qué es? 4. ¿Quién es? 5. ¿De quién es... can you please look over my answers? clauses Sarah throws rocks into a quarry lake from the top of a 67 foot high wall. The chart gives the horizontal distance, x (in feet), the rock has travled from Sarah and the height, y (in feet), of the rock above the lake. The chart is: |distance, x:|9| 19| 36|50| |height, y: |75.3... English III are you doing odyssey ware to?? i just had this same question pop up Teacher Aide in the Special Education Setting When tutoring learning challenged settings,learning challenged students are most often placed in A.resource rooms B.the nurse's office C.self-contained classrooms D.phychiatric clinics social studies, Anthro psych , and soc In the movie Shawshank redemption, how did the prison in the film try to rehabilitate its inmates, was it successful? Provide 3 examples of ways in which the prison guards enforced compliance to prison rules, were the enforcements necessary or effective? Why was Red continuall... 10tan theta= .5+1/2costheta + 2sintheta How do I simplify the right side of the equation. Technology in the Classroom A software package that you would like to run on your classroom computer lists a 486 computer as a minimum requirement. On which of the following computers would you expect the software to be able to Instructional Materials :Their preparation and use which machine would you use to project a color picture from a book? A.Bioscope B.Slide projector C.overhead projector D.Opaque projector Instructional Maferials: Their Preperation and Use Why would you mount an overhead transparency in a cardboard frame? A. to make copies of it B. To color it C. To add overlays D. To project it onto a screen Technology in the Classroom Which of the following scenarios most likely describes the future of technology in education? Technology in the Classroom What is the ideal computer arrangement to use when you want to integrate computers directly into the classroom curriculum? Technology in the Classroom Technology in the Classroom There are several different hard drives available for a computer.Which of the following size hard drives is the largest? yay! thanks! What is the volume of a cone that has a radius of 3 cm and a height of 7 cm? (use 22/7 for pi) I got 594 cubic cm, but I have a feeling I'm terribly off. you could collect differrent types of skin moisturizers, and cover them. Then, ask test subjects to use them, each person not knowing which mouisterizer is which, and ask them to write down how they feel after using the mouisterizers. Or, you could test them on yourself. wait. are the brackets around the numbers saying to find their absolute value, or is that just ot seperate them? Suppose you were to swim into an underwater cave while keeping the same depth below the surface of the water. Would the water pressure around you increase, decrease, or remain the same? a. The pressure would increase. b. The pressure would decrease. c. The pressure would remai... That's what I thought at firts,but then I kept thinking I ;eft something out.... Thanks! Can you simplify the fraction 63/323? I've tried.... That is the wrong answer. a 160 g hockey puck slides horizontally along the ice, and the coefficient of kinetic friction between puck and ice is 0.15. if the puck a tarts sliding at 3 m/s, how far will it travel before it comes to a stop? a 160g hockey puck slides horizontally along the ice, and the coefficient of kinetic friction between puck and ice is 0.15. if the puck starts sliding at 3 m/s before it comes to a stop it travels 3.6m how do you get to this answer? 1. A boy drags a wooden crate with a mass of 20 kg, a distance of 12 m, across a rough level floor at a constant speed of 1.50 m/s by pulling on the rope tied to the crate with a force of 50 N. The rope makes an angle of 25¡ã with the horizontal. a.What is the mag... 8 mL of 0.0100 M HCl are added to 22 mL of 0.0100 M acetic acid. What is the pH of the resulting solution? Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Annie&page=3","timestamp":"2014-04-16T17:39:40Z","content_type":null,"content_length":"29326","record_id":"<urn:uuid:718a17e4-2252-481f-83ef-e8741c5249c1>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Definition Noticing number patterns. Repeating number series, Number order, Number patterning Why Teach The process of pattern recognition facilitates learning and significantly enhances academic performance by enabling students to predict outcomes, to organize the world around them, and to establish relationships for meaning. □ Analyzing math problems to introduce operations □ Constructing a house □ Designing art work □ Determining numeric probability □ Observing nature □ Planning for retirement □ Planning to purchase a home □ Predicting a recession □ Predicting g.p.a. □ Projecting weight loss or gain □ Solving math puzzles □ Understanding the national debt Students will be able to: □ Name and give examples of different types of recurring numeric patterns (see "Background Information on Numeric Patterns"). □ Recognize numeric patterns in their environment. Metacognitive Objective Students will be able to: □ Reflect upon their thinking processes when using this skill and examine its effectiveness. Skill Steps 1. Analyze the relationship among adjacent numbers. Look for a recurring pattern. 2. Hypothesize a pattern structure. 3. Test your hypothesis (see "Background Information on Numeric Patterns"). 4. If a pattern does not appear, look for a different pattern (see "Background Information on Numeric Patterns"). 5. Repeat steps 1-4 as necessary. Metacognitive Step 6. Reflect upon the thinking process used when performing this skill and examine its effectiveness: ☆ What worked? ☆ What did not work? ☆ How might you do it differently next time? □ Debrief - review and evaluate process, using both cognitive and affective domains to achieve closure of the thinking activity. □ Metacognition - the act of consciously considering one's own thought processes by planning, monitoring, and evaluating them (thinking about your thinking). □ Pattern - an organizational arrangement □ Fibonacci series - see "Background Information on Numeric Patterns". Possible Procedure for Teaching the Skill General Strategy 1. Define the skill and discuss its importance. 2. Introduce and model a repetitive numeric pattern from "Background Information." 3. Practice the pattern. 4. Repeat steps 2 & 3 with each numeric pattern from "Background Information on Numeric Patterns," providing activities that allow discrimination among patterns learned. 5. Debrief: Discuss techniques students used to arrive at conclusions. Include both triumphs and tragedies (facilitations and roadblocks). Note: A general strategy can be given to students that will help them identify which type of series is used (these are explained, in detail, in "Background Information on Numeric Patterns." ☆ If the series increases or decreases rapidly, look for a multiplication, division or exponential series (the V technique). ☆ If the series does not increase or decrease rapidly, apply the next level of the V technique, looking for a solution. ☆ If no solution can be found, look for a combined addition and multiplication series. ☆ If no solution can be found, look for a Fibonacci series. ☆ If no solution can be found, look for another pattern. Primary Procedure 1. Define repetitive numeric patterning and discuss its purpose. 2. Place ten piles of beans in a line; the first pile having one bean, the second having two beans, the third having three beans... and so on up to ten beans in the last pile. 3. Ask students to count the beans in the first and second pile to compare the numbers. 4. Repeat #3 with the second and third pile, the third and fourth pile, etc. 5. Explain to students that adding one bean each time creates a repetitive numeric pattern. 6. Tell students that being able to identify patterns can help them learn math. 7. Go through skill steps with students and show them how skill steps apply to figuring out the bean pattern. 8. Repeat bean procedure with a +2 pattern. Help students apply the skill steps to figure out the pattern. 9. Debrief students on the process, the definition, and the importance of this skill. Integrating the Skill into the Curriculum Understanding of non-linguistic recurring numeric patterns can be facilitated by using the overhead projector and charts showing the numbers 1 to 100 in boxes on the chart. Involve the class in finding and describing numerical patterns on the chart. Begin by covering the chart with a blank transparency and then marking all the even numbers with a blue dot. Ask the class to explain how you arrived at the resulting pattern. Depending on their experiences, they may explain it as "plus two" or "even numbers." Remove the marked transparency. On a clean transparency, mark in yellow the numbers in intervals of three. After discussion, overlay the two sheets and note that +2 and +3 addition patterns emerge. Mark these points with a red square, and ask students if they can discover a new pattern. This can be continued with the class, and then with clean, duplicated copies of the 1-to-100 chart allow students to construct their own patterns. Have them compare and check identified patterns with fellow students. Introduce and teach recurring numeric pattern recognition both as an interesting intellectual skill and as a test-taking device. Teach students the numeric problem-solving processes and the common types of problems they are likely to encounter. Ask students to create problems using two or three of these common types plus one or two types that they make up. Have students exchange their "made-up" problems and try to solve each other's. Using a transparency or ditto of a 1-to-100 chart, blank out a pattern of numbers. Ask students to fill in missing numbers. Have a student describe the weather (weather chart, calendar). Background Information Patterns can be found in all forms of non-linguistic information. The more students can "see" patterns in non-linguistic information, the more they can organize their world. This skill can assist the brain with its natural tendency to organize information into patterns. The ability to perceive repeating patterns in various settings is important for the learner. Aptitude tests frequently include problems involving numeric patterns. Everyday problems can often be solved by recognizing the recurring patterns within them. Understanding that the environment contains countless examples of obvious and subtle repeating patterns introduces the learner to the elegant nature of our universe. There are many types of non-linguistic patterns. In this unit we will consider a few of those patterns--those commonly found on aptitude tests and other forms of tests. However, if students are not presented with anything more than this procedure, all they are learning is a relatively sophisticated test-taking technique. The ultimate goal of having students solve number series problems is to have them identify different types of numeric patterns. Students should be encouraged to find other types of patterns in numeric series. We will consider numeric patterns, especially those identified by David Lewis and James Greeene (Thinking Better, New York: Holt, Rinehart and Winston, 1982) as important for aptitude tests. More Background Information about Numeric Patterns Additional Resources Jacobs, Harold R. Mathematics: A Human Endeavor. New York.: W.H. Freeman and Co., 1982. Chapter 2, pp 57-118. Lewis, David and James Green. Thinking Better. New York: Holt, Rinehart, and Winston, 1982. Marzano, Robert J. and Daisy E. Arredondo. Tactics for thinking. Colorado: Mid-Continent Regional Educational Laboratory, 1986.
{"url":"http://media.bethelsd.org/website/resources/static/thinkingSkillsGuide/skills/recog_num_pat.htm","timestamp":"2014-04-16T21:51:38Z","content_type":null,"content_length":"14231","record_id":"<urn:uuid:26798002-ff25-4060-be2c-a7c407f6fa42>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
Delphi, PA Math Tutor Find a Delphi, PA Math Tutor ...This combination of real world and teaching experience allows me to enthusiastically demonstrate the relevance of the subject to all students! I specialize in tutoring math and physics. My teaching philosophy includes keeping students challenged, but not overwhelmed. 3 Subjects: including calculus, chemistry, physics ...I thought back on my first teaching experience back when I was in college. I took part in this program where students from my university taught a group of public school children how to make a model rocket and how it worked. I remember I got the chance to instruct a small group of children on the names of all the parts of the rocket and a basic explanation of how they functioned. 16 Subjects: including precalculus, algebra 1, algebra 2, calculus ...Whether it be elementary, secondary, or college level, I will address your specific needs. I was tutor for two years at Penn State, hired by the Biology Department to tutor groups of 3-6 students for an hour at a regularly scheduled time each week. I have tutored introductory biology, ecology, and molecular/cellular biology. 16 Subjects: including algebra 1, SAT math, trigonometry, prealgebra ...While the basic understanding of shapes and size is fairly straight forward, students often find themselves overwhelmed by the various rules for proofs in geometry. I aim to help the student understand what the theorems and axioms mean physically, so the student will better understand the concep... 9 Subjects: including algebra 1, algebra 2, chemistry, geometry ...I am certified to teach grades 7-12, and I have experience tutoring for high stakes tests. My education is rooted in literature, writing pedagogy, and communications. Whether you need help with your college or graduate school applications, improving your SAT grades, or writing your graduate the... 15 Subjects: including ACT Math, English, reading, grammar Related Delphi, PA Tutors Delphi, PA Accounting Tutors Delphi, PA ACT Tutors Delphi, PA Algebra Tutors Delphi, PA Algebra 2 Tutors Delphi, PA Calculus Tutors Delphi, PA Geometry Tutors Delphi, PA Math Tutors Delphi, PA Prealgebra Tutors Delphi, PA Precalculus Tutors Delphi, PA SAT Tutors Delphi, PA SAT Math Tutors Delphi, PA Science Tutors Delphi, PA Statistics Tutors Delphi, PA Trigonometry Tutors Nearby Cities With Math Tutor Center Square, PA Math Tutors Clayton, PA Math Tutors Congo, PA Math Tutors Englesville, PA Math Tutors Gabelsville, PA Math Tutors Linfield, PA Math Tutors Morysville, PA Math Tutors Neiffer, PA Math Tutors Niantic, PA Math Tutors Plymouth Valley, PA Math Tutors Salford Math Tutors Salfordville Math Tutors Spring Mount Math Tutors Worman, PA Math Tutors Zieglersville, PA Math Tutors
{"url":"http://www.purplemath.com/delphi_pa_math_tutors.php","timestamp":"2014-04-19T17:29:30Z","content_type":null,"content_length":"23855","record_id":"<urn:uuid:66f10b59-6682-48f0-9ba3-5e39053d8abb>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
Scale, Elevation, Grid, and Combined Factors Used in Instrumentation How Things Work: Scale, Elevation, Grid, and Combined Factors Used in Instrumentation Professional Surveyor Magazine - February 2006 Surveyors deal with small but important adjustments that their modern instrumentation can automatically do for them, but are not always sure exactly how to deal with this. Some of these adjustments are for variations in the speed of light or radio waves through the atmosphere due to temperature, barometric pressure and relative humidity, prism constants, antenna constants, horizontal and vertical collimation errors, height of standards (also called trunnion axis) error, and errors in horizontal and vertical angles due to the instrument not being perfectly level. One adjustment that is often ignored, but shouldn't be, is what is often referred to as "scale error." As it stands "scale error" may mean different things in different situations, geographic locations, and even instrumentation. Not only must the theory of how scale error is computed and applied be understood, but that understanding must then be applied to all the work the surveyor does in the field and in the office, particularly when trying to match field results to inversed valued computed from grid coordinates. Part of the reason for needing to be concerned about knowing how scale factors are developed and applied comes from using State Plane Coordinates. Whether one uses a total station or GPS or both, scale factor cannot be ignored. In addition, we have the added complication that GPS positioning (at its core) is computed on an ellipsoid, which means that some understanding of geodetic surveying is required by practitioners who may have only practiced plane surveying, and who may not have not been exposed to geodetic concepts and ways to implement them in field procedures. Many people think that "geodetic surveying" refers to surveying that is extremely accurate; that may be so, but it doesn't have to be, so long as it takes the curvature of the earth's surface and the ellipsoid into The challenge for local surveyors is to use GPS without misunderstanding what the scale factors are, so that errors aren't compounded. This only gets more complicated when combining an instrument that produces "plane" measurements at ground level if no corrections are applied with an instrument that generates ellipsoidal measurements. This is the situation with the combination of total station and GPS. In Figure 1 you see one step of the scale factor process, assuming that one is converting a ground distance to a grid distance. Using the mean radius of the earth and the elevation of the end points of the ground distance, the sea-level or geodetic distance may be computed. This factor can be called the elevation factor, although other names may be used. In Figure 2, this sea-level distance is then projected onto a developable surface; that is, a geometric shape that can be made into a plane. A sphere cannot be made into a plane. An ellipsoid cannot be made into a plane, but a cylinder or cone, if you imagine it to be made of paper, can be cut and laid out flat. Regardless of whether it is State Plane or Universal Transverse Mercator (UTM) coordinates, this conversion of individual distances to grid distances must be done. To get from sea-level distance to grid distance, the constants applicable to the particular State Plane Coordinate or Universal Transverse Mercator System zone must be used to determine what is often called the grid factor, though other names may be used. Then, using conventional coordinate geometry, the grid coordinates of the end points of the lines can be determined. The two-step conversion may be combined into one by multiplying the elevation and grid factors together to produce what is often called a "combined" factor. This number must be used guardedly as it only applies for similar elevations and X and Y values in the grid system. Of course a tie to points whose grid coordinates are known must be done. Even if the coordinate system used is an arbitrary one, these reductions must be done to properly combine total station and GPS It gets more problematic when doing stake-out, or checking the positions of existing monuments whose grid coordinates are known. If a measurement is made, for example, with an EDM, the EDM distance must either be adjusted to grid and then compared with the inversed grid coordinates. Alternatively, the inversed distance can be converted to ground by dividing by the combined factor and compared with the ground distance. It is particularly important that in the process of doing these calculations that surveyors not multiply coordinates by these scale factors; ONLY distances must be multiplied by these factors. Geomatics Industry Association of America (GIA) is an organization of manufacturers, suppliers, and distribution partners, en-compassing the present and emerging technologies which address customer needs in surveying, GPS, engineering, construction, GIS/LIS, and related fields by providing leadership in training and education to enhance the efficiency and effectiveness of their related business. www.giaammerica.org » Back to our February 2006 Issue
{"url":"http://www.profsurv.com/magazine/article.aspx?i=1550","timestamp":"2014-04-18T18:12:01Z","content_type":null,"content_length":"30733","record_id":"<urn:uuid:0852f18d-0779-41b9-a299-487822021afb>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
MathFiction: Monster's Proof (Richard Lewis) a list compiled by Alex Kasman (College of Charleston) Home All New Browse Search About Monster's Proof (2009) Richard Lewis With parents and a younger brother who are all "mathematical geniuses", Livey Ell (who is in danger of getting kicked out of cheerleading unless she improves her algebra grades) is a bit too normal. Things get out of control when her brother's computations brings Bob, a mathematical creature from Hilbert space, to life. This science fiction novel aimed at a young adult audience includes brief discussions of the Riemann Hypothesis (perpetuating the myth that it would somehow be upsetting if zeroes of the zeta function were to be discovered off the critical line), Hilbert space, Pythagorean cults and fractals (in the form of a "fractal sword"). Additionally, there are religious overtones to the story which develop when some of the characters are discovered to be angels and/or demons. The author knows quite a bit about mathematics (he insists that the highpoint of his mathematical education was truly understanding the definition of the limit in calculus, but has clearly read a lot of popular mathematics since then), and so much of this rings true. He includes some famous mathematical jokes and anecdotes and presents characters that (while stereotypical) reflect some knowledge of the mathematical community. I don't think he knows what a Hilbert space is (besides that it is a cool sounding name), and suspect his notion of "the Hilbert space of all Hilbert spaces" does not even make mathematical sense. (What would be the inner product of two Hilbert spaces? I seriously doubt it would be complete under any reasonable definition.) But, that doesn't prevent the book from being fun and worth reading. Buy this work of mathematical fiction and read reviews at amazon.com. (Note: This is just one work of mathematical fiction from the list. To see the entire list or to see more works of mathematical fiction, return to the Homepage.) Works Similar to Monster's Proof According to my `secret formula', the following works of mathematical fiction are similar to this one: 1. Mother's Milk by Andrew Thomas Breslin 2. Evil Genius by Catherine Jinks 3. Napier's Bones by Derryl Murphy 4. Panda Ray by Michael Kandel 5. Doctor Who: The Algebra of Ice by Lloyd Rose (pseudonym of Sarah Tonyn) 6. The Living Equation by Nathan Schachner 7. Jack and the Aktuals, or, Physical Applications of Transfinite Set Theory by Rudy Rucker 8. Numbercruncher by Si Spurrier (writer) / PJ Holden (artist) 9. On the Quantum Theoretic Implications of Newton's Alchemy by Alex Kasman 10. Dude, can you count? by Christian Constanda Ratings for Monster's Proof: Content: Have you seen/read this work of mathematical fiction? Then click here to enter your own votes on its mathematical content and literary quality or send me comments to post on 5/5 (1 votes) this Webpage. Literary Quality: 3/5 (1 votes) Genre Humorous, Science Fiction, Fantasy, Children's Literature, Motif Prodigies, Insanity, Proving Theorems, Female Mathematicians, Math Education, Religion, Medium Novels, Home All New Browse Search About Your Help Needed: Some site visitors remember reading works of mathematical fiction that neither they nor I can identify. It is time to crowdsource this problem and ask for your help! You would help a neighbor find a missing pet...can't you also help a fellow site visitor find some missing works of mathematical fiction? Please take a look and let us know if you have seen these missing stories (Maintained by Alex Kasman, College of Charleston)
{"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf899","timestamp":"2014-04-18T18:10:42Z","content_type":null,"content_length":"10454","record_id":"<urn:uuid:b48c2bc3-5386-4119-a0f4-7ebbc66f5796>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Rounding values to the nearest fraction in MS Excel Excel’s ROUND() function works great if you want to round of a number to the nearest whole number. But there may be situations where we want to Round off to some nearest fraction. Say, you want to Round off a number to the nearest quarter. How do you do that? Let’s see with an example. Lets say we have a value $1.65 and we want it to be displayed as $1.75.(or if it was $1.60 it should be displayed as $1.50). We can still use Excel’s ROUND() function to achieve this but with a little twist. Assuming that the value is in cell A1, the following formula will do the trick: The formula divides the original value by 0.25, rounds it up and then multiplies the result by 0.25. Rounding off to other fractions is equally easy. For example, to round off to the nearest 5 cents, simply substitute 0.05 for each of the two occurrences of 0.25 in the above formula. I would be interested to know if there are other ways to do this. 1. Zoeloo says A BIG BIG THANK YOU FOR YOUR TIPS!! GOD BLESS YOU!!!!!! 2. vikas says hi.. can u advise if this is possible. for eg.12.10/12.09/12.25 it should be rounded of to the next number i.e. 13. plz advise i understand that if its lesser than .49 it remains to the same number and above .50 it jumps to the next number. 3. Prahlad Sharma says verry verry useful me your formul number change in words in excel sheet indian format 4. yelbaut says 5. Narendra Patel (Himatnagar) says Verry usefull me . 6. HARVINDER MUDGAL says i wnat a value to next higher rupee if the value is 36.01 i want it to be 37 please send formula 7. Sunil says can anyone tell me how to copy text and figure automaticaly in another cell in excel 8. AmanV says thanks a lot…this is exactly what i wanted… keep up with the good work! 9. Abhilash says Can anyone tell me how to round off values in A1 through C86 cells in one shot? That’s 256 cells at one go. Excel 2007 doesn’t seem to have that feature. 10. kuber says Lokesh bhai…. try his…. for any round for 12…. =round(12,0) … for me it automatically came 12 in case of 12.3 and 13 incase of 12.7… hope this helps 11. Lokesh says Sir, Give me the IF condition to round off like: 12.49 or less should be rounded to 12 and 12.5 or more should be rounded to 13. Please reply as soon as possible. Lokesh Joshi 12. Lokesh says Sir, I wanted to round off a number. if number is 12.4 then it should be rounded to 12 and if it is 12.5 or more then 13. Give me the If condition so that I can apply it over a range of cells. 13. Marshal says This formula really helper. Thanks dude 14. s.goswami says Thanks for the addin.. it exactly met my requirements.. thanks once again. 10/07/2007 15. Mikeee says Nice job, that really helped me. Leave a Reply Cancel reply
{"url":"http://dq.winsila.com/tips-tricks/ms-office-tips/rounding-values-to-the-nearest-fraction-in-ms-excel.html","timestamp":"2014-04-24T10:29:02Z","content_type":null,"content_length":"41717","record_id":"<urn:uuid:eba2b860-fe40-4e77-bced-a8d07a0a17ac>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
Tuple Definition Discussion Extension of . How exactly is a tuple different than a map (dictionary)? Tuples do not have keys. They are 1-dimensional, static arrays. Arrays are a sub-set of maps. The "keys" are (usually) sequential integers. I am not sure "array" is well-defined either. Different languages have varying features in their array implementation and interfaces. I should have said vector instead of array, which is the more correct terminology. My C++ past at work. In any case, while in the strict sense arrays can be considered a subset of maps (because they're indexed by integer), it's a useless definition for explaining why tuples aren't maps. Perhaps this is a LaynesLaw issue. Positional "parameters" can be used to represent named parameters and visa versa, for example. It is a matter of convenience and circumstances which is preferred. This is because a position can stand for only an integer, or can represent something else such as a color. It just requires a conversion between position and color either explicitly or by convention. <rant enjoyable="true" irrelevant="true" unproductive="definitely"> • It is not irrelevant because such discussion can spark alternative ways of thinking about or implementing something, such as dynamic RDBMS (DynamicRelational). -- top I blame the s for this one. In most branches of mathematics, a "tuple" is a fixed-length, ordered, unlabelled sequence of things (technically, a "tuple" is an abstract concept that requires a size for its instantiation - instead of speaking of a "tuple", you instead often hear of a "triple" or a "5-tuple" or a " -tuple"). However, in the , the word "tuple" is used to mean what most mathematicians call a "record" - an unordered, labelled aggregation of things. And of course, it's always amusing that in much of the industrial database literature (in particular, documentation for numerous RDBMS products as well as SQL), the word "record" is often used for the individual rows within a table (here referring to the data structure in many programming languages, not the mathematical concepts) often prompting database theoreticians like to tsk-tsk-tsk the authors of such literature for their sloppy terminology - such a thing ought to be called a tuple, according to them. But when you consider the centuries of mathematical history that occurred before started writing about the - who's right and who's wrong? </rant> Actually there's no need to blame for this one. Nor there is need for , unless Top desperately seeks yet another fight for a definition to circularly prove his points. • I only focused on it is an issue when others got picky about its usage. Otherwise, I would have let it be. -- top • Maybe I'm wrong but you did try to imply that in RelationalModel a table is an array of dictionaries or something like this, and when challenged came here to suggest a LaynesLaw fight to suit your purpose. So no, tuple is not a map or dictionary structure, although you can use a dictionary to represent tuples. And defining a table as an array of dictionaries serves no useful purpose. Just let the RelationalModel be defined as DrCodd did it and the database theory used it for decades already. Will you ? • Because DrCodd, although otherwise a genius, was a poor communicator. • And you think you are better ? • I would bet money that more people know what a "map" or "dictionary" is than "tuple". And, yes, I do feel that I could write a more approachable book on relational than DrCodd. His may be academically more precise, but that is not the only audience for such books. I would start by using everyday concepts and analogies, and build on them as we go deeper into the topic. I wish to bring tables to the masses so that they can share in my joy of tables. □ Or you might read ISBN 0764507524 □ DrCodd changed his name? ☆ Uh, no. The point is, that non-technical, non-scholarly database books are a dime a dozen. Oh, and joy of tables? It sounds to me like you're not peddling science; instead you're peddling religion. :) □ [Some are not fond of the term "dictionary" since we all know a dictionary is that book with pages that you flip open. I for one am a bit fed up with all the terms like associative array, dictionary, hash - it just seems people have too many terms to describe the same thing. I'm also annoyed that there is a struct, record, tuple - it seems there is some overlap. I suppose I could just GetOverIt though.] • Forgive me for telling you bluntly but you are either delusional or suffering from UnconsciousIncompetence. Most OO developers whom you fought endlessly on comp.object still hold DrCodd in high esteem, while they consider you more or less of a troll. Your approach certainly had little convincing power out in the wild, therefore your hypothesis was not quite confirmed in practice. But let no facts interfere with your deeply held convictions, write your own book, Top, and see how that goes. Heck, you can even write it on wiki, if you are careful and honest to avoid the confusion between the proper RelationalModel and your private "school of thought" on the subject. You can easily inflate the right to have your own opinion, to the point where you want that to have equal stature with others which have a lot more to show for it. In other words be mindful of EveryoneHasHisOpinion / EveryoneHasHisDefinition fallacy. • I always find that if one says the same thing from different viewpoints, the audience is more likely to cling. No one single way of explaining something will gel with everybody. Thus, if you try multiple approaches you increase the chance of one of them gelling. Explaining tables both in terms of maps and vectors increases the "learning paths" available. As far as pleasing the comp.object crowd is concerned, it is a lost cause. They think OO is the greatest thing since sliced bread and I will never convince them otherwise. Perhaps OO models their pointer-hopping heads. I can't change that, other than make sure they don't extrapolate subjective preferences onto everybody else, which is what I feel they do. -- top • [On a softer note: It's an admirable sentiment, but do be sure in your approach to make clear what is analogy, and where the analogy breaks down. I think you'll find in the end that it will be a more verbose way of explaining something that really isn't very complicated to begin with. I personally have yet to meet a developer who couldn't understand a simple explanation of "tuple" (in the relational sense) without resorting to analogies to more complex data structures. -- DanMuller] □ Seconded. Vectors are very simple, dictionaries are more complicated. Besides, do you really think that "A tuple is a dictionary, except you can only access values by index, and the size is constant, and you can't change the values" is a good definition? Maybe we need JustLikeaFooButDifferent?. □ But the size does not have to be constant, at least not per "row". See DynamicRelational and PageAnchor "Equiv01" below. Does doing that make it non-relational? I don't think so. It is just a different approach, a different view or implementation of the same info. We could enumerate all the possible attribute names in a central spot and implement it positionally, but that is an implementation detail. It is representationally equivalent as far as the output. □ Doing that definitely makes it non-relational. That conflicts with the very definition of relation. Go check it. Since what you can represent this way is superset of relations, the output of your operators could only be "equivalent" in cases where the inputs are in fact relations, and you'll have to go through each and every relational operation and define very exactly what it means for your "super-relations". Then you'll have to work at making it all logically consistent, proving that what you end up with is a closed algebra (or that it isn't, and live with the unaesthetic consequences), and explain it well enough to convince others that it is. What you're talking about is a whole new theory, you can't blithely change one of the underpinnings of the relational model and pretend that you're only "tweaking" it - every definition and logical consistency of every operator proceeds from and depends on these fundamental definitions. -- DanM □ How is it a superset? Please show an example. Any set of maps can be turned into a set of tuples and visa versa. As far as I can tell, they are representationally equivalent. (Which is more "convenient" is a separate matter.) □ By "size" I assume that you mean the number of attributes in a tuple. If I've misunderstood, explain. By definition, each tuple in a given relation has the same number of attributes, with the same names. (I'll leave domains aside for the moment.) So one of your tables would only be relational in the case where each one of its rows had the same number of fields, with the same names. Should be obvious without an example, which would be tedious to construct. -- DanM □ Afterthought: If you define the heading of one of your tables to be the union of the headings of all the rows (or conversely restrict your rows to have no attributes that are not defined in the heading of the owning table), and treat each row as if it had this heading, but consider "missing" attributes to be null, then you could indeed call your view relational. You've just described it differently and confusingly. (This interpretation didn't come to me immediately, since I'm firmly in the anti-null relational camp.) -- DanM ☆ There's a pro-null relational camp? :) ☆ Sadly, yes, we haven't managed to nullify them yet. ☆ Of course, this view is only relational if the union of all of the attributes - the maximal set - is fixed. If you can insert arbitrary attributes into arbitrary rows, with no constraints on what can get stuffed where; that can't be considered relational under any definition of the term. ☆ Well, you'd be potentially redefining the relation heading with each insertion. Yup, that definitely puts you in new territory. ☆ This is the "dynamic" approach described in MultiParadigmDatabase (mentioned above already). If it is not clear there to you, you are welcome to add clarification. Relational does not limit tables "growing" columns in the middle of operation that I know of. Many existing RDBMS allow that. We are just using that feature an extreme in a dynamic relational DB. I generally defined a non-explicit column as blank, but if you want to use null it does not change the general concept. It can act just like a static RDBMS, it is just that a list of all (active) columns takes longer to compute. -- top ☆ OK, this can fit in with thoughts that I touched on in DoesRelationalRequireTypes. But I don't like the implications. I'll explain there later tonight if I can make time. (However, I think you're quite wrong about tables changing in mid-operation in either relational theory or existing RDBMS. Data definition operations are distinct and outside of the relational algebra.) -- DanM ☆ If they are outside the definition, then it is not something that can "violate" it, it appears. I would note that I purposely made a non-distinction between empty cells and non-existent cells in dynamic relational proposal (MultiParadigmDatabase). Does this potentially violate relational theory because it creates a potentially infinite set of empty columns, and if so does it contribute to nasty side-effects beyond traditional static-vs-dynamic issues? ☆ Depends on what "outside the definition" means. If A' introduces new requirements on A such that none of the constraints on A are violated, then A' can still be part of A. If A', on the other hand, waives requirements on A, then A' is most definitely not part of A (though it might be the case then that A is part of A'). In the present instances, adding variable schemas like this is in direct contradiction to the RelationalModel, which holds that the shape of tables are fixed. That doesn't mean your idea is bad; it just means it's not strictly "relational" any more (here, defining "relational" to refer to DrCodd's model specifically; not to something that might have similar properties). ☆ {Note that Microsoft's RDBMS allows one to add a column while it is still running. Although, it must be an explicit DBA action, it would probably have similar implications. I don't see how that causes any great logical conundrums. I suppose it may introduce name-space collision possibilities, but these can happen even if you stop and start the RDBMS to add a column.} ☆ That is what I cannot figure out. Does dynamic schemas violate relational theory, or only relational tradition? Has DrCodd or his colleagues ever considered the idea of dynamic schemas, and if they did, did they reject them on practical grounds, on theoretical grounds, etc. ☆ You'll probably have to go tromping through the literature to answer that question (I don't know myself). Based on a back-of-my-hand estimation, I suspect a consistent theory could be built on such a system, but I haven't delved into it deeply. OTOH, it is often the case that relaxing a constraint on some system/model produces undesirable results (making the system inconsistent, making formal manipulation or examination of the system undecidable or intractable, etc.) There is undoubtedly lots of research material out there on dynamic schemas and other relational extensions. ☆ So in summary, you are not necessarily disputing its "relational" qualifications, at least not without researching it further. But, you are suggesting it may have practical downsides (which are probably similar to the heavy-typing versus dynamic arguments/battles already found on this wiki.) --top The existence of overloaded words is acceptable in mathematics, especially since the two usages are related (just think of a definition for isomorphism). So for simple-minded mathematical use an n-tuple is a function f : {1..n} -> X where X can be whatever. For students of set-theory this is technically wrong because a function is a special kind of a relation, and a relation is a set of tuples, so voila, we have a circular definition. Therefore a little technical trick allows us to define the ordered pair: (a,b) := {{a}, {a,b}} // other variations of this trick are possible as well with the basic theorem (a,b) = (a',b') <=> (a=a') /\ (b=b') and then by induction we defined n-tuple to be an ordered pair of (n-1) tuple and the n-th element, and then we prove a similar theorem on equality. The definition of an n-tuple as a function f : {1..n} -> X isn't wrong, it can be made to work perfectly easily within any reasonable SetTheory. Define a relation as a set of maplets, and a maplet a --> b as {{a}, {a,b}}. There are various other ways. But the mathematical trick above, albeit elegant and useful for its purpose is not practical for programming, so what did was to "decorate" the definition, see , with stuff that doesn't change much the mathematical apparatus, but makes it more convenient for programming, namely column names and domains for each columns. The two versions of tuples are perfectly interchangeable, , for example, uses both versions: the authors call that the versus the "perspective" of the same . Some theorems are easier to prove in the unnamed perspective, while others, and especially practical example and exercises are easier to work with in the named perspective. Definitely for programmers the named perspective is generally better, because we don't want to refer to information items by integers rather than names. That's an advantage of the f : {1..n} -> X formulation; it can easily be generalized to f : Name -> X. No need for separate "perspectives". Nice cleanup of the original page, whoever did it. Thanks. -- DanM Tuples impose meaning to order, but columns are not required to have order in relational theory IIRC. Is there a conflict there? It seems to me a map is more appropriate because it does not require an ordering. In functional terms, any "cell" can be referenced via the following function: cellValue = getCell(table, primaryKey, columnName) Whether that is implemented with tuples, maps, or gerbils on treadmills is an implementation detail. If "tuple" is hard-wired into the definition of relation, then perhaps made a of sorts. As far as a "getRow" function, what comes back is also an implementation detail. But if columns have no required order, then where does the order of a tuple come from if we use a tuple in the implementation? We are back to the same issue. Maps seem a better fit because they don't require any more information that is required by the definition. -- top Oh, you just don't get it. Tuple is an abstract mathematical concept. It's a specification. You can implement it with stone tablets if that's what suits you. Or with hashmaps. Or with whatever. Provided whatever you implement matches the mathematical definitions. Now the mathematical definitions for tuple in database theory are two and they are used interchangeably: one is based on 1..n indices and one uses named columns. Again, the one without column names is not used to implement anything nor to limit the programmer in any way shape or form. It's just used to prove some theorems. and FYI DrCodd used the named perspective. So your comments are totally misinformed. If named columns are used, then it is or is equivalent to a map. Done. No, it ain't. Almost done. Maybe I should ask you to provide TopDefinitionForMap?, then who knows maybe you refer to a "map" that is the same as tuple. In functional terms, a map can be defined as: value = getValue(mapInstance, key) setValue(mapInstance, key, value) booleanValue = keyExists(mapInstance, candidateKey) (Bonus options are iterators, getListOfKeys(), clearMap(), etc.) So, no, it's not the same thing. See TupleDefinition and you can convince for yourself that it is not the same thing. Immutability. A tuple is immutable. Further, the order of a tuple only matters so far as the ordering of two related tuples are the same (i.e., (mapcar [1 2 3] square) results in [1 4 9]). In this view, an immutable map may be held to be equivalent to a tuple iff the keys can be completely arbitrary. Therefore: [1 2 3] != [3 2 1] ( [1 2 3]->[1 4 9] ) == ( [3 2 1]->[9 4 1] ) {a:1 b:2 c:3} != {x:1 y:2 z:3} ( {a:1 b:2 c:3}->{a:1 b:4 c:9} ) == ( {x:1 y:2 z:3}->{x:1 y:4 z:9} ) {a:1 b:2 c:3} != [1 2 3] ( {a:1 b:2 c:3}->{a:1 b:4 c:9} ) == ( [3 2 1]->[9 4 1] ) Defend or refute. :) -- WilliamUnderwood What does "immutable" mean? I could not find it in any online math dictionary. That's because it's a computing concept. Something is immutable if it can't be changed. In imperative languages you constantly change the value given to a variable. In Functional languages you (often) can't, or don't. If something can't have its value changed, it's immutable. That explains why the only search hits I got were complaints about Java's immutable strings. Anyhow I fail to see how immutability is a big deal. In practice, rows are obviously mutable, and one can use the concept of an immutable map if need be to describe functional versions of the world. -- TopMind You need to re-read . Technically, rows (and tables!) are mutable. If they were, they would then have , etc. What happens (at the theoretical level) is that a database is a collection of s, and that these relvars are the only mutable things in the database. When you do an UPDATE operation, what theoretically happens is that a relation is created, containing the stuff in the old relation plus whatever changes, and the relvar is then bound to this new relation (the old one may be discarded). In many cases, the relation is only referenced in one place (through the relvar), so an in-place update can be done (in , relations in a database are ). Of course, all of this is transparent to the client. -- ScottJohnson This sounds like the "no side-effects" concept from functional programming. Whether it is updating or replacing is perhaps an EverythingIsRelative issue. Maybe you don't really move your arm, but God quickly replaces an old you with a new one but with a changed arm position. Either way, it looks the same to us. The side effect is that the relvar changes. The whole idea of no side effects in functional programming has only confused the industry and done damage IMO. A computer has a hard drive and memory. The whole point of using a computer is to have side effects galore. A relvar is very very mutable. : Equiv01 Here's an example to clarify the equivalency issue: A: // missing equiv to "null" <row foo="zock" bar=7> <row bar=5 <row foo="ff"> B: // (traditional) <row foo="zock" bar=7> <row foo=null bar=5> <row foo="ff" bar=null> -top See also:
{"url":"http://c2.com/cgi-bin/wiki?TupleDefinitionDiscussion","timestamp":"2014-04-19T10:38:04Z","content_type":null,"content_length":"25953","record_id":"<urn:uuid:90c1042e-acb8-4c57-aaf7-a56003990df4>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
GMAT Data Sufficiency Tips Probably on every standardized test you have taken in your life, you have had multiple choice math problems — that’s exactly what the Problem Solving questions are. No big surprise there. The other type of question, though, Data Sufficiency, is a beast unique to the GMAT. I would argue that the Data Sufficiency section is extraordinarily apt for the GMAT, as it tests uniquely managerial skills . On the GMAT Quantitative section, you get 75 minutes for 37 questions —- of these 37 questions, approximately 14-16 will be Data Sufficiency questions. If you are just encountering this question for the first time, here are some tips that will help you negotiate this often confusing and confounding question format. When you’re done having a look at them, check out our GMAT eBook, to help you navigate the rest of the test. GMAT Data Sufficiency Tip #1: memorize the answer choices The five answer choices are always the same. Know this cold. If someone breaks into your residence at night and wakes you by throwing cold water on you, and when you wake up in utter shock and disorientation they ask you to recite the five DS answer choices, you should be able to rattle them off with no problem. That’s how well you should know them. Here are those five answer choices. 1. Statement 1 alone is sufficient but statement 2 alone is not sufficient to answer the question asked. 2. Statement 2 alone is sufficient but statement 1 alone is not sufficient to answer the question asked. 3. Both statements 1 and 2 together are sufficient to answer the question but neither statement is sufficient alone. 4. Each statement alone is sufficient to answer the question. 5. Statements 1 and 2 are not sufficient to answer the question asked and additional data is needed to answer the statements. GMAT Data Sufficiency Tip #2: use elimination and guessing to your advantage On a Problem Solving question, if you eliminate one answer, the other four are still there. While guessing the answer from one of four is slightly better odds than guessing the answer from one of five, this is not much of a gain. By contrast, you can often eliminate answers two or three at a time on the Data Sufficiency. Suppose you can make a sensible decision about one of the two statements, and the other statement is completely incomprehensible to you. Here’s what you can do: Case One: Statement #1 is sufficient on its own, Statement #2 is incomprehensible Eliminate: B, C, E Only possible answers: A, D Case Two: Statement #1 is not sufficient on its own, Statement #2 is incomprehensible Eliminate: A, D Only possible answers: B, C, E Case Three: Statement #1 is incomprehensible, Statement #2 is sufficient on its own Eliminate: A, C, E Only possible answers: B, D Case Four: Statement #1 is incomprehensible, Statement #2 is not sufficient on its own Eliminate: B, D Only possible answers: A, C, E In all four cases, you can eliminate at least two answers, so even if you guess randomly from the remaining answers, the odds are very much in your favor. GMAT Data Sufficiency Tip #3: focus on the sufficiency question On Problem Solving questions and in most traditional math, the focus is: find the answer. That’s not the focus in Data Sufficiency. The focus in DS is: could you find the answer? In other words, do you have enough information to be able to find the answer? That’s the sufficiency question, and which answers choices you choose or eliminate depends on answers you find to the sufficiency questions for each statement. The prompt for a Data Sufficiency problem is always a question, either “find the value of blah blah blah” question or “is x blah blah blah”, either a “value” question or a “yes-no” question. Especially if the prompt is a “yes-no” question, do not confuse the answer to the prompt question with the answer to the sufficiency question. Let’s consider a very easy example to demonstrate this difference. 1. Is x > 5? Statement #1: blah blah blah Statement #2: x = 3 This question is would be way too easy to appear on the GMAT, but let’s consider it. Statement #2 tells us, definitively, that x = 3. Therefore, we can give a clear and resounding “no” answer to the prompt question. But that is NOT the answer to the sufficiency question. If we are in a position to give a clear and definitive answer to the prompt question, any answer as long as it’s clear and certain, that means the information was sufficient, and therefore the answer to the sufficiency question is “yes”, regardless of whether the definitive answer to the prompt question was “yes” or “no.” By contrast, a “no” answer to the sufficiency question means something very different — this always means we were not able to determine anything definitive at all about the prompt — that is, you can’t get any clear answer of any sort to the prompt question. If you can get an answer to the prompt, any answer, that’s sufficient. GMAT Data Sufficiency Tip #4: don’t calculate an exact answer if you don’t have to This is a continuation of the previous tip. Data Sufficiency is all about — “could you find the answer?” Suppose the prompt is “What is the value of x?”, a standard DS problem. Suppose, in the course of solving something this problem, you get to a step like 23x + 144 = 5670. The benighted student with poor managerial instincts will dutifully work through the several steps necessary for finding the actual value of x — without a calculator. The skilled GMAT test taker would realize: “From the step 23x + 144 = 5670, I could solve for x. That, in and of itself, answers the sufficiency question right there, and in answering the sufficiency question, I am done. The actual value of x is irrelevant to answering the question.” BTW, this idea has some profound applications in Euclidean geometry. GMAT Data Sufficiency Tip #5: consider the statements separately first On the Data Sufficiency, you have to first consider each statement separately —- consider whether each statement, by itself, on its own, all alone, is sufficient. Only if both statements are not sufficient separately would you consider the sufficiency of the information in the combined statements. One of the biggest mistake folks make in the Data Sufficiency format is to read Statement #1, do some calculations or deduction, and then carry all that information with them when they consider Statement #2. This is particular tempting if Statement #1 contains all kinds of juicy information with fascinating logical implications. People get wrapped up in the “what if Statement #1 is true” world, and they have trouble leaving that world behind when it comes time to pay attention to statement #2 alone and by itself. One helpful strategy: whichever statement is simpler, consider that one first. The GMAT loves making Statement #1 a huge, complicated, juicy statement and Statement #2 something incredibly brief. If that’s the case, consider Statement #2 first. You have to consider the two statements separately, but there’s no law saying in which order you need to consider them. Choose the simpler one GMAT Data Sufficiency Tip #6: be smart about picking number Many folks, before studying for the GMAT, haven’t thought about math for a while. For these folks, it’s often the case that if the problem uses the word “number”, their minds default to a very limited set. Often, that set consists of only the natural numbers, a.k.a the counting numbers —- {1, 2, 3, 4, …}. People forget that a “number” could be positive or negative or zero, could be a fraction, could be a square root, could be pi or some other decimal, etc. etc. The possibilities are literally infinite. The GMAT loves to test number properties, and one of the principle reasons the test makers love this is that people without number sense make all kinds of predictable mistakes. The statement might say, for example, x > 7, and people will read that and assume in hordes that x must be 8 or more, totally forgetting that there is an entire continuous infinity of decimals between 7 and 8. When picking numbers for variables, people predictably pick positive whole numbers and predictably forget negative integers, positive fractions, negative fractions, etc. etc. Be smart about picking numbers on Data Sufficiency. Always have absolutely every kind of number in mind when you are analyzing a Data Sufficiency question. Here’s a practice question from inside the Magoosh product: 2. http://gmat.magoosh.com/questions/928 Let us know below how you do! 4 Responses to GMAT Data Sufficiency Tips 1. Leandro Portela February 22, 2013 at 5:24 am # we have a situation! I was very confident, before reading this post, that if I encounter a question, in the test day, like this you just posted I could without fear assign statment 2 as 100% sufficient since it provides enough information to answer the target question is: X > 5? No, x is not greater than 5 since x = 3. So that´s enough to me! But I was confused when you said that “A “no” answer to the sufficiency question always means — we were not able to determine anything definitive about the prompt.” I hope I didn´t make any misguided assumption due to a language barrier. But what you meant by saying this is that always when the answer to the target question is “no” this statment will be insufficient even if it provides enough info to answer that question? It seems contradictory to me! Can you please add more light to this matter? warm regards, □ Mike February 22, 2013 at 11:42 am # Leandro — I believe something got muddled between what I said and what you heard. I retyped the end of that paragraph, trying to add a little more clarity. Your original understanding was perfectly correct. If you *can* give a clear answer to the prompt — either a clear “YES” or a clear “NO”, that’s sufficient —- it’s a resounding “YES” to the sufficiency question. What I was trying to say in the sentence you quoted — the *opposite* case, the case in which you have to answer “NO” to the sufficiency question, would be a case in which *no clear answer* of any sort was possible. I was making a contrast at the end of that paragraph, and I went back to try to make the contrast clearer. Your original understanding is 100% correct: let me know whether you see this reflected in what the paragraph now says. 2. three four February 20, 2013 at 12:55 am # there s some good points here □ Mike February 20, 2013 at 9:55 am # Dear “three four” — Thank you for your kind words. Best of luck to you. Leave a Reply Click here to cancel reply.
{"url":"http://magoosh.com/gmat/2013/gmat-data-sufficiency-tips/","timestamp":"2014-04-18T13:06:48Z","content_type":null,"content_length":"71022","record_id":"<urn:uuid:4007d92b-1bd4-4ffa-915e-4fae067c8dad>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00561-ip-10-147-4-33.ec2.internal.warc.gz"}
1.1 Mathematics and its Notation A distinguishing feature of mathematics is the use of a complex and highly evolved system of two-dimensional symbolic notations. As J.R. Pierce has written in his book on communication theory, mathematics and its notations should not be viewed as one and the same thing [Pierce1961]. Mathematical ideas exist independently of the notations that represent them. However, the relation between meaning and notation is subtle, and part of the power of mathematics to describe and analyze derives from its ability to represent and manipulate ideas in symbolic form. The challenge in putting mathematics on the World Wide Web is to capture both notation and content (that is, meaning) in such a way that documents can utilize the highly-evolved notational forms of written and printed mathematics, and the potential for interconnectivity in electronic media. Mathematical notations are constantly evolving as people continue to make innovations in ways of approaching and expressing ideas. Even the commonplace notations of arithmetic have gone through an amazing variety of styles, including many defunct ones advocated by leading mathematical figures of their day [Cajori1928]. Modern mathematical notation is the product of centuries of refinement, and the notational conventions for high-quality typesetting are quite complicated. For example, variables and letters which stand for numbers are usually typeset today in a special mathematical italic font subtly distinct from the usual text italic. Spacing around symbols for operations such as +, -, × and / is slightly different from that of text, to reflect conventions about operator precedence. Entire books have been devoted to the conventions of mathematical typesetting, from the alignment of superscripts and subscripts, to rules for choosing parenthesis sizes, and on to specialized notational practices for subfields of mathematics (for instance, [Chaundy1954], [Swanson1979],[Swanson1999], [Higham1993], or in the T[E]X literature [Knuth1986] and [Spivak1986]). Notational conventions in mathematics, and in printed text in general, guide the eye and make printed expressions much easier to read and understand. Though we usually take them for granted, we rely on hundreds of conventions such as paragraphs, capital letters, font families and cases, and even the device of decimal-like numbering of sections such as we are using in this document (an invention due to G. Peano, who is probably better known for his axioms for the natural numbers). Such notational conventions are perhaps even more important for electronic media, where one must contend with the difficulties of on-screen reading. However, there is more to putting mathematics on the Web than merely finding ways of displaying traditional mathematical notation in a Web browser. The Web represents a fundamental change in the underlying metaphor for knowledge storage, a change in which interconnectivity plays a central role. It is becoming increasingly important to find ways of communicating mathematics which facilitate automatic processing, searching and indexing, and reuse in other mathematical applications and contexts. With this advance in communication technology, there is an opportunity to expand our ability to represent, encode, and ultimately to communicate our mathematical insights and understanding with each other. We believe that MathML is an important step in developing mathematics on the Web. 1.2 Origins and Goals 1.2.1 The History of MathML The problem of encoding mathematics for computer processing or electronic communication is much older than the Web. The common practice among scientists before the Web was to write papers in some encoded form based on the ASCII character set, and e-mail them to each other. Several markup methods for mathematics, in particular T[E]X [Knuth1986], were already in wide use in 1992 just before the Web rose to prominence, [Poppelier1992]. Since its inception, the Web has demonstrated itself to be a very effective method of making information available to widely separated groups of individuals. However, even though the World Wide Web was initially conceived and implemented by scientists for scientists, the possibilities for including mathematical expressions in HTML has been very limited. At present, most mathematics on the Web consists of text with images of scientific notation (in GIF or JPEG format), which are difficult to read and to author, or of entire documents in PDF form. The World Wide Web Consortium (W3C) recognized that lack of support for scientific communication was a serious problem. Dave Raggett included a proposal for HTML Math in the HTML 3.0 working draft in 1994. A panel discussion on mathematical markup was held at the WWW Conference in Darmstadt in April 1995. In November 1995, representatives from Wolfram Research presented a proposal for doing mathematics in HTML to the W3C team. In May 1996, the Digital Library Initiative meeting in Champaign-Urbana played an important role in bringing together many interested parties. Following the meeting, an HTML Math Editorial Review Board was formed. In the intervening years, this group has grown, and was formally reconstituted as the first W3C Math Working Group in March 1997. The second W3C Math Working Group was chartered in July 1998 with a term which was later extended to run to the end of the year 2000. The MathML proposal reflects the interests and expertise of a very diverse group. Many contributions to the development of MathML deserve special mention, some of which we touch on here. One such contribution concerns the question of accessibility, especially for the visually handicapped. T. V. Raman is particularly notable in this regard. Neil Soiffer and Bruce Smith from Wolfram Research shared their experience with the problems of representing mathematics in connection with the design of Mathematica 3.0; this expertise was an important influence in the design of the presentation elements. Paul Topping from Design Science also contributed his expertise in mathematical formatting and editing. MathML has benefited from the participation of a number of working group members involved in other mathematical encoding efforts in the SGML and computer-algebra communities, including Stephen Buswell from Stilo Technologies, Nico Poppelier at first with Elsevier Science, Stéphane Dalmas from INRIA (Sophia Antipolis), Stan Devitt at first with Waterloo Maple, Angel Diaz and Robert S. Sutor from IBM, and Stephen M. Watt from the University of Western Ontario. In particular, MathML has been influenced by the OpenMath project, the work of the ISO 12083 working group, and Stilo Technologies' work on a "semantic" mathematics DTD fragment. The American Mathematical Society has played a key role in the development of MathML. Among other things, it has provided two working group chairs: Ron Whitney led the group from May 1996 to March 1997, and Patrick Ion, who has co-chaired the group with Robert Miner from The Geometry Center from March 1997 to June 1998, and since July 1998 with Angel Diaz of IBM. 1.2.2 Limitations of HTML The demand for effective means of electronic scientific communication remains high. Ever increasingly, researchers, scientists, engineers, educators, students and technicians find themselves working at dispersed locations and relying on electronic communication. At the same time, the image-based methods that are currently the predominant means of transmitting scientific notation over the Web are primitive and inadequate. Document quality is poor, authoring is difficult, and mathematical information contained in images is not available for searching, indexing, or reuse in other applications. The most obvious problems with HTML for mathematical communication are of two types. Display Problems. Consider the equation This equation has a descender which places the baseline for the equation at a point about a third of the way from the bottom of the image. One can pad the image like this: Image-based equations are generally harder to see, read and comprehend than the surrounding text in the browser window. Moreover, these problems become worse when the document is printed. The resolution of the equations as images will be around 70 dots per inch, while the surrounding text will typically be 300, 600 or more dots per inch. The disparity in quality is judged to be unacceptable by most people. Encoding Problems. Consider trying to search this document for part of an equation, for example, the "=10" from the first equation above. In a similar vein, consider trying to cut and paste an equation into another application; even more demanding is to cut and paste a sub-expression. Using image-based methods, neither of these common needs can be adequately addressed. Although the use of the alt attribute in the document source can help, it is clear that highly interactive Web documents must provide a more sophisticated interface between browsers and mathematical notation. Another problem with encoding mathematics as images is that it requires more bandwidth. Markup describing an equation is typically smaller and more compressible than an image of the equation. In addition, by using markup-based encoding, more of the rendering process is moved to the client machine. 1.2.3 Requirements for Mathematics Markup Some display problems associated with including mathematical notation in HTML documents as images could be addressed by improving image handling by browsers. However, even if image handling were improved, the problem of making the information contained in mathematical expressions available to other applications would remain. Therefore, in planning for the future, it is not sufficient merely to upgrade image-based methods. To integrate mathematical material fully into Web documents, a markup-based encoding of mathematical notation and content is required. In designing any markup language, it is essential to consider carefully the needs of its potential users. In the case of MathML, the needs of potential users cover a broad spectrum, from education to research, and on to commerce. The education community is a large and important group that must be able to put scientific curriculum materials on the Web. At the same time, educators often have limited time and equipment, and are severely hampered by the difficulty of authoring technical Web documents. Students and teachers need to be able to create mathematical content quickly and easily, using intuitive, easy-to-learn, low-cost tools. Electronic textbooks are another way of using the Web which will potentially be very important in education. Management consultant Peter Drucker has prophesied the end of big-campus residential higher education and its distribution over the Web. Electronic textbooks will need to be interactive, allowing intercommunication between the text and scientific software and graphics. The academic and commercial research communities generate large volume of dense scientific material. Increasingly, research publications are being stored in databases, such as the highly successful physics and mathematics preprint server and archive at Los Alamos National Laboratory. This is especially true in some areas of physics and mathematics where academic journal prices have been increasing at an unsustainable rate. In addition, databases of information on mathematical research, such as Mathematical Reviews and Zentralblatt für Mathematik, offer millions of records on the Web containing mathematics. To accommodate the research community, a design for mathematical markup must facilitate the maintenance and operation of large document collections, for which automatic searching and indexing are important. Because of the large collection of legacy documents containing mathematics, especially in T[E]X, the ability to convert between existing formats and any new one is also very important to the research community. Finally, the ability to maintain information for archival purposes is vital to academic research. Corporate and academic scientists and engineers also use technical documents in their work to collaborate, to record results of experiments and computer simulations, and to verify calculations. For such uses, mathematics on the Web must provide a standard way of sharing information that can be easily read, processed and generated using commonly available, easy-to-use tools. Another general design requirement is the ability to render mathematical material in other media such as speech or braille, which is extremely important for the visually impaired. Commercial publishers are also involved with mathematics on the Web at all levels from electronic versions of print books to interactive textbooks and academic journals. Publishers require a method of putting mathematics on the Web that is capable of high-quality output, robust enough for large-scale commercial use, and preferably compatible with their previous, often SGML-based, production 1.2.4 Design Goals of MathML In order to meet the diverse needs of the scientific community, MathML has been designed with the following ultimate goals in mind. MathML should: • Encode mathematical material suitable for teaching and scientific communication at all levels. • Encode both mathematical notation and mathematical meaning. • Facilitate conversion to and from other mathematical formats, both presentational and semantic. Output formats should include: □ graphical displays □ speech synthesizers □ input for computer algebra systems □ other mathematics typesetting languages, such as T[E]X □ plain text displays, e.g. VT100 emulators □ print media, including braille It is recognized that conversion to and from other notational systems or media may entail loss of information in the process. • Allow the passing of information intended for specific renderers and applications. • Support efficient browsing of lengthy expressions. • Provide for extensibility. • Be well suited to template and other mathematics editing techniques. • Be human legible, and simple for software to generate and process. No matter how successfully MathML may achieve its goals as a markup language, it is clear that MathML will only be useful if it is implemented well. To this end, the W3C Math Working Group has identified a short list of additional implementation goals. These goals attempt to describe concisely the minimal functionality MathML rendering and processing software should try to provide. • MathML expressions in HTML (and XHTML) pages should render properly in popular Web browsers, in accordance with reader and author viewing preferences, and at the highest quality possible given the capabilities of the platform. • HTML (and XHTML) documents containing MathML expressions should print properly and at high-quality printer resolutions. • MathML expressions in Web pages should be able to react to user gestures, such those as with a mouse, and to coordinate communication with other applications through the browser. • Mathematical expression editors and converters should be developed to facilitate the creation of Web pages containing MathML expressions. These goals have begun to be addressed for the near term by using embedded elements such as Java applets, plug-ins and ActiveX controls to render MathML. However, the extent to which these goals are ultimately met depends on the cooperation and support of browser vendors, and other software developers. The W3C Math Working Group has continued to work with the working groups for the Document Object Model (DOM) and the Extensible Style Language (XSL) to ensure that the needs of the scientific community will be met in the future, and feels that MathML 2.0 shows considerable progress in this area over the situation that obtained at the time of the MathML 1.0 Recommendation (April 1998). 1.3 The Role of MathML on the Web 1.3.1 Layered Design of Mathematical Web Services The design goals of MathML require a system for encoding mathematical material for the Web which is flexible and extensible, suitable for interaction with external software, and capable of producing high-quality rendering in several media. Any markup language that encodes enough information to do all these tasks well will of necessity involve some complexity. At the same time, it is important for many groups, such as students, to have simple ways to include mathematics in Web pages by hand. Similarly, other groups, such as the T[E]X community, would be best served by a system which allowed the direct entry of markup languages like T[E]X into Web pages. In general, specific user groups are better served by specialized kinds of input and output tailored to their needs. Therefore, the ideal system for communicating mathematics on the Web should provide both specialized services for input and output, and general services for interchange of information and rendering to multiple media. In practical terms, the observation that mathematics on the Web should provide for both specialized and general needs naturally leads to the idea of a layered architecture. One layer consists of powerful, general software tools exchanging, processing and rendering suitably encoded mathematical data. A second layer consists of specialized software tools, aimed at specific user groups, which are capable of easily generating encoded mathematical data that can then be shared with a particular audience. MathML is designed to provide the encoding of mathematical information for the bottom, more general layer in a two-layer architecture. It is intended to encode complex notational and semantic structure in an explicit, regular, and easy-to-process way for renderers, searching and indexing software, and other mathematical applications. As a consequence, raw MathML markup is not primarily intended for direct use by authors. While MathML is human-readable, which helps a lot in debugging it, in all but the simplest cases it is too verbose and error-prone for hand generation. Instead, it is anticipated that authors will use equation editors, conversion programs, and other specialized software tools to generate MathML. Alternatively, some renderers and systems supporting mathematics may convert other kinds of input directly included in Web pages into MathML on the fly, in response to a cut-and-paste operation, for In some ways, MathML is analogous to other low-level, communication formats such as Adobe's PostScript language. You can create PostScript files in a variety of ways, depending on your needs; experts write and modify them by hand, authors create them with word processors, graphic artists with illustration programs, and so on. Once you have a PostScript file, however, you can share it with a very large audience, since devices which render PostScript, such as printers and screen previewers, are widely available. Part of the reason for designing MathML as a markup language for a low-level, general, communication layer is to stimulate mathematical Web software development in the layer above. MathML provides a way of coordinating the development of modular authoring tools and rendering software. By making it easier to develop a functional piece of a larger system, MathML can stimulate a "critical mass" of software development, greatly to the benefit of potential users of mathematics on the Web. One can envision a similar situation for mathematical data. Authors are free to create MathML documents using the tools best suited to their needs. For example, a student might prefer to use a menu-driven equation editor that can write out MathML to an XHTML file. A researcher might use a computer algebra package that automatically encodes the mathematical content of an expression, so that it can be cut from a Web page and evaluated by a colleague. An academic journal publisher might use a program that converts T[E]X markup to HTML and MathML. Regardless of the method used to create a Web page containing MathML, once it exists, all the advantages of a powerful and general communication layer become available. A variety of MathML software could all be used with the same document to render it in speech or print, to send it to a computer algebra system, or to manage it as part of a large Web document collection. To render high-quality printed mathematics the MathML encoding will often be converted back to standard typesetting and composition languages, including T[E]X which is widely appreciated for the job it does in this regard. Finally, one may expect that eventually MathML will be integrated into other arenas where mathematical formulas occur, such as spreadsheets, statistical packages and engineering tools. The W3C Math Working Group has been working with vendors to ensure that a variety of MathML software will soon be available, including both rendering and authoring tools. A current list of MathML software is maintained on the public Math page at the World Wide Web Consortium. 1.3.2 Relation to Other Web Technology The original conception of an HTML Math was a simple, straightforward extension to HTML that would be natively implemented in browsers. However, very early on, the explosive growth of the Web made it clear that a general extension mechanism was required, and that mathematics was only one of many kinds of structured data which would have to be integrated into the Web using such a mechanism. Given that MathML must integrate into the Web as an extension, it is extremely important that MathML, and MathML software, can interact well with the existing Web environment. In particular, MathML has been designed with three kinds of interaction in mind. First, in order to create mathematical Web content, it is important that existing mathematical markup languages can be converted to MathML, and that existing authoring tools can be modified to generate MathML. Second, it must be possible to embed MathML markup seamlessly in HTML markup, as it evolves, in such a way that it will be accessible to future browsers, search engines, and all the kinds of Web applications which now manipulate HTML. Finally, it must be possible to render MathML embedded in HTML in today's Web browsers in some fashion, even if it is less than ideal. As HTML evolves into XHTML, all the preceding requirements become increasingly needed. The World Wide Web is a fully international and collaborative movement. Mathematics is a language used all over the world. The mathematical notation in science and engineering is embedded in a matrix of local natural languages. The W3C strives to be a constructive force in the spread of possibilities for communication throughout the world. Therefore MathML will encounter problems of internationalization. This version of MathML is not knowingly incompatible with the needs of languages which are written from left to right. However the default orientation of MathML 2 is left-to-right, and it is clear that the needs for the writing of mathematical formulas embedded in some natural languages may not yet be met. So-called bi-directional technology is still in development, and better support for formulas in that context must be a matter for future developers. 1.3.2.1 Existing Mathematical Markup Languages Perhaps the most important influence on mathematical markup languages of the last two decades is the T[E]X typesetting system developed by Donald Knuth [Knuth1986]. T[E]X is a de facto standard in the mathematical research community, and it is pervasive in the scientific community at large. T[E]X sets a standard for quality of visual rendering, and a great deal of effort has gone into ensuring MathML can provide the same visual rendering quality. Moreover, because of the many legacy documents in T[E]X, and because of the large authoring community versed in T[E]X, a priority in the design of MathML was the ability to convert T[E]X mathematics input into MathML format. The feasibility of such conversion has been demonstrated by prototype software. Extensive work on encoding mathematics has also been done in the SGML community, and SGML-based encoding schemes are widely used by commercial publishers. ISO 12083 is an important markup language which contains a DTD fragment primarily intended for describing the visual presentation of mathematical notation. Because ISO 12083 mathematical notation and its derivatives share many presentational aspects with T[E]X, and because SGML enforces structure and regularity more than T[E]X, much of the work in ensuring MathML is compatible with T[E]X also applies well to ISO 12083. MathML also pays particular attention to compatibility with other mathematical software, and in particular, with computer algebra systems. Many of the presentation elements of MathML are derived in part from the mechanism of typesetting boxes. The MathML content elements are heavily indebted to the OpenMath project and the work by Stilo Technologies on a mathematical DTD fragment. The OpenMath project has close ties to both the SGML and computer algebra communities, and has laid a foundation for an SGML- and XML-based means of communication between mathematical software packages, amongst other things. The feasibility of both generating and interpreting MathML in computer algebra systems has been demonstrated by prototype software. 1.3.2.2 HTML Extension Mechanisms As noted above, the success of HTML has led to enormous pressure to incorporate a wide variety of data types and software applications into the Web. Each new format or application potentially places new demands on HTML and on browser vendors. For some time, it has been clear that a general extension mechanism is necessary to accommodate new extensions to HTML. At the very beginning, the working group began its work thinking of a plain extension to HTML in the spirit of the first mathematics support suggested for HTML 3.2. But for a good number of reasons, once we got into the details, this proved to be not so good an idea. Since work first began on MathML, XML [XML], has emerged as the dominant such general extension mechanism. XML stands for Extensible Markup Language. It is designed as a simplified version of SGML, the meta-language used to define the grammar and syntax of HTML. One of the goals of XML is to be suitable for use on the Web, and in the context of this discussion it can be viewed as the general mechanism for extending HTML. As its name implies, extensibility is a key feature of XML; authors are free to declare and use new elements and attributes. At the same time, XML grammar and syntax rules carefully enforce regular document structure to facilitate automatic processing and maintenance of large document collections. Mathematically speaking XML is essentially a notation for decorated rooted planar trees, and thus of great generality as an encoding tool. Since the setting up of the first W3C Math Working Group, XML has garnered broad industry support, including that of major browser vendors. The migration of HTML to an XML form has been important to the W3C, and has resulted in the XHTML Recommendation which delivers a new modularized form of HTML. MathML can be viewed as another module which fits very well with the new XHTML. Indeed in A.2.3 MathML as a DTD Module there is a new DTD for mathematics which is the result of collaboration with the W3C HTML Working Group. Furthermore, other applications of XML for all kinds of document publishing and processing promise to become increasingly important. Consequently, both on theoretical and pragmatic grounds, it has made a great deal of sense to specify MathML as an XML application. 1.3.2.3 Browser Extension Mechanisms By now, as opposed to the situation when the MathML 1.0 Recommendation was adopted, the details of a general model for rendering and processing XML extensions to HTML are largely clear. Formatting Properties, developed by the Cascading Style Sheets and Formatting Properties Working Group for CSS and made available through the Document Object Model (DOM), will be applied to MathML elements to obtain stylistic control over the presentation of MathML. Further development of these Formatting Properties falls within the charters of both the CSS&FP and the XSL working groups. For an introduction to this topic see the discussion in 7 The MathML Interface. For detailed commentary on how to render MathML with current systems consult the W3C Math WG Home Page. Until style sheet mechanisms are capable of delivering native browser rendering of MathML, however, it is necessary to extend browser capabilities by using embedded elements to render MathML. It is already possible to instruct a browser to use a particular embedded renderer to process embedded XML markup such as MathML, and to coordinate the resulting output with the surrounding Web page, however the results are not yet entirely as one wishes. See 7 The MathML Interface. For specialized processing, such as connecting to a computer algebra system, the capability of calling out to other programs is likely to remain highly desirable. However, for such an interaction to be really satisfactory, it is necessary to define a document object model rich enough to facilitate complicated interactions between browsers and embedded elements. For this reason, the W3C Math Working Group has coordinated its efforts closely with the Document Object Model (DOM) Working Group. The results are described in 8 Document Object Model for MathML. For processing by embedded elements, and for inter-communication between scientific software generally, a style sheet-based layout model is in some ways less than ideal. It can impose an additional implementation burden in a setting where it may offer few advantages, and it imposes implementation requirements for coordination between browsers and embedded renderers that will likely be unavailable in the immediate future. For these reasons, the MathML specification defines an attribute-based layout model, which has proven very effective for high-quality rendering of complicated mathematical expressions in several independent implementations. MathML presentation attributes utilize W3C Formatting Properties where possible. Also, MathML elements accept class, style and id attributes to facilitate their use with CSS style sheets. However, at present, there are few settings where CSS machinery is currently available to MathML renderers. The use of CSS style sheet mechanisms has been mentioned above. The mechanisms of XSL have also recently become available for the transformation of XML documents to effect their rendering. Indeed the alternative forms of this present recommendation, including the definitive public HTML version, have been prepared from an underlying XML source using XSL transformation language tools. As further developments in this direction become available to MathML, it is anticipated their use will become the dominant method of stylistic control of MathML presentation meant for use in rendering environments which support those mechanisms.
{"url":"http://www.w3.org/TR/2002/WD-MathML2-20021219/chapter1.html","timestamp":"2014-04-16T16:31:13Z","content_type":null,"content_length":"54731","record_id":"<urn:uuid:a01dcfac-c803-4cce-ae51-041a590cd1f5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
Monotonic and residuated logic programs Results 11 - 20 of 35 - Nonmonotonic Reasoning, Answer Set Programming and Constraints, volume 05171 of Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl , 2005 "... In this work, we define a new framework in order to improve the knowledge representation power of Answer Set Programming paradigm. Our proposal is to use notions from possibility theory to extend the stable model semantics by taking into account a certainty level, expressed in terms of necessity mea ..." Cited by 8 (1 self) Add to MetaCart In this work, we define a new framework in order to improve the knowledge representation power of Answer Set Programming paradigm. Our proposal is to use notions from possibility theory to extend the stable model semantics by taking into account a certainty level, expressed in terms of necessity measure, on each rule of a normal logic program. First of all, we introduce possibilistic definite logic programs and show how to compute the conclusions of such programs both in syntactic and semantic ways. The syntactic handling is done by help of a fix-point operator, the semantic part relies on a possibility distribution on all sets of atoms and we show that the two approaches are equivalent. In a second part, we define what is a possibilistic stable model for a normal logic program, with default negation. Again, we define a possibility distribution allowing to determine the stable models. 1 - In: Proceedings of the 11th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU-06 , 2006 "... We present Annotated Answer Set Programming, that extends the expressive power of disjunctive logic programming with annotation terms, taken from the generalized annotated logic programming framework. ..." Cited by 7 (0 self) Add to MetaCart We present Annotated Answer Set Programming, that extends the expressive power of disjunctive logic programming with annotation terms, taken from the generalized annotated logic programming - In Proc. FUZZ-IEEE’01. The 10th IEEE International Conference on Fuzzy Systems, IEEE , 2001 "... Muki-adjoint logic programs generalise monotonic and residuated logic pro- grams [21 in that simultaneous use of several implications in the rules and rather general connectives in the bodies are allowed. As our approach has continuous fixpoint semantics, in this work, a procedural semantics is g ..." Cited by 6 (2 self) Add to MetaCart Muki-adjoint logic programs generalise monotonic and residuated logic pro- grams [21 in that simultaneous use of several implications in the rules and rather general connectives in the bodies are allowed. As our approach has continuous fixpoint semantics, in this work, a procedural semantics is given for the paradigm of multi-adjoint logic programming and a completeness result is proved. - 6th Intl Workshop on Termination , 2003 "... Residuated Logic Programs allow to capture a spate of different semantics dealing with uncertainty and vagueness. A first result states that for any definite residuated logic program the sequence of iterations of the immediate consequences operator reaches the least fixpoint after only finitely m ..." Cited by 5 (4 self) Add to MetaCart Residuated Logic Programs allow to capture a spate of different semantics dealing with uncertainty and vagueness. A first result states that for any definite residuated logic program the sequence of iterations of the immediate consequences operator reaches the least fixpoint after only finitely many steps. Then, a tabulation query procedure is introduced, and it is shown that the procedure terminates every definite residuated logic program. - In Logic Programming, ICLP’01 , 2001 "... Multi-adjoint logic programs has been recently introduced [9, 10] as a generalization of monotonic logic programs [2, 3], in that simultaneous use of several implications in the rules and rather general connectives in the bodies are allowed. ..." Cited by 4 (4 self) Add to MetaCart Multi-adjoint logic programs has been recently introduced [9, 10] as a generalization of monotonic logic programs [2, 3], in that simultaneous use of several implications in the rules and rather general connectives in the bodies are allowed. - IN PROC. CAEPIA’03, LECT. NOTES IN ARTIFICIAL INTELLIGENCE 3040:608–617 , 2003 "... Multi-adjoint logic programs were recently proposed as a generalization of monotonic and residuated logic programs, in that simultaneous use of several implications in the rules and rather general connectives in the bodies are allowed. In this work, the need of biresiduated pairs is justified thr ..." Cited by 4 (3 self) Add to MetaCart Multi-adjoint logic programs were recently proposed as a generalization of monotonic and residuated logic programs, in that simultaneous use of several implications in the rules and rather general connectives in the bodies are allowed. In this work, the need of biresiduated pairs is justified through the study of a very intuitive family of operators, which turn out to be not necessarily commutative and associative and, thus, might have two different residuated implications; finally, we introduce the framework of biresiduated multi-adjoint logic programming and sketch some considerations on its fixpoint semantics. - Electronic Notes in Theoretical Computer Science "... In this paper, we develop the notion of fuzzy unification and incorporate it into a novel fuzzy argumentation framework for extended logic programming. We make the following contributions: The argumentation framework is defined by a declarative bottom-up fixpoint semantics and an equivalent goal ..." Cited by 3 (0 self) Add to MetaCart In this paper, we develop the notion of fuzzy unification and incorporate it into a novel fuzzy argumentation framework for extended logic programming. We make the following contributions: The argumentation framework is defined by a declarative bottom-up fixpoint semantics and an equivalent goal-directed top-down proof-procedure for extended logic programming. Our framework allows one to represent positive and explicitly negative knowledge, as well as uncertainty. Both concepts are used in agent communication languages such as KQML and FIPA ACL. One source of uncertainty in open systems stems from mismatches in parameter and predicate names and missing parameters. To this end, we conservatively extend classical unification and develop fuzzy unification based on normalised edit distance over trees. - In JELIA. 300–312 , 2010 "... Abstract. Tabled Logic Programming (TLP) is becoming widely available in Prolog systems, but most implementations of TLP implement only answer variance in which an answer A is added to the table for a subgoal S only if A is not a variant of any other answer already in the table for S. While TLP with ..." Cited by 3 (2 self) Add to MetaCart Abstract. Tabled Logic Programming (TLP) is becoming widely available in Prolog systems, but most implementations of TLP implement only answer variance in which an answer A is added to the table for a subgoal S only if A is not a variant of any other answer already in the table for S. While TLP with answer variance is powerful enough to implement the well-founded semantics with good termination and complexity properties, TLP becomes much more powerful if a mechanism called answer subsumption is used. XSB implements two forms of answer subsumption. The first, partial order answer subsumption, adds A to a table only if A is greater than all other answers already in the table according to a user-defined partial order. The second, lattice answer subsumption, may join A to some other answer in the table according to a user-defined upper semi-lattice. Answer subsumption can be used to implement paraconsistent and quantitative logics, abstract analysis domains, and preference logics. This paper discusses the semantics and implementation of answer subsumption in XSB, and discusses performance and scalability of answer subsumption on a variety of problems. 1 - WILF 2005. LNCS (LNAI , 2006 "... Abstract. A prospective study of the use of ordered multi-lattices as underlying sets of truth-values for a generalised framework of logic programming is presented. Specifically, we investigate the possibility of using multi-lattice-valued interpretations of logic programs and the theoretical proble ..." Cited by 2 (2 self) Add to MetaCart Abstract. A prospective study of the use of ordered multi-lattices as underlying sets of truth-values for a generalised framework of logic programming is presented. Specifically, we investigate the possibility of using multi-lattice-valued interpretations of logic programs and the theoretical problems that this generates with regard to its fixed point semantics. 1 - Journal of Applied Logic , 2004 "... A generalization of the homogenization process needed for the neural implementation of multi-adjoint logic programming (a unifying theory to deal with uncertainty, imprecise data or incomplete information) is presented here. The idea is to allow to represent a more general family of adjoint pairs, b ..." Cited by 2 (1 self) Add to MetaCart A generalization of the homogenization process needed for the neural implementation of multi-adjoint logic programming (a unifying theory to deal with uncertainty, imprecise data or incomplete information) is presented here. The idea is to allow to represent a more general family of adjoint pairs, but maintaining the advantage of the existing implementation recently introduced in [6]. The soundness of the transformation is proved and its complexity is analysed. In addition, the corresponding generalization of the neural-like implementation of the fixed point semantics of multi-adjoint is presented. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=118571&sort=cite&start=10","timestamp":"2014-04-21T13:54:57Z","content_type":null,"content_length":"37358","record_id":"<urn:uuid:b3560fc6-e02b-4bde-b8fa-e7ef39e0d3a9>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - DFT and Graphene Hello all, I have read a few papers lately that have used DFT based techniques to investigate metallic adatom adsorption on top of graphene (see for instance PRB 77, 235430 2008). I was under the impression that electrons in graphene are described by the Dirac equation and not the Schrodinger equation. How then can the electronic properties be accurately described using DFT techniques based in the Shcrodinger equation. Thanks in advance
{"url":"http://www.physicsforums.com/showpost.php?p=2120675&postcount=1","timestamp":"2014-04-18T15:52:02Z","content_type":null,"content_length":"8850","record_id":"<urn:uuid:44d96456-e025-4b79-a202-8c286b8247d4>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: November 2010 [00166] [Date Index] [Thread Index] [Author Index] Re: Programmatically creating functions of many variables • To: mathgroup at smc.vnet.net • Subject: [mg113683] Re: Programmatically creating functions of many variables • From: Leonid Shifrin <lshifr at gmail.com> • Date: Sun, 7 Nov 2010 05:13:07 -0500 (EST) Hi Yaroslav, Unfortunately I don't quite follow the specific problem you stated - if you with an adjacency matrix, I don't quite see how you can get improper labeling - if you have ones on the diagonal that would just mean self-loops. Regarding a general question of run-time generation of multivariate this is rarely needed since in most such cases all arguments have similar and can then be combined into a list, so that you have a function of a single argument being a list. If you still want to do this, here are a few ways (I have no idea whether or not they are typical - these are what I use sometimes). I will use a model problem of a function which takes any nonzero number of arguments and checks that none of them is a multiple of some fixed number. One way to generate such a function is to create a pattern-based definition for some and then return that symbol: noDivisorCheck[n_Integer] := checkF[args__Integer] := And @@ Thread[Mod[{args}, n] != 0]; In[3]:= fn = noDivisorCheck[5] Out[3]= checkF$110 In[4]:= fn[1, 2, 3, 4] Out[4]= True In[5]:= fn[1, 2, 3, 4, 5] Out[5]= False You can also do this using a pure function, but then we have to use the form Function[Null, body], which can accept any number of arguments: noDivisorCheckPF[n_Integer] := Function[Null, And @@ Thread[Mod[{##}, n] != 0]]; In[8]:= fnPF = noDivisorCheckPF[5] Out[8]= Function[Null, And @@ Thread[Mod[{##1}, 5] != 0]] In[9]:= fnPF [1, 2, 3, 4] Out[9]= True In[10]:= fnPF [1, 2, 3, 4, 5] Out[10]= False Note that these two methods have a couple of subtle differences in the way they work. One of them is that if you copiously produce functions with the first method, these symbols are not garbage-collected unless you explicitly Remove so you may eventually produce a lot of garbage in your working context. This does not exist for pure functions. If you know the number of arguments at the moment when you create a and it will be fixed, you can do something similar to what you suggested (can not use subscripts though): noDivisorCheckPFFixedArgNum[n_Integer, argnum_Integer?Positive] := With[{args = ToExpression["x" <> ToString@#] & /@ Range[argnum]}, Function @@ Hold[args, And @@ Thread[Mod[args, n] != 0]]]] Block[{x},...] was used to make sure that we don't capture any global value that <x> may have, at the moment when we create our function (This could have been also by using Hold wrappers on the individual x variables, but that would be more code). Hold was used to prevent the premature evaluation of the body of the In[13]:= fn4 = noDivisorCheckPFFixedArgNum[5, 4] Out[13]= Function[{x1, x2, x3, x4}, And @@ Thread[Mod[{x1, x2, x3, x4}, 5] != 0]] In[14]:= fn4[1, 2, 3, 4] Out[14]= True In this method, however, we have to create a new function for a different number of arguments, so it is less flexible in that sense: In[15]:= fn5 = noDivisorCheckPFFixedArgNum[5, 5] Out[15]= Function[{x1, x2, x3, x4, x5}, And @@ Thread[Mod[{x1, x2, x3, x4, x5}, 5] != 0]] In[16]:= fn5[1, 2, 3, 4, 5] Out[16]= False Hope this helps. On Sat, Nov 6, 2010 at 1:01 PM, Yaroslav Bulatov <yaroslavvb at gmail.com>wrote: > Suppose I have a graph, and I'd like to create a function which checks > whether given assignment of nodes is a proper labeling -- ie, no two > adjacent nodes get the same number. One could do something like the > following, but that gives Part warnings. Is there a preferred > approach? > graph = GraphData[{"Grid", {2, 2}}, "AdjacencyMatrix"] // Normal; > f = Function @@ {{x}, And @@ (x[[First[#]]] != x[[Last[#]]] & /@ > Position[graph, 1])} > This is actually a problem that I periodically come across. Part > approach causes warnings meanwhile something like Function @@ > {Subscript[x,#]&/@Range[n], ...} doesn't work. What are typical ways > of generating multivariate functions automatically? > ---- > Yaroslav > http://stackoverflow.com/users/419116/yaroslav-bulatov
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Nov/msg00166.html","timestamp":"2014-04-20T11:09:55Z","content_type":null,"content_length":"29398","record_id":"<urn:uuid:09bba17d-3bf8-48b4-bee0-82350eb9349b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Author/Editor=(Mizuta_Yoshihiro) Advanced Studies in Pure Mathematics 2006; 413 pp; hardcover Volume: 44 ISBN-10: 4-931469-33-7 ISBN-13: 978-4-931469-33-4 List Price: US$104 Member Price: US$83.20 Order Code: ASPM/44 This volume collects, in written form, eight plenary lectures and twenty-five selected contributions from invited and contributed lectures delivered at the International Workshop on Potential Theory 2004. The workshop was held at Shimane University, Matsue, Japan, from August 23 to 28, 2004. The topic of the workshop was potential theory and its related fields. There were stimulus talks from classical potential theory to pluripotential theory and probabilistic potential theory. Volumes in this series are freely available electronically 5 years post-publication. Published for the Mathematical Society of Japan by Kinokuniya, Tokyo, and distributed worldwide, except in Japan, by the AMS. Graduate students and research mathematicians interested in analysis. • Z. Błocki -- The Bergman kernel and pluripotential theory • K. Burdzy -- Neumann eigenfunctions and Brownian couplings • T. Carroll -- Brownian motion and harmonic measure in conic sections • S. J. Gardiner -- Radial limits of harmonic functions • Y. Guivarc'h -- Renewal theorems, products oif random matrices, and toral endomorphisms • J. Ortega-Cerdà -- Densities and harmonic measure • N. Shanmugalingam -- Sobolev type spaces on metric measure spaces • J.-M. Wu -- Quasisymmetric extension, smoothing and applications • J. Björn -- Wiener criterion for Cheeger \(p\)-harmonic functions on metric spaces • F. Di Biase -- On the sharpness of certain approach regions • T. Futamura and Y. Mizuta -- Continuity of weakly monotone Sobolev functions of variable exponent • K. Hirata -- Martin kernels of general domains • K. Ishizaki and N. Yanagihara -- Singular directions of meromorphic solutions of some non-autonomous Schröder equations • K. Janssen -- Integral representation for space-time excessive functions • T. Kurokawa -- A decomposition of the Schwartz class by a derivative space and its complementary space • K. Kuwae and M. Takahashi -- Kato class functions of Markov processes under ultracontractivity • E. G. Kwon -- A subharmonic Hardy class and Bloch pullback operator norms • H. Masaoka -- Quasiconformal mappings and minimal Martin boundary of \(p\)-sheeted unlimited covering surfaces of the once punctered Riemann sphere \(\hat{\mathbb{C}}\setminus\{0\}\) of Heins • H. Masaoka and S. Segawa -- Hyperbolic Riemann surfaces without unbounded positive harmonic functions • I. Miyamoto and H. Yosida -- On a covering property of rarefied sets at infinity in a cone • Y. Miyazaki -- The \(L^p\) resolvents for elliptic systems of divergence form • Y. Mizuta and T. Shimomura -- Maximal functions, Riesz potentials and Sobolev's inequality in generalized Lebesgue spaces • M. Murata -- Representations of nonnegative solutions for parabolic equations • M. Nakai -- Types of pasting arcs in two sheeted spheres • M. Nishio, K. Shimomura, and N. Suzuki -- \(L^p\)-boundedness of Bergman projections for \(\alpha\)-parabolic operators • Y. Okuyama -- Vanishing theorem on the pointwise defect of a rational iteration sequence for moving targets • T. Ono -- Hölder continuity of solutions to quasilinear elliptic equations with measure data • Y. Pinchover -- On Davies' conjecture and strong ratio limit properties for the heat kernel • Premaltha and A. K. Kalyani -- Some potential theoretic results of an infinite network • M. Stoll -- The Littlewood-Paley inequalities for Hardy-Orlicz spaces of harmonic functions on domains in \(\mathbb{R}^n\) • H. Watanabe -- Estimates of maximal functions by Hausdorff contents in a metric space • M. Yamada -- Harmonic conjugates of parabolic Bergman functions • M. Yanagishita -- On the behavior at infinity for non-negative superharmonic functions in a cone
{"url":"http://www.ams.org/cgi-bin/bookstore/booksearch?fn=100&pg1=CN&s1=Mizuta_Yoshihiro&arg9=Yoshihiro_Mizuta","timestamp":"2014-04-19T20:04:51Z","content_type":null,"content_length":"17801","record_id":"<urn:uuid:93e4bff2-5fc8-480e-ac5b-1216913a2071>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ ° You are not logged in. #101 2009-03-22 16:02:20 Re: Exercise What is the slope of the normal to the curve at the point whose x-coordinate is 2? Character is who you are when no one is looking. #102 2009-03-22 16:12:07 Re: Exercise What are the order and degree of the differential equation Character is who you are when no one is looking. #103 2009-03-23 02:10:32 Re: Exercise What is the inverse of the matrix Character is who you are when no one is looking. #104 2009-03-23 02:21:58 Re: Exercise What is the value of Character is who you are when no one is looking. #105 2009-03-23 02:34:43 Re: Exercise what is the value of Character is who you are when no one is looking. #106 2009-03-23 02:57:19 Re: Exercise are non-coplanar and then what is equal to? Character is who you are when no one is looking. #107 2009-03-23 03:11:37 Re: Exercise are a right handed triad of mutually perpendicular vectors of magnitude a, b, and c, then what is the value of Character is who you are when no one is looking. #108 2009-03-23 04:12:20 Re: Exercise If the edges meet at a vertex, find the volume of the parallelopiped. Character is who you are when no one is looking. #109 2009-03-23 04:39:45 Re: Exercise What is the Cartesian equation of the line through the points (2, -3, 5) and (3, 6, -2)? Character is who you are when no one is looking. #110 2009-03-23 05:06:25 Re: Exercise What is the center and the radius of the sphere given by Character is who you are when no one is looking. #111 2009-03-23 05:39:27 Re: Exercise Find the angle between the vectors Character is who you are when no one is looking. #112 2009-03-23 06:03:08 Re: Exercise Character is who you are when no one is looking. #113 2009-03-24 02:11:41 Re: Exercise What is the rate at which the volume of a spherical bubble of radius r increases as fast as the radius? Character is who you are when no one is looking. #114 2009-03-24 02:17:51 Re: Exercise The distance of a particle at time t is given by What is the value of t when the particle comes to rest? Character is who you are when no one is looking. #115 2009-03-24 02:17:56 Re: Exercise The distance of a particle at time t is given by What is the value of t when the particle comes to rest? Character is who you are when no one is looking. #116 2009-03-24 02:36:46 Re: Exercise If the distance of a particle at time t is given by then what is its acceleration? Character is who you are when no one is looking. #117 2009-03-24 02:48:26 Re: Exercise The distance of a particle at time t is given by What is the velocity of the particle after 2 seconds? Character is who you are when no one is looking. #118 2009-03-24 03:31:25 Re: Exercise If the distance 's' travelled by a particle in time 't' is given by at what value of t would the particle come to rest? Character is who you are when no one is looking. #119 2009-03-24 03:56:45 Re: Exercise A stone thrown upwards has its equation of motion where s in in meters and t in seconds. What is the maximum height reached by it? Character is who you are when no one is looking. #120 2009-03-24 04:34:30 Re: Exercise The volume of a sphere is increasing at the rate of 2 cm³ per second. Find the rate at which the surface is increasing when the radius is 3 cm. Character is who you are when no one is looking. #121 2009-03-24 04:58:58 Re: Exercise What is the slope of the tangent to the curve Character is who you are when no one is looking. #122 2009-03-24 05:33:58 Re: Exercise For what value of would the normal to the curve be parallel to the y-axis? Character is who you are when no one is looking. #123 2009-03-24 06:07:15 Re: Exercise What is the point of intersection of the curves Character is who you are when no one is looking. #124 2009-03-24 06:15:44 Re: Exercise What is the angle between the two curves Character is who you are when no one is looking. #125 2009-03-24 06:37:37 Re: Exercise State whether the following function is increasing, decreasing, constant or undefined. Character is who you are when no one is looking. Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ ° What is the slope of the normal to the curve What are the order and degree of the differential equation What is the Cartesian equation of the line through the points (2, -3, 5) and (3, 6, -2)? What is the center and the radius of the sphere given by What is the rate at which the volume of a spherical bubble of radius r increases as fast as the radius? The distance of a particle at time t is given by If the distance of a particle at time t is given by If the distance 's' travelled by a particle in time 't' is given by The volume of a sphere is increasing at the rate of 2 cm³ per second. Find the rate at which the surface is increasing when the radius is 3 cm. What is the slope of the tangent to the curve State whether the following function is increasing, decreasing, constant or undefined.
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=11736&p=5","timestamp":"2014-04-20T03:27:18Z","content_type":null,"content_length":"40704","record_id":"<urn:uuid:3a273056-95ed-420e-a6d8-c0c2aa74aea1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic, How to create a new level + tips and tricks yah, i dont think i'll ever need to get a path specifically to a speed of 76.8221. I dont bother with that, I just make 2 paths xD, Gary, if you want to teach pro people who want to know complex things, go make your own topic. @garygoh, would it KILL you to make something easy for noobs to understand for once? ':_ Does it mean that it's really fast? o.o I'm only in 8th grade.. Edit: Never mind, i got it. How to Make Path Speeds to be in the Form a√b (note that b ≠ n^2, n is any integer) Push: Always check for ways to cheat! The highscorers are very sneaky in this game so make sure that you dont get outsmarted! bump anyway. GIS got bored of having 99% of the horrible levels to reject... not that this made a difference. :p LOL I think we should keep up this topic, it's VERY useful. Bump . uh i'm my years math freak I work at 2-3 times the pace of the best in my year and in Y3 I was in Y6 math and the pic onfused me until I read instructions :P I wasn't, It made perfect sense and was very logical, Although I am my years Maths freak, I was confused by the pic It's nice, but a little too advaced for new users... Actually, this is used when you have measuring problems. um...you can always measure how many squares along and up it is and swap them round for perpendiular lines... AB is 8 right 5 down OR 8 left 5 up so CG is 8 up 5 right OR 8 down 5 left How to draw a perpendicular line in Rolling Turtle, Jump Gear II, Hog Pop and Moonlights games! Make sure that AB is a straight line. Draw lines AC and CB such that ACB = 90˚. Draw a vertical line upwards to extend CA. Draw quarter-circle BD such that point D touches the vertical line that you've previously drawn. Draw line DE, which is perpendicular to DC. Draw quarter-circle AF, which point F touches the line BC. Draw a line upwards, touches point F, which is perpendicular to BC. The two lines that you've previously drawn, intercepts at point G. Draw a line CG. This shows that AB is perpendicular to CG. You can make a square by duplicating each of the lines, AB and CG respectively! Massive update. Moonlights tips: Some useful features of a Moonlights level, would be different moon sizes, if the sizes have a variety, then it looks better as a level. Thought through strategies, Imagine how person is going to complete this level, what are there thoughts? Originality in this form is the key to an amazing level. Players finding shortcuts in Moonlights levels are getting smarter and smarter, when checking over your level, Search for all the unintended ways to cheat, and block them if possible, General HP/JG2/HP/ML tips: Double path speed: when making a path the you want to go faster than the 'Global speed 100', but you still want the object moving to go the same distance you can make 2 paths of global speed 100, and half their size, like so: O--------------------------------------O =Speed 100 O------------------OO------------------O =speed 200! If you it to go faster than that then you can try diving the paths into thirds of quarters & so on, Bump, I gues...? Am I Self-ish ? what ? I didn't, Ringmania is brand new, But i'll get working :) GIS, you forgot RingMania 2.5 You're still game-ist. xP bump ? GIS, that's GAME-IST! xD This is how to create a new level + tips and tricks, right? Why not add it? No, because it's not a game. Are you gonna put Info lines toy here? Hello, I found out it's quite hard for beginners to make a good level now. Level expections are higher than before. That's why I made this tuto, but if you know somethings more, say, it in a comment, and I'll add it in this tuto. Please note that you don't need to read all, it has individual leveltricks for each game, it's a lot of text, but you don't need to read it all. Quoting birjolaxew from Leveldesigning tips & tricks Blockoban and blockoban 2: Rolling turtle: Quoting ilikerollingturtle from Leveldesigning tips & tricks Hog Pop Jump gear 2 Quoting ilikerollingturtle from Leveldesigning tips & tricks Captain dan VS zombie plan Quoting birjolaxew from Leveldesigning tips & tricks Path for mouse Quoting ilikerollingturtle from Leveldesigning tips & tricks Finding a theme Quoting birjolaxew from Leveldesigning tips & tricks How to find ideas ? Quoting ilikerollingturtle from Leveldesigning tips & tricks General levelcreating tips: Click here if you want help with removing the messiness from your level So that's it, more can possibly come later ;)
{"url":"http://www.bonuslevel.org/forum/topic_19519_2.html","timestamp":"2014-04-20T00:39:02Z","content_type":null,"content_length":"42390","record_id":"<urn:uuid:29a1c20b-4370-43ce-91da-42059a1d7d42>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
Alexander Rosenberg Sasha Rosenberg had defended in 1973 dissertation at Moscow State University studying Tannakian reconstruction theorems, mainly using methods of functional analysis. His main goals were in representation theory. Until leaving Soviet Union around 1987, Rosenberg worked in applied mathematics; on the side however he was developing some methods in representation theory which included functional analysis and categorical and ring/module theoretic methods, and the noncommutative localization in particular. A main construction, presented at a conference at Baikal (1981), was a new spectrum of a ring, so-called left spectrum, later generalized to a spectrum of an abelian category, used to prove (in the quasicompact case) the Gabriel-Rosenberg reconstruction theorem for commutative schemes; more spectral constructions followed. This work outgrown into a wide ongoing work on foundations of noncommutative algebraic geometry, including a natural definition of noncommutative scheme. More recently he proposed a new framework for nonabelian derived functors, including a new version of algebraic K-theory. Other related entries in $n$Lab include Q-category, neighborhood of a topologizing subcategory, sheaf on a noncommutative space, spectral cookbook. • 3 volume work on noncommutative geometry: Geometry of Noncommutative ‘Spaces’ and Schemes, pdf; Homological algebra of noncommutative ‘spaces’ I., pdf (note: longer version than another MPI preprint with the same title mentioned before); Noncommutative ‘Spaces’ and ‘Stacks’, pdf • video of msri lecture Noncommutative schemes and spaces (Feb 2000) can be found at msri • A. L. Rosenberg, Topics in noncommutative algebraic geometry, homological algebra and K-theory, preprint MPIM Bonn 2008-57 pdf • А. Л. Розенберг, Теоремы двойственности для групп и алгебр Ли, УМН, 26:6(162) (1971), 253–254, pdf, Инвариантные алгебры на компактных группах, Матем. сб., 81(123):2 (1970), 176–184, pdf • A. L. Rosenberg, Almost quotient categories, sheaves and localizations, 181 p. Seminar on supermanifolds 25, University of Stockholm, D. Leites editor, 1988 (in Russian; partial remake in English • A. L. Rosenberg, Non-commutative affine semischemes and schemes, Seminar on supermanifolds 26, Dept. Math., U. Stockholm (1988) • A. L. Rosenberg, Noncommutative algebraic geometry and representations of quantized algebras, MIA 330, Kluwer Academic Publishers Group, Dordrecht, 1995. xii+315 pp. ISBN: 0-7923-3575-9 • A. L. Rosenberg, Reconstruction of groups, Selecta Math. N.S. 9:1 (2003) doi ($n$lab remark: this paper is on a generalization of Tannaka–Krein and not of the Gabriel–Rosenberg kind of • A. L. Rosenberg, The left spectrum, the Levitzki radical, and noncommutative schemes, Proc. Nat. Acad. Sci. U.S.A. 87 (1990), no. 21, 8583–8586. • A. L. Rosenberg, Noncommutative local algebra, Geom. Funct. Anal. 4 (1994), no. 5, 545–585. • A. L. Rosenberg, The existence of fiber functors, The Gelfand Mathematical Seminars, 1996–1999, 145–154, Gelfand Math. Sem., Birkhäuser Boston, Boston, MA, 2000. • A. L. Rosenberg, The spectrum of abelian categories and reconstructions of schemes, in Rings, Hopf Algebras, and Brauer groups, Lectures Notes in Pure and Appl. Math. 197, Marcel Dekker, New York, 257–274, 1998; MR99d:18011; and Max Planck Bonn preprint Reconstruction of Schemes, MPIM1996-108 (1996). • A. L. Rosenberg, Spectra of noncommutative spaces, MPIM2003-110 ps dvi (2003) • A. L. Rosenberg, Noncommutative schemes, Compos. Math. 112 (1998) 93–125 (doi) • V. A. Lunts, A. L. Rosenberg, Differential operators on noncommutative rings, Selecta Math. (N.S.) 3 (1997), no. 3, 335–359 (doi); sequel: Localization for quantum groups, Selecta Math. (N.S.) 5 (1999), no. 1, pp. 123–159 (doi). • V. A. Lunts, A. L. Rosenberg, Differential calculus in noncommutative algebraic geometry I. D-calculus on noncommutative rings, MPI 1996-53 pdf, II. D-Calculus in the braided case. The localization of quantized enveloping algebras, MPI 1996-76 pdf • V. A. Lunts, A. L. Rosenberg, Kashiwara theorem for hyperbolic algebras, MPIM1999-82, dvi, ps • M. Kontsevich, A. Rosenberg, Noncommutative spaces, preprint MPI-2004-35 (dvi,ps), Noncommutative spaces and flat descent, MPI-2004-36 dvi,ps, Noncommutative stacks, MPI-2004-37 dvi,ps • M. Kontsevich, A. Rosenberg, Noncommutative smooth spaces, The Gelfand Mathematical Seminars, 1996–1999, 85–108, Gelfand Math. Sem., Birkhäuser Boston, Boston, MA, 2000; (arXiv:math/9812158) • A. Rosenberg, Homological algebra of noncommutative ‘spaces’ I, 199 pages, preprint Max Planck, Bonn: MPIM2008-91. • A. Rosenberg, Underlying spaces of noncommutative schemes, MPIM2003-111, dvi, ps • A. Rosenberg, Noncommutative spaces and schemes, MPIM1999-84, dvi, ps
{"url":"http://www.ncatlab.org/nlab/show/Alexander+Rosenberg","timestamp":"2014-04-20T15:52:33Z","content_type":null,"content_length":"24092","record_id":"<urn:uuid:bb18b436-bcf6-4ecf-ba08-12c1f58abb4f>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Read The Instructions Carefully. Show All Your ... | Chegg.com I need help solving this problem from linear algebra Image text transcribed for accessibility: Read the instructions carefully. Show all your work. The following table shows average major league baseball salaries for the years 1970-2005. Find the least squares approximating quadratic for this data. Find the least squares approximating exponential for this data. Which equation gives the better approximation? Why? What do the estimate the average major league baseball salary will be in 2015 and 2020? ax2 + bx +c = y solve for a, b, c
{"url":"http://www.chegg.com/homework-help/questions-and-answers/read-instructions-carefully-show-work-following-table-shows-average-major-league-baseball--q3961457","timestamp":"2014-04-24T16:29:41Z","content_type":null,"content_length":"21106","record_id":"<urn:uuid:211d6d69-93dd-4603-b55e-756d47a27667>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
M8S algorithm is a new spatially stabilised scheme of the M8 algorithm application. It accounts for natural distribution of seismic activity, eliminates subjectivity in positioning the circles of investigation, CI's, and provides additional stability of predictions. In Italy the experimental testing aims at prediction of earthquakes in the three consequent magnitude ranges - M6.5+, M6.0+, and M5.5+ - in the CI’s of 192-km, 138-km, and 106-km radius correspondingly. To guarantee unique reproducible outcome, we update the each half-year in IEPT&MG RAS, Moscow and DST University of Trieste independently.
{"url":"http://www.mitp.ru/en/m8s/M8s_italy.html","timestamp":"2014-04-20T18:22:53Z","content_type":null,"content_length":"12206","record_id":"<urn:uuid:2ed5fe6f-4cb9-40bf-b30d-abfc2c20a9c2>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Wallington Algebra 2 Tutor ...Various parts of discrete math were used in all the other math courses I took as well. I used elements of discrete math and graph theory to invent a new online authentication method. The company I worked for (RSA security) has filed for a patent and listed me as the primary inventor. 22 Subjects: including algebra 2, calculus, geometry, trigonometry ...I have degrees in Biology and Mathematics and am comfortable teaching any and all of the subjects in both science and math. I have personally tutored everything from Algebra to Advanced Calculus and English to AP Biology and everything in between. I also have experience teaching to the SAT and ... 22 Subjects: including algebra 2, reading, English, chemistry Lisa, an accomplished writer, and instructor, completed her Bachelor of Arts degree in English and Anthropology, and graduated summa cum laude from SUNY at Buffalo. She then earned a Master’s Degree in English with honors from Columbia University and completed most of her doctoral credits on full academic scholarship. She also won a NY Times Foundation scholarship to NYU Writer’s 27 Subjects: including algebra 2, reading, writing, English ...As far as academic tutoring goes, I first started tutoring my junior and senior year of high school. One of my electives was tutoring for the Advancement Via Individual Determination (AVID) program that my high school offered. I am currently tutoring students who go to some of the top prep schools in NYC, including Trinity and Collegiate. 16 Subjects: including algebra 2, chemistry, geometry, biology ...My illustrations have been published in journals including Science, Phys. Rev. Lett., Proc. 13 Subjects: including algebra 2, reading, writing, physics
{"url":"http://www.purplemath.com/wallington_nj_algebra_2_tutors.php","timestamp":"2014-04-18T13:32:29Z","content_type":null,"content_length":"23988","record_id":"<urn:uuid:c17be3b4-cdac-49a0-b616-b53bcef13a18>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
(You may click on each of the pictures on this page to get a larger version) Fractal With Equation z = c -.3*z^4+ 1.3*z^2 Fractal With Equation z = c + 1/((z-1)(z-i)) In these factals, we use iterative equations z = c + f(z) for various functions f. The iterations may result in converging or diverging values of z. Scanning the screen as if it were the complex plane, we use the iterative equation a fixed number of times to compute the values of z. We then color the "points" (pixels) according to various convergence criteria. For example, we color a point one color if the values of z converge, another color if they oscillate between two values, another color if they oscillate among three values, etc..., and still another color if they become unbounded. This method produces "interior" (non-black) colors in Mandelbrot sets and other fractals. Fractal With Equation z = c + -1.5z^2 + 2.5z^4 Here are three fractals of the form z =c + kz^4 + (1-k)/z , with k = 1, .95 and 1.05. The k=1 case results in a "three-lobed" Mandelbrot type fractal but the addition of a pole at the origin,(1-k)/ z,interrupts the three-fold symmetry. z= c + z^4 z = c + z^4 + .95/z z = z^4 + 1.05/z mathfile home page
{"url":"http://www.sonoma.edu/math/faculty/falbo/vfract2.html","timestamp":"2014-04-18T13:07:51Z","content_type":null,"content_length":"2558","record_id":"<urn:uuid:ec16e04e-15d5-4a77-93a5-a824bde345b9>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
From Latin: rectus "right" + angle Try this Drag the orange dots on each to reshape the rectangle. The rectangle, like the square, is one of the most commonly known quadrilaterals. It is defined as having all four interior angles 90° (right angles). Properties of a rectangle • Opposite sides are parallel and congruent Adjust the rectangle above and satisfy yourself that this is so. • The diagonals bisect each other • The diagonals are congruent Side 1 clear Side 2 clear Use the calculator on the right to calculate the properties of a rectangle. Enter the two side lengths and the rest will be calculated. For example, enter the two side lengths. The area, perimeter and diagonal lengths will be found. Other ways to think about rectangles A rectangle can be thought about in other ways: • A square is a special case of a rectangle where all four sides are the same length. Adjust the rectangle above to create a square. • It is also a special case of a parallelogram but with extra limitation that the angles are fixed at 90°. See Parallelogram definition and adjust the parallelogram to create a rectangle. Coordinate Geometry In coordinate geometry, a rectangle has all the same properties described here, but also, the coordinates of its vertices (corners) are known. See Rectangle (Coordinate Geometry) for more. Other rectangle pages: Area of a rectangle Perimeter of a rectangle Golden rectangle Rectangle (Coordinate Geometry) Related polygon topics Types of polygon Area of various polygon types Perimeter of various polygon types Angles associated with polygons Named polygons (C) 2009 Copyright Math Open Reference. All rights reserved
{"url":"http://www.mathopenref.com/rectangle.html","timestamp":"2014-04-21T15:12:03Z","content_type":null,"content_length":"16741","record_id":"<urn:uuid:1b78da6f-9885-4169-ad05-f35b470945f6>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
I-categories and duality - PROCEEDINGS OF THE REX WORKSHOP ON SEMANTICS: FOUNDATIONS AND APPLICATIONS, VOLUME 666 OF LECTURE NOTES IN COMPUTER SCIENCE , 1998 "... Canonical solutions of domain equations are shown to be final coalgebras, not only in a category of non-standard sets (as already known), but also in categories of metric spaces and partial orders. Coalgebras are simple categorical structures generalizing the notion of post-fixed point. They are ..." Cited by 48 (10 self) Add to MetaCart Canonical solutions of domain equations are shown to be final coalgebras, not only in a category of non-standard sets (as already known), but also in categories of metric spaces and partial orders. Coalgebras are simple categorical structures generalizing the notion of post-fixed point. They are also used here for giving a new comprehensive presentation of the (still) non-standard theory of non-well-founded sets (as non-standard sets are usually called). This paper is meant to provide a basis to a more general project aiming at a full exploitation of the finality of the domains in the semantics of programming languages --- concurrent ones among them. Such a final semantics enjoys uniformity and generality. For instance, semantic observational equivalences like bisimulation can be derived as instances of a single `coalgebraic' definition (introduced elsewhere), which is parametric of the functor appearing in the domain equation. Some properties of this general form of equivalence are also studied in this paper. , 1996 "... This paper provides foundations for a reasoning principle (coinduction) for establishing the equality of potentially infinite elements of self-referencing (or circular) data types. As it is well-known, such data types not only form the core of the denotational approach to the semantics of programmin ..." Cited by 37 (3 self) Add to MetaCart This paper provides foundations for a reasoning principle (coinduction) for establishing the equality of potentially infinite elements of self-referencing (or circular) data types. As it is well-known, such data types not only form the core of the denotational approach to the semantics of programming languages [SS71], but also arise explicitly as recursive data types in functional programming languages like Standard ML [MTH90] or Haskell [HPJW92]. In the latter context, the coinduction principle provides a powerful technique for establishing the equality of programs with values in recursive data types (see examples herein and in [Pit94]). - PROC. MFPS '93, SPRINGER LNCS 802 , 1993 "... The Structural Induction Theorem (Lehmann and Smyth, 1981; Plotkin, 1981) characterizes initial F-algebras of locally continuous functors F on the category of cpo's with strict and continuous maps. Here a dual of that theorem is presented, giving a number of equivalent characterizations of final c ..." Cited by 7 (1 self) Add to MetaCart The Structural Induction Theorem (Lehmann and Smyth, 1981; Plotkin, 1981) characterizes initial F-algebras of locally continuous functors F on the category of cpo's with strict and continuous maps. Here a dual of that theorem is presented, giving a number of equivalent characterizations of final coalgebras of such functors. In particular, final coalgebras are order strongly-extensional (sometimes called internal full abstractness): the order is the union of all (ordered) F-bisimulations. (Since the initial fixed point for locally continuous functors is also final, both theorems apply.) Further a similar co-induction theorem is given for a category of complete metric spaces and locally contracting functors. - Theoretical Computer Science , 1997 "... The partial real line is an extension of the Euclidean real line with partial real numbers, which has been used to model exact real number computation in the programming language Real PCF. We introduce induction principles and recursion schemes for the partial unit interval, which allow us to verify ..." Cited by 6 (1 self) Add to MetaCart The partial real line is an extension of the Euclidean real line with partial real numbers, which has been used to model exact real number computation in the programming language Real PCF. We introduce induction principles and recursion schemes for the partial unit interval, which allow us to verify that Real PCF programs meet their specification. They resemble the so-called Peano axioms for natural numbers. The theory is based on a domain-equation-like presentation of the partial unit interval. The principles are applied to show that Real PCF is universal in the sense that all computable elements of its universe of discourse are definable. These elements include higher-order functions such as integration operators. Keywords: Induction, coinduction, exact real number computation, domain theory, Real PCF, universality. Introduction The partial real line is the domain of compact real intervals ordered by reverse inclusion [28,21]. The idea is that singleton intervals represent total rea... - In Proceedings of the Twelveth Annual IEEE Symposium on Logic in Computer Science , 1997 "... of bifree algebras ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2371310","timestamp":"2014-04-16T16:53:19Z","content_type":null,"content_length":"23394","record_id":"<urn:uuid:1885e23b-655a-4e71-bfbc-99897f02d2e0>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
Lectures on Discrete Geometry Lectures on Discrete Geometry (Jiri Matousek) Errata and remarks • P. 6, the second remark under the separation theorem, stating that the convex hull of a closed set X equals the intersection of all closed halfspaces containing X, is false, since the convex hull of a closed set need not be closed. (Noted by Uli Wagner.) • P. 7, the proof of the separation theorem (the case of general C and D, non-strict separation) doesn't work as stated. (Noted by Wolfgang Weil.) First of all, C is not contained in the union of the C_n but only in their closure. Second, and more serious, the C_n and D_n as defined in the proof need not be disjoint! (For example, take C as a union of an open halfplane H and a point p on its boundary, and D as another point q on the boundary of H, and use p and q as centers of the shrinking.) To have them disjoint, one needs to choose the center of the shrinking as a point in the relative interior. This requires more work, and with this adjustment, this proof is perhaps no longer the method of choice. Here is an outline of one possible proof. First, a standard step for simplyfing matters is noticing that separation of C from D is equivalent to separating A:=C-D={x-y: x\in C, y\in D} from {0}. Let B be the closure of A. If 0 is not in B, we are done. Otherwise, a small argument shows that 0 must be on the boundary of A, and so we can choose a sequence {x_n} of points outside B converging to 0 and verify that separating hyperplanes of B and x_n have a convergent subsequence whose limit is a separating hyperplane (containing 0 and with all of A on one side). • P. 8 bottom, the Bonnice-Klee theorem should read "Any k-interior point of conv(X) is a k-interior point of conv(Y) for some Y\subseteq X with at most max(2k,d+1) points, where x is called a k-interior point of a set C if x lies in the relative interior of the convex hull of some k+1 affinely independent points of Y." (Noted by Sergio Cabello.) • P. 9, Exercise 4(b), it is to be proved f(conv(X))=conv(f(X)). (Noted by Sergio Cabello.) • P. 9, Exercise 6(b), we need to assume that C is a closed convex cone. (Noted by Yonathan Mizrahi.) • P. 19, in Proposition 2.1.3 we want N and n to be at least 1, but for m we have to admit m=0 as well. Similarly in Exercise 2.2.5. (Noted by Marek Scholle.) • P. 21, 2nd line from below: BU should be ZU. (Noted by Sergio Cabello.) • P. 22, l. 16, Example 2.1.3 should be Proposition 2.1.3. (Noted by Sergio Cabello.) • Exercise 2.1.2, we should really have k+1 points instead of k (with k the result is valid too but the natural statement is with k+1). (Noted by Leonard Schulman.) • P. 23, the conclusion of the proof of Theorem 2.2.2 is oversimplified; the point v' need not lie in P. One needs to rewrite v'=\sum_{j=1}^{i-1}(\alpha_j+1)\gamma_j z_j + \gamma_i\alpha_i w, and reduce once more; that is, to consider v'' = \sum_{j=1}^{i-1} fract.part((\alpha_j+1)\gamma_j)z_j +\gamma_i\alpha_i w, obtaining \gamma_i=0 as before. (Noted by Sergio Cabello.) • P. 23 (what an unfortunate page!), Bibliography and Remarks, 3rd line: An assumption is omitted, namely, that C has to be a star body, i.e. a compact set containing 0 in the interior and such that x\in C implies tx\in C for all t\in[0,1]. A few lines below: "det(Lambda) > D" should be "det(Lambda) < D". (Noted by Sergio Cabello.) • P. 34, the 6-hole problem was resolved in 2005 by Tobias Gerken, who showed by a clever case analysis that any set containing 9 points in convex positions has a 6-hole, and independently by Carlos M. Nicolas. Gerken's proof was simplified by Pavel Valtr to about 4 pages (but one needs more than 9 points in convex position to start with, and hence the resulting upper bound for n(6) is worse). • P. 44, above the picture, the Fano plane is of order 2, not 3 (noted by Gabriel Nivasch). • P. 46, "Point-plane indicences", the first upper bound for point-plane incidences is attributed to Aronov and Agarwal by mistake. The result can be found in a paper by Guibas, Overmars, and Robert [Comp. Geom. Theor. Appl. 6(1996) 215-230]. (Noted by Yoshio Okamoto.) • P. 49, l. 6, "at most 8 distinct curves \gamma_{ij}" better "at most 8 of the curves \gamma_{ij}" (some of the curves may conicide) • P. 52, l.2, the constant should not be 1/4 (but rather 4^{-1/3}). (Noted by Akyoshi Shiboura.) • P. 53, l. 10 from below: 2^{r-1}/16 should be n2^{r-1}/8. (Noted by Micha Sharir and his students.) • P. 54, Exercise 4.2.1, the inequalities should be reversed (n^2 >= m and m^2 >= n). (Noted by Abbas Mehrabian.) • P. 55, in the proof of the crossing number lemma we should assume that no two edgs sharing a vertex cross in the chosen drawing of G, for otherwise, E[x']=p^4 x need not hold. For straight-line drawings this assumption is satisfied automatically, and for curvilinear drawings it is easily enforces. • P. 56, 2nd line below the picture, P should be E. (Noted by Nati Linial.) • P. 57, line 6: "closed under REMOVING edges" (instead of addding). (Noted by Uli Wagner.) • P. 70 Exercise 4.5.2, O(n\sqrt m + m) (instead of O(n\sqrt m + n)). (Noted by Sergio Cabello.) • P. 72, footnote, the number of pairs of possible unbounded rays is of order n^4, rather than n^2 (which doesn't matter for the proof). (Noted by Gabriel Nivasch.) • P. 74, above the picture, t/q+2 should be t/q+3. (Noted by Salman Parsa.) • P. 77, picture, the top vertex of the front square should be 1423 (rather than 1342). (Noted by Petr Skovron.) • P. 81 exercise 6(b), C^* equals the CLOSURE of the stated convex hull. (Noted by Uli Wagner.) • P. 84, line 2, < \sigma,\leq > 1 should be < \sigma, x >\leq 1. • P. 89, the elements 15 and 235 shouldn't be connected in the face lattice, while 35 should be connected to 235. (Noted by Yoshio Okamoto and by Tomasz Gdala.) • P. 97 picture, the labels gamma_5 and gamma_6 should be gamma_4 and gamma_5. (Noted by Petr Skovron.) • P. 98, last line of the proof of Theorem 5.4.5, the binomial coefficient should be n-k-1 choose k-1. (Noted by Micha Sharir and his students.) • P. 101, in the sentence at lines 6-8, the words "upwards" and "downwards" should be interchanged. (Noted by Yoshio Okamoto.) • P. 102, last line of the first paragraph, "simple" should be "simplicial". (Noted by Micha Sharir and his students.) • P. 108 middle, the matrix A should be (d+1)x n, instead of d x n. (Noted by Yoshio Okamoto.) • P. 123, last line of exercise 5.7.2, n^k should be n^{k+1}. (Noted by Uli Wagner.) • P. 129 Exercise 6(b), d-tuples should be (d+1)-tuples (twice). (Noted by Yoshio Okamoto.) • P. 131, as I've learned from Ricky Pollack, Oleinik and Petrovskii only proved bounds for the sum of Betti numbers (and in particular, for the number of connected components) of an algebraic set, i.e. the intersection of the zero sets of n polynomials, under the additional assumption that these zero sets are smooth hypersurfaces. Thom and Milnor obtained more general bounds but still for algebraic sets. Sign patterns were explicitely considered only later (but a reduction of this problem to the number of connected components of a suitable algebraic set is not very hard). • P. 139, Exercise 2(c): There are actually quite simple examples (which I've overlooked, unfortunately) showing that the number of intersection graphs of convex sets is at least 2^{const. n^2}. Recently Pach and Toth ("How many drawings are there?", manuscript, 2002) determined the right constant in the exponent: the number is 2^{(3/8+o(1))n^2}, for both intersection graphs of planar convex sets and intersection graphs of planar curves. • P. 144 in the displayed formula below Lemma 6.3.2, (k+1)^{-d} should be (k+1)^d. (Noted by Hiroshi Maehara.) • In the last displayed formula in the proof of Lemma 6.3.2, (r/dn) should be (r/2n). (Noted by Yoshio Okamoto.) • P. 158 top picture, there is a missing case (a trapezoid with a top side, bottom side, left vertical side but unbounded on the right. (Noted by Gabriel Nivasch.) • P. 159, above the second displayed formula from below, t(r/n) should be t(n/r). (Noted by Uli Wagner.) • P. 163, the remark on optimal cuttings for arrangements of spheres is unfounded (or confusing at best). As far as I could find, the best known decompositions for arrangements of spheres need O(n^ d log n) cells, and the corresponding (1/r)-cuttings have r^d log r cells, probably slightly suboptimal. This error of mine, unfortunately, has been propagated in the (marvellous) paper of Jozsef Solymosi and Van Ha Vu, Near optimal bounds for the Erdos distinct distances problem in high dimensions [Combinatorica 28,1 (2008): 113-125]. • P. 174, displayed equation lambda_3(n) <= 2n alpha(n) + O(sqrt(n alpha(n))), the lower-order term should be O(n sqrt(alpha(n))). (Noted by Gabriel Nivasch.) • P. 178 middle, definition of "nonrepetitive segment", the index of the last symbol a_{i+k} should be i+k-1 (noted by Pavel Valtr) • P. 182 Exercise 1, \psi_s(n,m) should be \psi_s(m,n). (Noted by Yoshio Okamoto.) In part (d), it seems that the tools given in (a)-(c) do not lead to the claimed exponent of the logarithm in a simple way. One can get some suitable exponent c(s) for every s. (Noted by Gabriel Nivasch.) • P. 203, colored Tverberg theorem: it should be said that the A_i are (d+1)-element sets. (Noted by Peter Landweber.) • P. 204 4th line of the last paragraph, A should be X. (Noted by Gabriel Nivasch.) • P. 205 line 2: T_3(X) is always nonempty for a (2d+3)-point set in dimension d. A correct formulation is "for a set X in R^d, where d is a part of input, is NP-complete." (Noted by Emo Welzl.) • P. 210, it is incorrectly claimed that EVERY centerpoint of X is contained in 2/9 of all X-triangles. Boros and Furedi showed this only for a point of the maximum depth (which may be larger than n/3 for some X). (Noted by Nabil Mustafa.) • P. 214 above the picture, "3-regular" should be "3-uniform". (Noted by Yoshio Okamoto.) • P. 218 Lemma 9.3.2: the sets C_i should also be compact, for otherwise, the implication (i) implies (ii) need not hold. (Noted by Uli Wagner.) • P. 219, line 9: "no more than 2^{d-1} halvings" should be "no more than 2^d halvings". (Noted by Gabriel Nivasch.) • P. 224, picture, the second Y_3 should be Y_4. (Noted by Salman Parsa.) • P. 226, the third line from below in the string of inequalities and equations, epsilon^k should be epsilon^{k-1}. (Noted by Salman Parsa.) • P. 236 footnote, the known result is weaker than claimed there: The assumption needed for the hardness of approximation with a better factor is not P not equal to NP. It is a stronger but still quite believable assumption about the hardness of NP-complete problems. (Noted by Yoshio Okamoto.) • P. 237, Exercise 10.1.6(b), all of the inequality signs should be flipped. (Noted by Leonard Schulman.) • P. 241, l. 6, missing curly brackets {...} in formula. (Noted by Yoshio Okamoto.) • P. 241, bottom displayed formula, 2er ln r r^{-C/4} should be 2eCr ln r r^{-C/4}. (Noted by Guy Even.) • P. 250 l. -18: the visibility regions in the cited example from [Val99b] occupy 5/9 of the gallery (rather than 1/2). (Noted by Pavel Valtr) • P. 251, 10.3.4b: a 1/r-cutting for H, not for L. (Noted by G"unter Rote.) • P.254, middle. Grini should be Grigni. (Noted by G"unter Rote.) • P. 257, the binomial coefficient {n-d+1\choose p-d+1} should be {n-d-1\choose p-d-1}, twice. (Noted by Uli Witting, Bonn.) • P. 259, Theorem 10.6.1, and next page, Lemma 10.6.2: with the current argument in the proof of the lemma, we need to assume that the sets are compact. (Noted by Yoshio Okamoto.) • P. 269, first displayed formula, the sum in the r.h.s. should be under expectation. (Noted by Gabriel Nivasch.) • P. 275, the line lambda_v in the upper picture should be drawn one "floor" lower, so that the marked vertices lie on the middle level. (Noted by Micha Sharir.) The same adjustment for the lower picture (Noted by Gabriel Nivasch.) • P. 276, end of the 2nd paragraph, the definition of d_i should be as on the previous page; that is, d_i is the number of intersections of l_i with l_j, j<i. (Noted by Gabriel Nivasch.) Later on, instead of "each line in the bundle of l_i has exactly d_i vertices of V_{m+1}" it should be "each line in the bundle of l_i has at most D_m vertices of V_{m+1}". (Noted by Guenter Rote.) • P. 280, Exercise 1(a), k+1 should be replaced by 2(k+1); k+1 counts only the directed k-edges going from left to right. The same correction also applies to the hint to Exercise 2. (Noted by Gabriel Nivasch.) • P. 285, Fig. 11.1, the horizontal segment in the leftmost devil shouldn't be there. (Noted by Gabriel Nivasch.) • P. 286, in the outline of the proof of the improved k-set bound in dimension 3, n_p should be replaced by n throughout. Moreover, for the case where the chains C_1 and C_2 do not cross, we still need to distinguish two subcases: If both C_1 and C_2 end with infinite rays on both sides, then the argument in the outline can be applied. The other case, where at least one of C_1, C_2 ends with at least one bounded edge, needs to be accounted for separately: there are at most c_p n such pairs. (Noted by Uli Wagner.) • P. 286, 6th line below the first picture, t\in {\cal T} should be T\in {\cal T}. (Noted by Yoshio Okamoto.) • P. 287, top picture - the point q^* should be on the line pq, rather than on rq. (Noted by Yoshio Okamoto.) • P. 291, Berge's Strong Perfect Graph Conjecture was proved by Chudnovsky, Robertson, Seymour, and Thomas, by graph-theoretic (as opposed to polyhedral) methods. See Chvatal's web page about it. Even more recently, a polynomial-time algorithm for recognizing perfect graphs was found by Chudnovsky, Cornuejols, Liu, Seymour, and Vuskovic (to appear in Combinatorica). • P. 298 l. 3, x should be x_1. (Noted by Bernard Lidicky.) • P. 298 bottom, in the "basic fact from measure theory" one should assume vol(X_1) finite (which is no problem in the application since the considered sets are compact). (Noted by Eva Kopecka.) • P. 294 Exercise 2: K should be a MAXIMAL clique. (Noted by Robert Samal.) • P. 300, 3rd line from the bottom: A+B should be (1-t)A+tB. (Noted by Robert Samal.) • P. 301, end of "Bibliography and Remarks", the sets should be A,B,K_3,K_4,...,K_n and the left-hand side of the inequality should be V(A,B,K_3,...,K_n)^2. (Noted by Robert Samal.) • P. 301, Exercise 2, A and B should be assumed nonenmpty. (Noted by Yoshio Okamoto.) • P. 305 bottom, c_1 equals (1/(n+1))(h(b)-h(a)) (a and b exchanged). (Noted by Robert Samal.) • P. 308 line -14 the right bar of the absolute value sign is missing. (Noted by Nati Linial.) • P. 312, 4th line below the first picture, 2\pi e/n should be \pi e/2n. (Noted by Robert Samal.) • P. 314 ex. 2, the integral should range over R^n, not R^d. (Noted by Nati Linial.) • P. 315, 5 lines below Theorem 3.1.2, c^N should be c^n. (Noted by Robert Samal.) • P. 317, the second paragraph ("By more refined consideration...") is oversimplified and confusing at best. Here is an attempt at a correct concise explanation: "By more refined consideration, one can improve the lower bound to approximately the square of the quantity just given. Having generated the query points x_1,...,x_N for B^n, we check that for K=conv {+e_1,-e_1,...,+e_n,-e_n,+x_1^ 0,-x_1^0,...,+x_N^0, -x_N^0}, with x_i^0=x_i/||x_i||, the oracle is free to give the same answers as those for B^n, and the same holds for the dual body K^*. A deep result (the inverse Blaschke-Santalo inequality) states that vol(K)vol(K^*) > c^n/n! for any centrally symmetric n-dimensional convex body K, with an absolute positive constant c, and together with Theorem 13.2.1 this yields the appropriate lower bound for vol(K^*)/vol(K). This improvement is interesting because for symmetric convex bodies it almost matches the performance of the best known algorithm." (Noted by Nati Linial.) • P. 318, third line from below, "distance at most 1/n from S'" should be "distance at most 1/n from x". (Noted by Man-Cho A. So.) • P. 319, we should (and may) also assume that N exceeds 2n, say, so that the formula defining k yields a value below n. (Noted by Robert Samal.) • P. 320, 3rd paragraph of "Bibliography", line 6, the width of the slab should be 2/||u_i||, rather than 1/||u_i||. (Noted by Yoshio Okamoto.) • P. 323 middle, treatment of the general case, we need the opposite inequality for q: q=O(n/ln(N/n)). This is an equally easy calculation. (Noted by Jeonghyeon Park.) • P. 323, last line, the convex hull of the y^{(i)} is not a crosspolytope; one should speak about +y^{(i)} and -y^{(i)} or phrase the argument a bit differently. (Noted by Nati Linial.) • P.324 line 11, "n special points" (instead of "d special points"). (Noted by Nati Linial.) • P. 326, last line, y^{(i)} should be +- y^{(i)}. (Noted by Nati Linial.) • P. 328 Exercise 3: the last inequality in (b) should be vol(E) <= max(vol(E_1),vol(E_2)), rather than vol(E) >= min(vol(E_1),vol(E_2)). (Noted by Kasturi Varadarajan.) • P. 329 l. -12, the exponent of epsilon should be +2, rather than -2. • P. 332 l. 6 (2nd line in the displayed formula), "=" should be "less or equal" (noted by Rados Radoicic) • P. 333, Prop. 14.2.1, R^d should be R^n. (Noted by Nati Linial.) • P. 339, Prop. 14.3.4, the dimension in the formula should be one smaller (ie. "-2" instead of "-1"), for otherwise, the calculation doesn't quite work. (Noted by Josef Cibulka.) • P. 340, 2nd line of the last paragraph, "-lambda^2" in the exponent should be "lambda^2". • P. 343, above Lemma 14.4.2, "2k facets" should be "2n facets". (Noted by Robert Samal.) • P. 347, Exercise 1, k should be n (in R^k and B^k). (Noted by Yoshio Okamoto.) • P. 348, Exercise 1, "Hammer" should be "Hanner" (my apologies to him). • P. 350 lines 1-2, K should be \tilde K. (Noted by Yoshio Okamoto.) • For a new survey on the topics of Chapter 15 see a handbook chapter by Piotr Indyk and me. Many open problems, with updates on recent progress, are listed in Open problems on embeddings of finite metric spaces. • P. 361, notes to Section 15.2, 2nd paragraph from below, "16/epsilon^2" should be "4/epsilon^2". (Noted by Piotr Indyk.) • P. 363, above the picture: "...is a tree" is not correct; this subgraph contains the tree but there may be extra edges connecting the leaves (which doesn't affect the argument, though). • P. 371, bottom displayed formula, E_1 and F_1 should be interchanged in the right inequality. (Noted by Yoshio Okamoto.) • P. 374, formula (15.3), ||x|| at the end should be squared. (Noted by Petr Kolman.) • P. 376 l. 11, "\mu_2/n" should be "n/\mu_2". (noted by Rados Radoicic.) • P. 378 top, phi and eta are interchanged (in the definition, on the line below the 2nd displayed formula, and 4 lines below). Moreover, the D^2 factor in the definition of x_{uv} should be present in the second case and not in the first, and the inequality in the next line should be \rho^2(\eta)-D^2\rho^2(\varphi)\geq 0. Other ways of fixing the confusion with eta and phi are possible as well. (Noted by Mikhail Ostrovskii.) • P. 383 exercise 15.5.6, the sets A and B should form a partition of V, i.e., B=V\A. (Noted by Gabriel Katz.) • P. 385, 3rd paragraph, 3rd line, max should be min (in the definition of f_i(u)). (Noted by Yoshio Okamoto.) • P. 389, proof of Lemma 15.7.2: r_j should really be defined as the minimum of r_q and of the current definition. The argument as written works only for j with r_{j-1}< r_q, but for the other j we have Delta_j=0. (Noted by Uli Wagner.) • P. 400, Exercise 15.7.9(a) last line, min should be max. (Noted by Yoshio Okamoto.) • P. 407, Dvoretzky is misspelled (with "s" instead of "z"). • P. 409, hint to Exercise 1.3.5(c), the number should be \sqrt{d/(2(d+1))} instead of \sqrt{2d/(d+1)}.(Noted by Jie Gao.) • P. 409, hint to Exercise 2.1.4(c), n^{d+1} should be n^{-(d+1)}. (Noted by Sergio Cabello.)
{"url":"http://kam.mff.cuni.cz/~matousek/dg-err.html","timestamp":"2014-04-19T22:20:25Z","content_type":null,"content_length":"21361","record_id":"<urn:uuid:e0272df9-f1f6-42e5-8bff-9d4d232f9683>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00288-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Tim on Wednesday, April 30, 2008 at 8:40pm. Can someone tell me if I am right? I need to find the equation of a parabola. I am not sure if this is the correct way or if I have the correct answer. Vertex at origin, focus is at (0,3) Equation of parabola vertex (0,0) focus (0,3) focus (h,k+p) directrix y=-3 a=(-1/12) h=0 k=0 my answer is: y=(-1/12)(x^2)+0 • Algebra - Reiny, Wednesday, April 30, 2008 at 9:09pm shouldn't your parabola open upwards ? your -1/12x^2.... makes it open down. I do it this way: let P(x,y) be any point on the parabola by definition, the distance from P to the focal point is equal to the distance to the directrix ---> √(x^2 + (y-3)^2) = √(y+3)^2) square both sides, and simplify to get y = (1/12)x^2 Related Questions algebra problem - I need to find the equation of a parabola. I am not sure if ... college algebra--need help please!! - solve the following exponential equation. ... Algebra II - In this set of eight problems, we have to match each equation with ... algebra - 6. Find the equation of each parabola described below. a) parabola ... Algebra - I'm trying this again, I'd like the quadratic form, so i can then work... Calculus - Great! I understand now that C0=f(a) c1=f'(a) c2=f"(a)/2 Now, If I am... Algebra: Parabolas - I have a math problem that requires me to find the focal ... Algebra - Could someone correct my answers and help with the other problems? 1. ... Albebra - Find the vertex of the function y = –x2 + 2x + 8. It is where the ... Algebra - Hi, Any help on this would be very much appreciated. The graph y=3x^2+...
{"url":"http://www.jiskha.com/display.cgi?id=1209602437","timestamp":"2014-04-20T05:13:40Z","content_type":null,"content_length":"8908","record_id":"<urn:uuid:e44e4648-afe2-4052-a041-213eba0dc911>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
Minnesota Math Standards - 1st Grade MathScore EduFighter is one of the best math games on the Internet today. You can start playing for free! Minnesota Math Standards - 1st Grade MathScore aligns to the Minnesota Math Standards for 1st Grade. The standards appear below along with the MathScore topics that match. If you click on a topic name, you will see sample problems at varying degrees of difficulty that MathScore generated. When students use our program, the difficulty of the problems will automatically adapt based on individual performance, resulting in not only true differentiated instruction, but a challenging game-like experience. Want unlimited math worksheets? Learn more about our online math practice software. View the Minnesota Math Standards at other levels. Mathematical Reasoning Apply skills of mathematical representation, communication and reasoning throughout the remaining four content strands. 1. Create and solve word problems using actions, objects, words, pictures or numbers. (Basic Word Problems ) 2. Estimate and check that answers are reasonable. 3. Explain to others how a problem was solved. Number Sense, Computation and Operations A. Number Sense Understand place value, ways of representing whole numbers and relationships among whole numbers. Understand the concept of one half. 1. Read, write numerals for, compare and order numbers to 120. (Counting Squares , Place Value to 1000 ) 2. Count by 2s to 30 and by 5s to 120. (Skip Counting ) 3. Count backwards from 30. 4. Demonstrate understanding of odd and even quantities up to 12. (Odd or Even ) 5. Represent whole numbers up to 20 in various ways, maintaining equality. 6. Identify one half of a set of concrete objects. B. Computation and Operation Add and subtract one-digit whole numbers in real-world and mathematical problems. 1. Use one-digit addition and subtraction to solve real-world and mathematical problems. (Fast Addition , Fast Addition Reverse , Fast Subtraction , Basic Word Problems ) 2. Find the sum of three one-digit numbers. (Addition Grouping ) Patterns, Functions and Algebra A. Patterns and Functions Sort, classify and compare objects based on their attributes. Understand repeating patterns. 1. Sort, classify, and compare objects in a set in more than one way. 2. Recognize, describe, and extend repeating patterns involving up to four elements. (Patterns: Shapes ) B. Algebra (Algebraic Thinking) (Standards under this heading may be locally determined.) Data Analysis, Statistics and Probability A. Data and Statistics Gather and record data in real-world and mathematical problems. 1. Gather and record data about classmates and their surroundings in a simple graph. 2. Identify patterns in simple graphs. (Line Graphs ) B. Probability (Standards under this heading may be locally determined.) Spatial Sense, Geometry and Measurement A. Spatial Sense Explore the concept of symmetry in real-world situations. 1. Explore symmetry of objects and designs through mirrors or paper folding. (Requires outside materials ) B. Geometry Use attributes of two- and three-dimensional shapes to identify them and distinguish between them. 1. Sort and describe two- and three-dimensional shapes according to their geometrical attributes. (Geometric Shapes ) C. Measurement Measure length, time, and money using appropriate tools or units to solve real-world and mathematical problems. 1. Estimate and measure length and capacity using non-standard units. 2. Tell time to hour and half-hour on analog and digital clocks. 3. Using a calendar, identify the date, day of the week, month, year, yesterday, today and tomorrow. 4. Combine pennies, nickels or dimes to equal one dollar. (Counting Money ) Learn more about our online math practice software.
{"url":"http://www.mathscore.com/math/standards/Minnesota/1st%20Grade/","timestamp":"2014-04-17T00:48:31Z","content_type":null,"content_length":"9799","record_id":"<urn:uuid:d76d6729-3c76-4f9d-86b3-1357203c66cc>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00404-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - DFT and Graphene Hello all, I have read a few papers lately that have used DFT based techniques to investigate metallic adatom adsorption on top of graphene (see for instance PRB 77, 235430 2008). I was under the impression that electrons in graphene are described by the Dirac equation and not the Schrodinger equation. How then can the electronic properties be accurately described using DFT techniques based in the Shcrodinger equation. Thanks in advance
{"url":"http://www.physicsforums.com/showpost.php?p=2120675&postcount=1","timestamp":"2014-04-18T15:52:02Z","content_type":null,"content_length":"8850","record_id":"<urn:uuid:44d96456-e025-4b79-a202-8c286b8247d4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: What is the equation of the line that passes through the points (−2, 1) and (1, 10)? 3x − y = −7 3x − y = −5 3x − y = 5 x + 3y = −5 • one year ago • one year ago Best Response You've already chosen the best response. plug those points into each equation and see which one holds true. Best Response You've already chosen the best response. |dw:1350777908333:dw| you coud graph it, or get the slope -> 10-1/1--2 (or 1+2) and get 9/3 which is 3 so the slope is 3. Plug in the #s to the point slope equation--- y-1=3(x+2) and simplify to y-1=3x+6 and get y=3x+5 and put it into standard form-- -3x+y=5 and then multiply everything by -1 to get 3x-y=-5, so the second answer is correct! TAH DAH, MATH Best Response You've already chosen the best response. If you need a fast solution, yes, just test the equalities at each point. However, if you want to be more careful, you can just derive the equation of the line in question. \[m=\frac{10-1}{1--2}= \frac{9}{3}=3\\ y-y_0=m(x-x_0)\\ y-10=3(x-1)\\ -10=3x-3-y\\ 3x-y=-7 \] Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/508339dde4b0dab2a5ec1461","timestamp":"2014-04-18T23:18:20Z","content_type":null,"content_length":"48780","record_id":"<urn:uuid:627e4121-e2bd-42a6-967b-76255f73ec64>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
compute the depth value explicitly [Archive] - OpenGL Discussion and Help Forums Hi all, For some reason, I need to compute the depth values in the main memory and upload it into the Z buffer. Could anybody tell me how openGL computes the depth value given the z coordinate (world coordinate). I am using orthogonal projection. I know this question might have been asked before, but I really couldn't find what I need.It would be greatly appreciated if anyone could help me with this.
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-144107.html","timestamp":"2014-04-17T18:51:18Z","content_type":null,"content_length":"5382","record_id":"<urn:uuid:b0073dd3-f07a-44ee-aae9-d38d0fb82cc5>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
I want nothing more than to take calculus 06-01-2003 #31 Originally posted by Silvercord ygf I think I understand everything you've said in at least the broadest terms. That's what I intended. I'm not getting into specifics here... for sanity's sake. That stuff you can learn on your own. Are integrals used to find the area/volume of any weird forumla Yep. Instead of adding up rectangles, you add up very flat discs. Or squares. Or what-have-you. It seems that integrals are only an approximation. Sometimes. But think about this: f(t) = t + 3 The slope of that line, it's derivative, is 1. f'(t) = 1 f'(t) just means the derivative of f(t) Say I wanted to find out the area under f'(t) from t=0 to t=3. Since the integral is the area under the curve, and the integral is opposite the derivative: f(t) is the integral of f'(t) f(3) - f(0) = 6 - 3 = 3 That's exact. No approximation there. For polynomials you often don't need approximations because some formula will give you the exact answer. Sometimes you need to approximate because there is no other way. (Logs are approximations.) That's exact. No approximation there. For polynomials you often don't need approximations because some formula will give you the exact answer. Sometimes you need to approximate because there is no other way. (Logs are approximations.) Well, sort of. An answer in the form of (log x) or (sin x) or something of that nature is considered to be exact, though a numerical approximation may be more useful from a practical standpoint. There are, however, functions that have no closed form integral - that is, their integral cannot be expressed in terms of elementary functions, and therefore, the integrals can only be One that comes to mind in one-variable calculus is the integral of e^(x^2) dx. The word rap as it applies to music is the result of a peculiar phonological rule which has stripped the word of its initial voiceless velar stop. Is the product rule limited to two multiplying terms? Ok, suppose something like x^3 * y^3 can be expressed as xyxyxy. By using the product rule (without simplifying) directly onto it, can it be derived? I AM WINNER!!!1!111oneoneomne Is the product rule limited to two multiplying terms? Ok, suppose something like x^3 * y^3 can be expressed as xyxyxy. By using the product rule (without simplifying) directly onto it, can it be derived? Yes, the product rule is a binary operation. So you have to split xyxyxy up into x(yxyxy) and then split the part in the parentheses and so on, or something in the same nature. So your best bet is to just express it as x^3 * y^3. Oh i see, thanks. Another qu, what is the integral of 1/x dx ? I know its something like y = loge x or something (not sure what it is exactly) but i dunno why.. First of all, what is e. I know its used heaps in logs, but its just some number originating from an infinite series of 1/1 + 1/2! + 1/3! ... etc but what advantage does it have in terms of logs? and umm, yeah, the integral question. How does 1/x become loge x? I AM WINNER!!!1!111oneoneomne Yeah, I agree that the hardest part about calculus is knowing the trig identities... Well, to *do* differentiation/integration you certainly need to know trig identities. But just to *understand* the concept of a derivative or integral, you don't need that. Identities can always be looked up in a book, and if the frequency of your usage warrants it, you'll remember them without trying. IMHO it's wrong to just emphasize facts which can be looked up in a book. Originally posted by Panopticon Oh i see, thanks. Another qu, what is the integral of 1/x dx ? I know its something like y = loge x or something (not sure what it is exactly) but i dunno why.. First of all, what is e. I know its used heaps in logs, but its just some number originating from an infinite series of 1/1 + 1/2! + 1/3! ... etc but what advantage does it have in terms of logs? and umm, yeah, the integral question. How does 1/x become loge x? Well, what function has a derivative of 1/x? You already have 1/x therefore, you need to find the antiderivative of it. This is ln(x). Originally posted by Silvercord I'm starting from the beginning, and I'm reading about limits and how to find the slope at a point using lines that are secant to the curve. It's probably a typo, but you meant tangent lines. Originally posted by Major_Small if you can figure out: assuming x==489488436498899878648897898 then you're good for calculus... btw... the answer is around 28... (limits) There's another way to do this without using L'Hospital's rule (yeah, I believe you have to stick the s in his name when you don't have the funny accent over the o...darn french) X in his example is insanely large. When X gets insanely large in a polynomial divided by another polynomial like that, only the terms raised to the highest power really affect it at all. This *technically* only works with X=infinity, but it'll do well enough here if you're not looking for an exact answer. "Simplify" the given problem to (56x^6)/(2x^6). (x^6) divides out, and you are left with 56/2, which equals 28. Tada. Originally posted by Panopticon Oh i see, thanks. Another qu, what is the integral of 1/x dx ? I know its something like y = loge x or something (not sure what it is exactly) but i dunno why.. First of all, what is e. I know its used heaps in logs, but its just some number originating from an infinite series of 1/1 + 1/2! + 1/3! ... etc but what advantage does it have in terms of logs? and umm, yeah, the integral question. How does 1/x become loge x? OK, here's a simple proof that the differential of ln(x) is 1/x, which hence proves that the integral of 1/x is ln(x) (the integral is the antiderivative). First, we must define e^x. e^x is defined as the function which is its own derivative, so d(e^x)/dx = e^x. Now we notice that ln(x)is defined as the inverse function of e^x, so: ln(e^x) = x. Now we ask ourselves what the derivative of ln(x) is. Well to start, we will let x = e^y for some y. We also know that ln(e^y) = y, differentiating this on both sides with respect to y and we get: d(ln(e^y))/dy = d(y)/dy = 1 Then (x = e^y), d(ln(x))/dy = 1 d(ln(x))/dx * dx/dy = 1 But dx/dy = d(e^y)/dy = e^y = x, (by our original definition of y (x = e^y)). So d(ln(x))/dx * x = 1 and d(ln(x))/dx = 1/x. There's your proof. Richard Hayden. (rahaydenuk@yahoo.co.uk) Webmaster: http://www.dx-dev.com DXOS (My Operating System): http://www.dx-dev.com/dxos PGP: 0x779D0625 It's probably a typo, but you meant tangent lines. Well, you take a secant line, and let the points of intersection get closer and closer, and let them approach 0 (i.e. the limit). That limit is the tangent line, but you can't have the tangent line without the secants. There's another way to do this without using L'Hospital's rule (yeah, I believe you have to stick the s in his name when you don't have the funny accent over the o...darn french) X in his example is insanely large. When X gets insanely large in a polynomial divided by another polynomial like that, only the terms raised to the highest power really affect it at all. This *technically* only works with X=infinity, but it'll do well enough here if you're not looking for an exact answer. "Simplify" the given problem to (56x^6)/(2x^6). (x^6) divides out, and you are left with 56/2, which equals 28. Tada. Still L'Hospital's rule (the 's' is just terrible)... It just uses a bit of common sense to solve more quickly. The word rap as it applies to music is the result of a peculiar phonological rule which has stripped the word of its initial voiceless velar stop. Oh.... I don't get it cos I know jack about logs. But thanks anyway. I guess ill have to wait till my class is up to there or ask my tutor. I AM WINNER!!!1!111oneoneomne It's probably a typo, but you meant tangent lines. Nope, the book specifically says you use secant lines until you get to the limit, and the slope at the limit is the slope at that point (the tangent) oops, Zach already said this above : Well, you take a secant line, and let the points of intersection get closer and closer, and let them approach 0 (i.e. the limit). That limit is the tangent line, but you can't have the tangent line without the secants. The great thing about the limit is being able to simulate exactness without any of the nasty side-effects (like division by zero). When X gets insanely large in a polynomial divided by another polynomial like that, only the terms raised to the highest power really affect it at all. This *technically* only works with X= infinity, but it'll do well enough here if you're not looking for an exact answer. It is the exact answer. (AFAIK) One thing to keep in mind about limits is that they are not approximations. Even though they are not exact numbers, they are infinitely close. By "infinitely close" I mean that it can be proven that no matter how close to the number 'x' is, f(x) is also close to it. Well, it still is an approximation in that case since we're evaluating x at a ridiculously large number, not takeing the limit as it goes to infinity. Also remember that the limit of the function as the independent variable approaches a point is not necessarily equal to the function at that point (jump or point discontinuities for example). The word rap as it applies to music is the result of a peculiar phonological rule which has stripped the word of its initial voiceless velar stop. Well, it still is an approximation in that case since we're evaluating x at a ridiculously large number, not takeing the limit as it goes to infinity. The situation for that problem is for x as it approaches infinity. x != infinity because infinity is not a real number (for any real value infinity, there's a higher one that must be infinity There's no point to mess around with rediculously high values when you have limits. The whole point of using limits is to prove a problem for any value sufficiently large. Approximations can only guess at the answer. Another way of looking at that problem is to divide every term by x^6, the highest degree factor. Since anything divided by infinity is 0 (as far as limits go), you're left with 56/2 == 28. Also remember that the limit of the function as the independent variable approaches a point is not necessarily equal to the function at that point (jump or point discontinuities for example). 06-01-2003 #32 06-01-2003 #33 06-02-2003 #34 06-02-2003 #35 06-02-2003 #36 Registered User Join Date May 2003 06-02-2003 #37 ¡Amo fútbol! Join Date Dec 2001 06-02-2003 #38 06-02-2003 #39 06-02-2003 #40 06-02-2003 #41 06-03-2003 #42 Join Date Jan 2003 06-04-2003 #43 06-04-2003 #44 06-04-2003 #45
{"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/40059-i-want-nothing-more-than-take-calculus-3.html","timestamp":"2014-04-18T04:37:40Z","content_type":null,"content_length":"110221","record_id":"<urn:uuid:bd6bfb5a-71cb-4346-85c3-9ed53c9df913>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
The weighted words collector Jérémie du Boisberranger, Danièle Gardy, Yann Ponty We consider the word collector problem, i.e. the expected number of calls to a random weighted generator before all the words of a given length in a language are generated. The originality of this instance of the non-uniform coupon collector lies in the, potentially large, multiplicity of the words/coupons of a given probability/composition. We obtain a general theorem that gives an asymptotic equivalent for the expected waiting time of a general version of the Coupon Collector. This theorem is especially well-suited for classes of coupons featuring high multiplicities. Its application to a given language essentially necessitates knowledge on the number of words of a given composition/probability. We illustrate the application of our theorem, in a step-by-step fashion, on four exemplary languages, whose analyses reveal a large diversity of asymptotic waiting times, generally expressible as κ·mp ·(logm)q·(loglogm)r, for m the number of words, and p, q, r some positive real Full Text: PostScript PDF
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/proceedings/article/viewArticle/dmAQ0120","timestamp":"2014-04-17T12:39:35Z","content_type":null,"content_length":"11229","record_id":"<urn:uuid:5f9b1afa-9277-4345-b246-e674430ada2f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Newcastle, WA Math Tutor Find a Newcastle, WA Math Tutor I am a former middle school and high school teacher. I began in the middle schools in the US teaching math, English, science, and PE. Then, I worked overseas for two years preparing high school juniors and seniors to take Cambridge O-Level Exams with math and English. 39 Subjects: including algebra 1, algebra 2, grammar, linear algebra ...Draw that many circles. Take the second number- put that many dots in each circle. Now count the dots and that's the answer.' In 1 hour- every kid could multiply, many kids figured out short cuts, and one girl even came in the next day with a page full of every combination of 1x 1 all the way to 20 x 20! 17 Subjects: including calculus, elementary (k-6th), special needs, college counseling ...When I work with students, I am aiming for more than just good test scores - I will build confidence so that my students know that they know the material. Math is my passion, not just what I majored in. I am strong in the following subjects: Basic Math, Pre-Algebra, Basic Algebra, Geometry, Advanced Algebra, Trigonometry, Pre-Calculus, Calculus, and Linear Algebra. 8 Subjects: including calculus, linear algebra, algebra 1, algebra 2 ...I like to give students practice and help them find the answers in the book and resources that have been given to them. I will also provide additional suggestions for websites where they can obtain additional information for their coursework. I can help students be successful at mastering their mathematics subjects.I have taught Algebra at the high school level and college level. 7 Subjects: including algebra 1, algebra 2, calculus, geometry ...I look forward to the point where students are able to teach me new things. I currently offer a reduced rate for grade-school subjects in math and science. Regards, GlennAlgebra, is one of the broadest parts of mathematics, along with geometry, trigonometry, number theory, calculus, etc. to work and study science. 45 Subjects: including geometry, differential equations, statistics, discrete math
{"url":"http://www.purplemath.com/Newcastle_WA_Math_tutors.php","timestamp":"2014-04-18T23:57:37Z","content_type":null,"content_length":"24122","record_id":"<urn:uuid:19a2f6bf-63bd-4044-9e12-28e465a5884b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] arange and floating point arguments Robert Kern robert.kern@gmail.... Fri Sep 14 16:51:27 CDT 2007 Joris De Ridder wrote: >> the question is how to reduce user astonishment. > IMHO this is exactly the point. There seems to be two questions here: > 1) do we want to reduce user astonishment, and 2) if yes, how could > we do this? Not everyone seems to be convinced of the first question, > replying that in many cases linspace() could well replace arange(). > In many cases, yes, but not all. For some cases arange() has its > legitimate use, even for floating point, and in these cases you may > get bitten by the inexact number representation. If Matlab seems to > be able to avoid surprises, why not numpy? Here's the thing: binary floating point is intrinsically surprising to people who are only accustomed to decimal. The way to not be surprised is to not use binary floating point. You can hide some of the surprises, but not all of them. When you do try to hide them, all you are doing is creating complicated, ad hoc behavior that is also difficult to predict; for those who have become accustomed to binary floating point's behavior, it's not clear what the "unastonishing" behavior is supposed to be, but binary floating point's is well-defined. Binary floating point is a useful tool for many things. I'm not interested in making numpy something that hides that tool's behavior in order to force it into a use it is not appropriate for. Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-September/029215.html","timestamp":"2014-04-19T06:53:30Z","content_type":null,"content_length":"4386","record_id":"<urn:uuid:63cb9917-f779-4ebf-a1a0-f55cf1b438e1>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Paul Langacker Theoretical Elementary Particle Physics Born 1946 B.S. M.I.T. (1968) Ph.D. University of California at Berkeley (1972) Emeritus Professor of Physics (University of Pennsylvania) Senior Scientist, Princeton University Visitor, Institute for Advanced Study School of Natural Sciences Einstein Drive Princeton, NJ 08540 Phone: (609) 734-8076 Email: pgl@ias.edu In the last 30 years, there has been a tremendous advance in our understanding of the elementary particles and their interactions. We now have a mathematically consistent theory of the strong, electromagnetic, and weak interactions-the standard model-most aspects of which have been successfully tested in detail at colliders, accelerators, and non-accelerator experiments. It also provides a successful framework and has been strongly constrained by many observations in cosmology and astrophysics. The standard model is almost certainly an approximately correct description of Nature down to a distance scale 1/1000^th the size of the atomic nucleus. However, nobody believes that the standard model is the ultimate theory-it is too complicated and arbitrary and leaves too many questions unanswered. Therefore, most current activity is directed towards discovering the new physics which must underly the standard model. One approach, exemplified by superstring theories, is to try to develop a "theory of everything". However, promising ideas involve incredibly short distance scales, and it is challenging to make contact with accessible energies at present. Another direction is to build larger accelerators to directly search for new particles and interactions, and much attention is focussed on the Large Hadron Collider (LHC), which is currently starting up. An important complementary approach is to subject the standard model to diverse high-precision tests to determine how good it is and where it might break down. Finally, there are probes from the important overlaps of particle physics with cosmology and astrophysics. All of these approaches are likely to be essential in understanding what underlies the standard model. It is my hope that within the coming decades we will be able to combine progress in these directions to develop and test the essential aspects of a new standard model of everything, which incorporates the microscopic interactions, quantum gravity, and the origin of the Universe, and describes Nature down to the Planck scale, ∼ 10^−33 cm. My research has been directed towards the theoretical interpretation of the various experimental and observational probes, and the phenomenological implications of fundamental theories-i.e., in connecting theory and experiment. For over three decades much of my effort has involved the interpretation of high-precision tests of the standard model, and has exploited the fact that the global analysis of many experiments often yields more information than the individual experiments alone. Such global analyses involve collecting the data, deriving uniform and accurate theoretical formulas to interpret it, developing expressions for the possible effects of new physics, and fitting the data to search for or set limits on new physics. A related effort has been a study of the existing constraints, physics and cosmological implications, and the discovery/ diagnostic potential at present and future colliders for classes of extensions of the standard model and its minimal supersymmetric extension, especially those types that frequently occur as theory-motivated remnants of specific superstring constructions. These include additional possible gauge bosons (especially heavy Z' s), extended Higgs/neutralino sectors, and exotic fermions (with non-standard weak interactions). Other recent work has involved possibilities for studying CP violation at the LHC and new methods for mediating supersymmetry breaking between the ordinary and a quasi-hidden sector. Other phenomenological areas in which I worked extensively in the past include chiral symmetry breaking and its consequences, and theoretical models for neutrino mass and their laboratory, astrophysical, and cosmological implications. In terms of more fundamental theory, I have had a long standing interest in grand unified theories and their consequences. More recently, I have become interested in the phenomenological consequences of semi-realistic superstring compactifications. My collaborators and I have examined classes of string constructions and specific models, with emphasis on their consequences for the masses of the quarks and charged leptons, standard and non-standard mechanisms for generating small neutrino masses, and the most likely types of observable new physics at the TeV scale. I have written an advanced textbook, The Standard Model and Beyond, published by the CRC press, December 2009. Professional Activities • Editorial Committee, Annual Reviews of Nuclear and Particle Science, 2012-. • Co-convener, Workshop on First Two Years of the LHC, Kavli Institute for Theoretical Physics in China, Beijing, Summer, 2012. • Co-convener, Workshop on Strings at the LHC and in the Early Universe, Kavli Institute for Theoretical Physics, Santa Barbara, Spring, 2010. • Associate Editor, Reviews of Modern Physics, 2007-. • Member, organizing committee for International Conference on High Energy Physics, Philadelphia, 2008 (ICHEP2008). • Member, Organizing Committee, String Phenomenology, 2008, Philadelphia, May, 2008. • Chair, Steering Committee for the LHC Theory Initiative, 2006-. • Member, HEPAP subpanel on the University Grants Program, 2006-2007. • LangackerFest, 60th birthday celebration, University of Pennsylvania, 2006 • Henry Primakoff Lecturer, University of Pennsylvania, 2006 • Fermilab Frontier Fellow, 2005 • Member, High Energy Physics Advisory Panel (HEPAP) of the Department of Energy and National Science Foundation (2002-2005). • Co-organizer, Aspen Winter Conference At the Frontiers of Particle Physics, 2003. • Keck Distinguished Visiting Professor, Institute for Advanced Study, 2001 • Chair, Department of Physics and Astronomy, University of Pennsylvania (1/96-6/01) • William Smith Term Professor of Physics, University of Pennsylvania (7/93-6/98) • Fellow, American Physical Society and AAAS • Divisional Associate Editor, Physical Review Letters (1998-2004) • Member, Particle Data Group (1998-) • Co-chair, SUSY 97, Philadelphia (1997) • Co-convener, Workshop on Unification: From the Weak Scale to the Planck Scale, Institute for Theoretical Physics, Santa Barbara (1995) • Editorial Board, Physical Review (1986-88, 1991-93) • Executive Committee, Division of Particles and Fields of the American Physical Society (1989-1991) • Scientific Director, Theoretical Advanced Study Institute (TASI) (1990 and 1998) • Alexander von Humboldt Prize (1988) Selected Publications 1. Light Quark Mass Spectrum in Quantum Chromodynamics (with H. Pagels), Phys. Rev. D19, 2070-2079 (1979). 2. Magnetic Monopoles in Grand Unified Theories (with S.-Y. Pi), Phys. Rev. Lett. 45, 1-4 (1980). 3. Grand Unified Theories and Proton Decay, Physics Reports 72, 185-385 (1981). 4. A Comprehensive Analysis of Data Pertaining to the Weak Neutral Current and the Intermediate Vector Boson Masses (with U. Amaldi, A. Böhm, L. S. Durkin, A. K. Mann, W. J. Marciano, A. Sirlin, and H. H. Williams), Phys. Rev. D36, 1385 (1987). 5. Unification of Two Fundamental Forces (with A. K. Mann), Physics Today 42, # 12, p. 22 (1989). 6. High Precision Electroweak Experiments: A Global Search for New Physics Beyond the Standard Model (with M. Luo and A. K. Mann), Rev. Mod. Phys. 64, 87 (1992). 7. Implications of Precision Electroweak Experiments for m[t], rho [0], sin^2theta [W] , and Grand Unification (with M. Luo), Phys. Rev. D44, 817 (1991). 8. W and Z Physics, in TeV Physics, ed. T. Huang et al., (Gordon and Breach, Philadelphia, 1991), p. 53. 9. Editor (with M. Cvetic) of Testing the Standard Model (Proceedings of TASI-90), (World, Singapore, 1991). 10. Constraints on Additional Z Bosons (with M. Luo), Phys. Rev. D45, 278 (1992). 11. Uncertainties in Coupling Constant Unification (with N. Polonsky), Phys. Rev. D47, 4028 (1993), hep-ph/9210235. 12. Five Phases of Weak Neutral Current Experiments From the Perspective of a Theorist, Discovery of Weak Neutral Currents: The Weak Interaction Before and After, ed. A. K. Mann and D. B. Cline, AIP Conference Proceedings 300 (AIP, New York, 1994), p. 289, hep-ph/9305255. 13. Astrophysical Solutions are Incompatible with the Solar Neutrino Data (with S. Bludman and N. Hata), Phys. Rev. D49, 3622 (1994), hep-ph/9306212. 14. Editor of Precision Tests of the Standard Electroweak Model, (World, Singapore, 1995), and articles on pp 1, 15, 883. 15. The Strong Coupling, Unification, and Recent Data (with N. Polonsky), Phys. Rev. D52, 3081 (1995), hep-ph/9503214. 16. Big Bang Nucleosynthesis in Crisis (with N. Hata, R. Scherrer, G. Steigman, D. Thomas, T. Walker, and S. Bludman), Phys. Rev. Lett. 75, 3977 (1995), hep-ph/9505319. 17. Implications of Abelian Extended Gauge Structures From String Models (with M. Cvetic), Phys. Rev. D54, 3570 (1996), hep-ph/9511378. 18. Phase Transitions and Vacuum Tunneling Into Charge and Color Breaking Minima in the MSSM (with A. Kusenko and G. Segre), Phys. Rev. D54, 5824 (1996), hep-ph/9602414. 19. Electroweak Breaking and the Mu Problem in Supergravity Models with an Additional U(1) (with M. Cvetic, D. A. Demir, J. R. Espinosa, and L. Everett), Phys. Rev. D56, 2861 (1997), hep-ph/9703317. 20. Solutions to Solar Neutrino Anomaly (with N. Hata), Phys. Rev. D56, 6107 (1997), hep-ph/9705339. 21. Z' Physics and Supersymmetry (with M. Cvetic), in Perspectives on Supersymmetry, ed. G. Kane (World, Singapore, 1998), p312, hep-ph/9707451. Revised version in the reprinted edition, 2010. 22. Classification of Flat Directions in Perturbative Heterotic Superstring Vacua with Anomalous U(1) (with G. Cleaver, M. Cvetic, J. R. Espinosa, and L. Everett), Nucl. Phys. B525, 3 (1998), hep-th/ 23. Editor (with M. Cvetic), SUSY '97, Proceedings of the Fifth International Conference on Supersymmetries in Physics, Nucl. Phys. B (Proc. Suppl.) 62 (1998) (North Holland, 1998). 24. Physics Implications of Flat Directions in Free Fermionic Superstring Models I: Mass Spectrum and Couplings (with G. Cleaver, M. Cvetic, J. R. Espinosa, L. Everett, and J. Wang), Phys. Rev. D59, 055005 (1999), hep-ph/9807479. 25. Physics Implications of Flat Directions in Free Fermionic Superstring Models II: Renormalization Group Analysis (with G. Cleaver, M. Cvetic, J. R. Espinosa, L. Everett, and J. Wang), Phys. Rev. D59, 115002 (1999), hep-ph/9811355. 26. Flavor Changing Effects in Theories with a Heavy Z' Boson with Family Non-Universal Couplings (with M. Pluemacher), Phys. Rev. D62, 013006 (2000), hep-ph/0001204. 27. Editor, Neutrinos in Physics and Astrophysics: From 10^−33 to 10^+28 cm, (Proceedings of TASI-98), (World, Singapore, 2000). 28. Implications of gauge unification for time variation of the fine structure constant (with G. Segre and M. J. Strassler), Phys. Lett. B 528, 121 (2002), hep-ph/0112233. 29. The Z−Z' Mass Hierarchy in a Supersymmetric Model with a Secluded U(1)' -Breaking Sector (with J. Erler and T. Li), Phys. Rev. D 66, 015002 (2002), hep-ph/0205001. 30. Phenomenology of A Three-Family Standard-like String Model (with M. Cvetic and G. Shiu), Phys. Rev. D 66, 066004 (2002), hep-ph/0205252. 31. No-go for detecting CP violation via neutrinoless double beta decay (with V. Barger, S. L. Glashow, and D. Marfatia), Phys. Lett. B 540, 247 (2002), hep-ph/0205290. 32. Primordial nucleosynthesis constraints on Z' properties (with V. Barger and H. S. Lee), Phys. Rev. D 67, 075009 (2003), hep-ph/0302066. 33. Dynamical supersymmetry breaking in standard-like models with intersecting D6-branes (with M. Cvetic and J. Wang), Phys. Rev. D 68, 046002 (2003), hep-th/0303208. 34. Electroweak baryogenesis in a supersymmetric U(1)' model, (with J. Kang, T. j. Li, and T. Liu), Phys. Rev. Lett. 94, 061801 (2005), hep-ph/0402086. 35. The Higgs sector in a U(1)' extension of the MSSM, (with T. Han and B. McElrath), Phys. Rev. D 70, 115006 (2004), hep-ph/0405244. 36. D6-brane splitting on type IIA orientifolds, (with M. Cvetic, T. J. Li, and T. Liu), Nucl. Phys. B 709, 241 (2005), hep-th/0407178. 37. Neutrino physics (theory), ICHEP 2004, ed. H. Chen et al., (World Singapore, 2005), p198, hep-ph/0411116. 38. Neutrino masses in supersymmetric SU(3)[C] ×SU(2)[L] ×U(1)[Y] ×U(1)' models, (with J. H. Kang and T. J. Li,), Phys. Rev. D 71, 015012 (2005), hep-ph/0411404. 39. Z' discovery limits for supersymmetric E(6) models, (with J. Kang), Phys. Rev. D 71, 035014 (2005), hep-ph/0412190. 40. Toward realistic intersecting D-brane models, (with R. Blumenhagen, M. Cvetic, and G. Shiu), ARNPS 55, 71 (2005), hep-th/0502005. 41. Massive neutrinos and (heterotic) string theory, (with J. Giedt, G. L. Kane, and B. D. Nelson), Phys. Rev. D 71, 115013 (2005), hep-th/0502032. 42. Elementary Particles in Physics (with S. Gasiorowicz), in Encyclopedia of Physics, Third Edition, ed. R. C. Lerner and G. L. Trigg, (Wiley-VCH, 2005), p671. 43. Higgs sector in extensions of the MSSM, (with V. Barger, H. S. Lee and G. Shaughnessy), Phys. Rev. D 73, 115010 (2006), hep-ph/0603247. 44. Neutralino signatures of the singlet extended MSSM (with V. Barger and G. Shaughnessy), Phys. Lett. B 644, 361 (2007), hep-ph/0609068. 45. Collider signatures of singlet extended Higgs sectors (with V. Barger and G. Shaughnessy), Phys. Rev. D 75, 055013 (2007), hep-ph/0611239. 46. TeV physics and the Planck scale (with V. Barger and G. Shaughnessy), New J. Phys. 9, 333 (2007), hep-ph/0702001. 47. A T-odd observable sensitive to CP violating phases in squark decay (with G. Paz, L. T. Wang and I. Yavin), JHEP 0707, 055 (2007), hep-ph/0702068. 48. LHC Phenomenology of an Extended Standard Model with a Real Scalar Singlet, (with V. Barger, M. McCaskey, M. J. Ramsey-Musolf and G. Shaughnessy), Phys. Rev. D 77, 035005 (2008), 0706.4311 49. Theory and Phenomenology of Exotic Isosinglet Quarks and Squarks, (with J. Kang and B. D. Nelson), Phys. Rev. D 77, 035003 (2008), 0708.2701 [hep-ph]. 50. Z' -mediated Supersymmetry Breaking, (with G. Paz, L. T. Wang and I. Yavin), Phys. Rev. Lett. 100, 041802 (2008), 0710.1632 [hep-ph]. 51. Dirac Neutrino Masses from Generalized Supersymmetry Breaking, (with D. A. Demir and L. L. Everett), Phys. Rev. Lett. 100, 091804 (2008), 0712.1341 [hep-ph]. 52. The Physics of Heavy Z' Gauge Bosons, Rev. Mod. Phys. 81, 1199 (2008), 0801.1345 [hep-ph]. 53. Aspects of Z' -mediated Supersymmetry Breaking, (with G. Paz, L. T. Wang and I. Yavin), Phys. Rev. D 77, 085033 (2008), 0801.3693 [hep-ph]. 54. D-Instanton Generated Dirac Neutrino Masses, (with M. Cvetic), Phys. Rev. D 78, 066012 (2008), 0803.2876 [hep-th]. 55. Electroweak Physics, (with J. Erler), Acta Phys. Polon. B 39, 2595 (2008), 0807.3023 [hep-ph]. 56. Complex Singlet Extension of the Standard Model, (with V. Barger, M. McCaskey, M. Ramsey-Musolf and G. Shaughnessy), Phys. Rev. D 79, 015018 (2009), 0811.0393 [hep-ph]. 57. Scalar Potentials and Accidental Symmetries in Supersymmetric U(1)' Models, (with G. Paz and I. Yavin), Phys. Lett. B 671, 245 (2009), 0811.1196 [hep-ph]. 58. Introduction to the Standard Model and Electroweak Physics, Proceedings of TASI-08, The Dawn of the LHC Era, ed. Tao Han (World, Singapore, 2010), p 3, 0901.0241 [hep-ph]. 59. Family Non-universal U(1)' Gauge Symmetries and b − > s Transitions, (with V. Barger, L. Everett, J. Jiang, T. Liu and C. Wagner), Phys. Rev. D 80, 055008 (2009), 0902.4507 [hep-ph]. 60. Improved Constraints on Z' Bosons from Electroweak Precision Data, (with J. Erler, S. Munir and E. R. Pena), JHEP 0908, 017 (2009), 0906.2435 [hep-ph]. 61. b − > s Transitions in Family-dependent U(1)' Models, (with V. Barger, L. Everett, J. Jiang, T. Liu and C. Wagner), JHEP 0912, 048 (2009), 0906.3745 [hep-ph]. 62. Six-lepton Z' resonance at the LHC, (with V. Barger and H. S. Lee), Phys. Rev. Lett. 103, 251802 (2009), 0909.2641 [hep-ph]. 63. The Physics of New U(1)' Gauge Bosons, Proceedings of SUSY09, AIP Conf. Proc. 1200, 55 (2010), 0909.3260 [hep-ph]. 64. Combining Anomaly and Z' Mediation of Supersymmetry Breaking, (with J. de Blas, G. Paz and L. T. Wang), JHEP 1001, 037 (2010), 0911.1996 [hep-ph]. 65. Electroweak Baryogenesis, CDM and Anomaly-free Supersymmetric U(1)' Models, (with J. Kang,T. Li and T. Liu), JHEP 1104, 097 (2011), 0911.2939 [hep-ph]. 66. Z' Physics at the LHC, 0911.4294 [hep-ph], in Hunt for New Physics at the Large Hadron Collider, ed. P. Nath and B. Nelson, Nucl. Phys. Proc. Suppl. 200-202, 185 (2010), 1001.2693 [hep-ph]. 67. Phenomenological Implications of Supersymmetric Family Non-universal U(1)' Models, (with L. L. Everett, J. Jiang, and T. Liu), Phys. Rev. D 82, 094024 (2010), 0911.5349 [hep-ph]. 68. The Standard Model and Beyond (CRC Press, New York, 2009). 69. The Weinberg Operator and a Lower String Scale in Orientifold Compactifications, (with M. Cvetic, J. Halverson, and R. Richter), JHEP 1010, 094 (2010), 1001.3148 [hep-th]. 70. Precision Constraints on Extra Fermion Generations, (with J. Erler), Phys. Rev. Lett. 105, 031801 (2010), 1003.3211 [hep-ph]. 71. Singlet Extensions of the MSSM in the Quiver Landscape, (with M. Cvetic and J. Halverson), JHEP 1009, 076 (2010), 1006.3341 [hep-th]. 72. Z' Bosons at Colliders: a Bayesian Viewpoint, (with J. Erler, S. Munir, and E. Rojas), JHEP 1111, 076 (2011), 1103.2659 [hep-ph]. 73. Impact of extra particles on indirect Z' limits, (with F. del Aguila, J. de Blas, and M. Perez-Victoria), Phys. Rev. D 84, 015015 (2011), 1104.5512 [hep-ph]. 74. A Higgsophilic s-channel Z' and the CDF W+2J Anomaly, (with J. Fan, D. Krohn, and I. Yavin), Phys. Rev. D 84, 105012 (2011), 1106.1682 [hep-ph]. 75. Requiem for an FCHAMP? Fractionally CHArged, Massive Particle, (with G. Steigman), Phys. Rev. D 84, 065040 (2011), 1107.3131 [hep-ph]. 76. Implications of String Constraints for Exotic Matter and Z' s Beyond the Standard Model, (with M. Cvetic and J. Halverson), JHEP 1111, 058 (2011), 1108.5187 [hep-ph]. 77. Neutrino Masses from the Top Down, Ann. Rev. Nucl. Part. Sci. 62, 215 (2012), 1112.5992 [hep-ph]. 78. Electroweak Model and Constraints on New Physics (with J. Erler), in 2012 WWW update for 2012 edition of Review of Particle Properties, (URL: http://pdg.lbl.gov/). Print edition: J. Beringer et al. [Particle Data Group], Phys. Rev. D 86, 010001 (2012). 79. Light Sterile Neutrinos and Short Baseline Neutrino Oscillation Anomalies, (with J. Fan), JHEP 1204, 083 (2012), 1201.6662 [hep-ph]. 80. Light Sterile Neutrinos: A White Paper, (K. Abazajian et al.), 1204.5379 [hep-ph]. 81. Review Of Particle Physics ( J. Beringer et al. [Particle Data Group]), Phys. Rev. D 86, 010001 (2012), (URL: http://pdg.lbl.gov/). 82. Ultraviolet Completions of Axigluon Models and Their Phenomenological Consequences, (with M. Cvetic and J. Halverson), JHEP 1211, 064 (2012), 1209.2741 [hep-ph]. 83. Grand unification, review article on Scholarpedia 7, 11419 (2012). 84. Exploring Quantum Physics at the ILC, (with A. Freitas, K. Hagiwara, S. Heinemeyer, K. Moenig, M. Tanabashi and G. W. Wilson), 1307.3962 [hep-ph]. 85. Diagnosis of a New Neutral Gauge Boson at the LHC and ILC for Snowmass 2013, (with T. Han, Z. Liu and L. -T. Wang), 1308.2738 [hep-ph]. CV: HTML, PDF, condensed HTML, condensed PDF. This page was last modified Oct 4, 2013 by Paul Langacker Address questions and comments about this server to webmaster@sns.ias.edu File translated from T[E]X by T[T]Hgold, version 4.00. On 4 Oct 2013, 07:36.
{"url":"http://www.sns.ias.edu/~pgl/","timestamp":"2014-04-20T16:45:38Z","content_type":null,"content_length":"39285","record_id":"<urn:uuid:da54852e-8226-4a14-8a8c-6b3c0dd15f4b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
R-statistics blog In this post I will provide R code that implement’s the combination of repeated running quantile with the LOESS smoother to create a type of “quantile LOESS” (e.g: “Local Quantile Regression”). This method is useful when the need arise to fit robust and resistant [DEL:(Need to be verified):DEL] a smoothed line for a quantile (an example for such a case is provided at the end of this post). If you wish to use the function in your own code, simply run inside your R console the following line: I came a cross this idea in an article titled “High throughput data analysis in behavioral genetics” by Anat Sakov, Ilan Golani, Dina Lipkind and my advisor Yoav Benjamini. From the abstract: In recent years, a growing need has arisen in different fields, for the development of computational systems for automated analysis of large amounts of data (high-throughput). Dealing with non-standard noise structure and outliers, that could have been detected and corrected in manual analysis, must now be built into the system with the aid of robust methods. [...] we use a non-standard mix of robust and resistant methods: LOWESS and repeated running median. The motivation for this technique came from “Path data” (of mice) which is prone to suffer from noise and outliers. During progression a tracking system might lose track of the animal, inserting (occasionally very large) outliers into the data. During lingering, and even more so during arrests, outliers are rare, but the recording noise is large relative to the actual size of the movement. The statistical implications are that the two types of behavior require different degrees of smoothing and resistance. An additional complication is that the two interchange many times throughout a session. As a result, the statistical solution adopted needs not only to smooth the data, but also to recognize, adaptively, when there are arrests. To the best of our knowledge, no single existing smoothing technique has yet been able to fulfill this dual task. We elaborate on the sources of noise, and propose a mix of LOWESS (Cleveland, 1977) and the repeated running median (RRM; Tukey, 1977) to cope with these challenges If all we wanted to do was to perform moving average (running average) on the data, using R, we could simply use the rollmean function from the zoo package. But since we wanted also to allow quantile smoothing, we turned to use the rollapply function. R function for performing Quantile LOESS Here is the R function that implements the LOESS smoothed repeated running quantile (with implementation for using this with a simple implementation for using average instead of quantile):
{"url":"http://www.r-statistics.com/tag/boundary-estimation/","timestamp":"2014-04-20T23:27:16Z","content_type":null,"content_length":"37388","record_id":"<urn:uuid:324fc81c-555f-4be5-822f-120ac552b500>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
AW: st: Re: destringing values led to Stata recoding them as missing [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] AW: st: Re: destringing values led to Stata recoding them as missing From "Christian Holz" <statalist@krueschan.de> To <statalist@hsphsun2.harvard.edu> Subject AW: st: Re: destringing values led to Stata recoding them as missing Date Sat, 28 Aug 2004 12:40:28 +0200 before running the program with your data set you should turn .set more off or even better .set output error After running the program you should either turn again .set more on .set output proc -----Ursprüngliche Nachricht----- Von: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] Im Auftrag von Christian Holz Gesendet: 28 August 2004 12:27 An: statalist@hsphsun2.harvard.edu Betreff: AW: st: Re: destringing values led to Stata recoding them as Hi Suzy, first I think it is worth stressing that you should indeed very carefully consider the point with which Daniel came up recently: Although it is technically not very hard to remove nonnumeric characters from your string to allow the destring command to produce numbers (see below), you should be sure that it is that what you want. You should carefully thing whether your data is really numeric in the meaning of interval or ratio (or at least ordinal) level of measurement. You can of course for example perform a regression analysis with an RHS variable in which is a value 1002=diabetes and 1003=malaria and 1004=hernia or what ever and Stata will give you estimates for that regression, but interpreting those coefficients will not be very meaningful. But besides these objections, you may use the following code to remove everything which is not a number from your string variables. foreach varname of varlist xvar {; local i 1; while `i'<=_N {; local digit 1; local tempstring ""; while `digit' <= length(`varname'[`i']) {; local s_digit =substr(`varname'[`i'],`digit',1); if ("`s_digit'">="0"&"`s_digit'"<="9") local local digit=`digit'+1; replace `varname'="`tempstring'" in `i'; local i=`i'+1; destring(`varname'), replace; Please note that you have to wirte all the variable names of the variables which you want to convert into numeric instead of xvar in the first opening Please note further that the code will replace all the values in your original variables whith numeric ones. The program does as follows: Original (from your message): . d Contains data obs: 4 vars: 4 size: 100 (99.9% of memory free) storage display value variable name type format label variable label patient float %9.0g var1 str5 %9s var2 str6 %9s var3 str6 %9s Sorted by: Note: dataset has changed since last saved . l | patient var1 var2 var3 | 1. | 1001 1235- V2347 456 | 2. | 1002 1233 143135 E28950 | 3. | 1003 38568 05476- 89076 | 4. | 1004 126 333 v5678 | Will be as follows after running the code (which may take some time in your cases with 300k observations) . d Contains data obs: 4 vars: 4 size: 80 (99.9% of memory free) storage display value variable name type format label variable label patient float %9.0g var1 long %10.0g var2 long %10.0g var3 long %10.0g Sorted by: Note: dataset has changed since last saved . l | patient var1 var2 var3 | 1. | 1001 1235 2347 456 | 2. | 1002 1233 143135 28950 | 3. | 1003 38568 5476 89076 | 4. | 1004 126 333 5678 | Best wishes -----Ursprüngliche Nachricht----- Von: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] Im Auftrag von Suzy Gesendet: 28 August 2004 05:26 An: statalist@hsphsun2.harvard.edu Betreff: Re: st: Re: destringing values led to Stata recoding them as Dear Daniel, I used the destring option because I wasn't able to analyze the data as is - I would get error messages regarding not being able to analyze string. These values are codes that represent disorders, so you are correct. But since I am a fairly new user of Stata, I just figured that it couldn't read those values because of the dashes or the alpha-numeric since the datapoints that were only numbers were read and analyzed with no problem. Daniel Egan wrote: >Hi Suzy, >Just to be clear, are you sure you want to create numeric values? The usual >reason for destringing a variable is that it IS a numeric variable that has >typos which cause it to be regarded as text. Is this is a continuous >variable that does have a numeric (linear etc) relationship. If each of >these string variables represent different disorders, you should have a >methodological reason for making them numeric. Otherwise, keep them in an >"apples and oranges" arrangement of strings, i.e. diabetes (1003) is not >"one more than" malaria (1002)... >In essence, if you want to use each of these variables as categoricals, >are fine as is - as strings. You will be able to analyze them as strings, >a categorical or dummy variable sense. >I may be way off here, but just wanted to make sure you knew you could >analyze them as is..... >Apologies if I am being obvious. >----- Original Message ----- >From: "Suzy" <scott_788@wowway.com> >To: <statalist@hsphsun2.harvard.edu> >Sent: Friday, August 27, 2004 5:44 PM >Subject: st: destringing values led to Stata recoding them as missing >| Dear Statalisters; >| I have seven variables of over 300,000 observations each. Within each >| variable, I have over 2000 different values. These datapoints >| represent specific codes - for example : (72200 = intervertebral disc >| disorder). Within each of these seven variables, there are datapoints >| (values) with dashes or alphabets (Ie: 4109- or V2389). The majority >| of the values though, are purely numeric (23405). I used the destring >| option so that I could analyze the data and Stata treated all those >| datapoints that included dashes and alphabets as missing. Now there is a >| period . where there used to be a value. I have two questions: >| 1. Will the restring option restore the datapoints? >| 2. How can I successfully "destring" these values so that I can include >| them in my analysis? >| Any help and/or specific code would be very helpful as I am only >| marginally competent with Stata basics. >| Thank you! >| Suzy >| * >| * For searches and help try: >| * http://www.stata.com/support/faqs/res/findit.html >| * http://www.stata.com/support/statalist/faq >| * http://www.ats.ucla.edu/stat/stata/ >* For searches and help try: >* http://www.stata.com/support/faqs/res/findit.html >* http://www.stata.com/support/statalist/faq >* http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2004-08/msg00833.html","timestamp":"2014-04-16T10:52:44Z","content_type":null,"content_length":"14488","record_id":"<urn:uuid:2f96f4d7-5860-4fd1-91e2-ed7c39cfe0d2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
Linux Blog Section: User Commands (1) Updated: 5 February 1993 Index Return to Main Contents pnmnlfilt - non-linear filters: smooth, alpha trim mean, optimal estimation smoothing, edge enhancement. pnmnlfilt alpha radius produces an output image where the pixels are a summary of multiple pixels near the corresponding location in an input image. This program works on multi-image streams. This is something of a swiss army knife filter. It has 3 distinct operating modes. In all of the modes each pixel in the image is examined and processed according to it and its surrounding pixels values. Rather than using the 9 pixels in a 3x3 block, 7 hexagonal area samples are taken, the size of the hexagons being controlled by the radius parameter. A radius value of 0.3333 means that the 7 hexagons exactly fit into the center pixel (ie. there will be no filtering effect). A radius value of 1.0 means that the 7 hexagons exactly fit a 3x3 pixel array. Alpha trimmed mean filter. (0.0 <= alpha <= 0.5) The value of the center pixel will be replaced by the mean of the 7 hexagon values, but the 7 values are sorted by size and the top and bottom alpha portion of the 7 are excluded from the mean. This implies that an alpha value of 0.0 gives the same sort of output as a normal convolution (ie. averaging or smoothing filter), where radius will determine the "strength" of the filter. A good value to start from for subtle filtering is alpha = 0.0, radius = 0.55 For a more blatant effect, try alpha 0.0 and radius 1.0 An alpha value of 0.5 will cause the median value of the 7 hexagons to be used to replace the center pixel value. This sort of filter is good for eliminating "pop" or single pixel noise from an image without spreading the noise out or smudging features on the image. Judicious use of the radius parameter will fine tune the filtering. Intermediate values of alpha give effects somewhere between smoothing and "pop" noise reduction. For subtle filtering try starting with values of alpha = 0.4, radius = 0.6 For a more blatant effect try alpha = 0.5, radius = 1.0 Optimal estimation smoothing. (1.0 <= alpha <= 2.0) This type of filter applies a smoothing filter adaptively over the image. For each pixel the variance of the surrounding hexagon values is calculated, and the amount of smoothing is made inversely proportional to it. The idea is that if the variance is small then it is due to noise in the image, while if the variance is large, it is because of "wanted" image features. As usual the radius parameter controls the effective radius, but it probably advisable to leave the radius between 0.8 and 1.0 for the variance calculation to be meaningful. The alpha parameter sets the noise threshold, over which less smoothing will be done. This means that small values of alpha will give the most subtle filtering effect, while large values will tend to smooth all parts of the image. You could start with values like alpha = 1.2, radius = 1.0 and try increasing or decreasing the alpha parameter to get the desired effect. This type of filter is best for filtering out dithering noise in both bitmap and color images. Edge enhancement. (-0.1 >= alpha >= -0.9) This is the opposite type of filter to the smoothing filter. It enhances edges. The alpha parameter controls the amount of edge enhancement, from subtle (-0.1) to blatant (-0.9). The radius parameter controls the effective radius as usual, but useful values are between 0.5 and 0.9. Try starting with values of alpha = 0.3, radius = 0.8 Combination use. The various modes of pnmnlfilt can be used one after the other to get the desired result. For instance to turn a monochrome dithered image into a grayscale image you could try one or two passes of the smoothing filter, followed by a pass of the optimal estimation filter, then some subtle edge enhancement. Note that using edge enhancement is only likely to be useful after one of the non-linear filters (alpha trimmed mean or optimal estimation filter), as edge enhancement is the direct opposite of smoothing. For reducing color quantization noise in images (ie. turning .gif files back into 24 bit files) you could try a pass of the optimal estimation filter (alpha 1.2, radius 1.0), a pass of the median filter (alpha 0.5, radius 0.55), and possibly a pass of the edge enhancement filter. Several passes of the optimal estimation filter with declining alpha values are more effective than a single pass with a large alpha value. As usual, there is a tradeoff between filtering effectiveness and loosing detail. Experimentation is encouraged. The alpha-trimmed mean filter is based on the description in IEEE CG&A May 1990 Page 23 by Mark E. Lee and Richard A. Redner, and has been enhanced to allow continuous alpha adjustment. The optimal estimation filter is taken from an article "Converting Dithered Images Back to Gray Scale" by Allen Stenger, Dr Dobb's Journal, November 1992, and this article references "Digital Image Enhancement and Noise Filtering by Use of Local Statistics", Jong-Sen Lee, IEEE Transactions on Pattern Analysis and Machine Intelligence, March 1980. The edge enhancement details are from pgmenhance(1), which is taken from Philip R. Thompson's "xim" program, which in turn took it from section 6 of "Digital Halftones by Dot Diffusion", D. E. Knuth, ACM Transaction on Graphics Vol. 6, No. 4, October 1987, which in turn got it from two 1976 papers by J. F. Jarvis et. al. Integers and tables may overflow if PPM_MAXMAXVAL is greater than 255. Graeme W. Gill
{"url":"http://www.thelinuxblog.com/linux-man-pages/1/pnmnlfilt","timestamp":"2014-04-19T14:37:41Z","content_type":null,"content_length":"27651","record_id":"<urn:uuid:0630a28a-d0b2-4786-80cd-d86c459f613d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
Arnold Sommerfeld introduced the fine structure constant into physics in the 1920s in order to account for the splitting of atomic spectral lines. Over the years, the fine structure constant (α) was discovered to be a fairly ubiquitous constant, popping up everywhere in the physics of Atomic Scale systems and quantum electrodynamics. One of the strangest things about α, given its importance and ubiquity, is the fact that for over 80 years it had remained a mysterious enigma. The brilliant physicist Wolfgang Pauli famously quipped: “When I die, my first question to the devil will be: What is the meaning of the fine structure constant?” In July of 2007, 8 decades after its introduction into physics, analyses based on the SSCP led to the discovery of a unique and natural explanation for the origin of α. “Strong Gravity” on the Atomic Scale Throughout this website we have noted that nature’s dimensional “constants” must be discretely scaled and thus have different values on each Scale, when measured in terms of our conventional Stellar Scale units. For example, we determine a Stellar Scale value of the gravitational constant (G[Ψ][ = 0]) of 6.67 x 10^-8 cm^3/g sec^2, but the Atomic Scale value (G[Ψ][ = ][ -1]) is 2.18 x 10^31 cm^3/ g sec^2. Relative to our Scale in nature’s hierarchy, the gravitational interactions on the Atomic Scale are extremely strong. A Revised Planck Mass As discussed in the Technical Note entitled “A Revised Planck Scale”, the conventional Planck Scale was based on an incorrect use of G[Ψ][ = 0] in the calculations, instead of the correct gravitational coupling constant (G[Ψ][ = ][ -1]) for Atomic Scale systems. When the Planck scale is recalculated using G[Ψ][ = ][ -1], the length, mass, time and charge values for the revised Planck scale are all closely associated with the scale of the proton. Here we are primarily concerned with the revised Planck mass, and its value is 1.20 x 10^-24 g. Calculating α It has been known for many decades that the numerical value of α can be expressed by the conventional equation: α = e^2 / 4 π ε[0] ħ c = 7.297 x 10^-3 , (1) where e is the unit charge of the Atomic Scale, e[0] is the electromagnetic permittivity constant, ħ is Planck’s constant divided by 2π, and c is the velocity of light. Steps to the Meaning of α (a) Since α is dimensionless, it is natural to expect that it is a ratio of two quantities with the same dimensionality. (b) It seems very reasonable to group the constants in Eq. (1) as follows, α = [e^2 / 4 π ε[0]] / [ħ c] . (2) (c) From the derivation of the Planck mass (M[pl]), we know that: M[pl] = [ħ c / G[Ψ][ = ][ -1] ]^1/2 = 1.20 x 10^-24 g , (3) and that ħ c = G[Ψ][ = ][ -1] (M[pl])^2 . (4) (d) We can check the validity of Eq. (4) by showing that (1.05 x 10^-27 erg sec)(2.99 x 10^10 cm/sec) = (2.18 x 10^31 cm^3/g sec)(1.20 x 10^-24 g)^2 , given the small uncertainty in determining Λ^D empirically. (e) By substituting G[Ψ][ = ][ -1](M[pl])^2 for ħc in Eq. (1), we find that: α = [ e^2 / 4 π ε[0] ] / [G[Ψ][ = ][ -1] (M[pl])^2 ] . (5) The Meaning of α The numerator of Eq. (5) is the square of the unit electromagnetic charge and the denominator is the square of the unit gravitational “charge”, for Atomic Scale systems. Equivalently, the numerator can be viewed as the strength of the unit electromagnetic interaction and the denominator can be viewed as the strength of the unit gravitational interaction, for Atomic Scale systems. Without dying or enlisting the aid of the Pauli’s devil, we have come up with a highly compelling answer to the a enigma. The fine structure constant is the ratio of the strengths of these two fundamental interactions. α is Truly a Constant Since α is a dimensionless constant, its value is exactly the same on all Scales of nature’s discrete self-similar hierarchy.
{"url":"http://www3.amherst.edu/~rloldershaw/newdevyear/2007/July.htm","timestamp":"2014-04-18T18:25:30Z","content_type":null,"content_length":"11790","record_id":"<urn:uuid:6cba47a2-1cc8-4605-821b-ec3ed2513266>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
Game Rendering There are many occasions when the fragment position in world space needs to be reconstructed from a texture holding the scene depth (depth texture). One example of use is in deferred rendering when trying to decrease memory usage by not saving the position but instead only the depth. This will result in one channel of data, instead of three channels needed when saving the whole position. There are different ways to save the depth. The most popular are view space depth and screen space depth. Saving depth in view space instead of screen space gives two advantages. It’s faster, and it gives better precision because it’s linear in view space. This is how screen space depth can be rendered in HLSL: struct VS_OUTPUT float4 Pos: POSITION; float4 posInProjectedSpace: TEXCOORD0; // vertex shader VS_OUTPUT vs_main( float4 Pos: POSITION ) VS_OUTPUT Out = (VS_OUTPUT) 0; Out.Pos = mul(Pos,matWorldViewProjection); Out.posInProjectedSpace = Out.Pos; return Out; // pixel shader float4 ps_main( VS_OUTPUT Input ) : COLOR float depth = Input.posInProjectedSpace.z / Input.posInProjectedSpace.w; return depth; The HLSL pixel shader below shows how the position can be reconstructed from the depth map stored with the code above. Although this is one of the slowest ways of doing position reconstruction since it requires a matrix multiplication. float4 ps_main(float2 vPos : VPOS;) : COLOR0 float depth = tex2D(depthTexture,vPos*fInverseViewportDimensions + fInverseViewportDimensions*0.5).r; // scale it to -1..1 (screen coordinates) float2 projectedXY = vPos*fInverseViewportDimensions*2-1; projectedXY.y = -projectedXY.y; // create the position in screen space float4 pos = float4(projectedXY,depth,1); // transform position into world space by multiplication with the inverse view projection matrix pos = mul(pos,matViewProjectionInverse); // make it homogeneous pos /= pos.w; // result will be (x,y,z,1) in world space return pos; // for now, just render it out To reconstruct depth from view space, a ray from the camera position to the frustum far plane is needed. For a full screen quad, this ray can be precalculated for the four corners and passed to the shader. This is how the computer game Crysis did it [1] . But for arbitrary geometry, as needed in deferred rendering, the ray must be calculated in the shaders [2] . [1] “Finding next gen: CryEngine 2″ [2] “Reconstructing Position From Depth, Continued” Geometry Pipeline The geometry pipeline is the pipeline of transformations for geometry ( for example a face ) from object space to screen space. It looks like the following. Information about the different spaces: Screen Space The last step of transformations (world to screen) when rendering is to get the rendered objects to screen space. This is a 2D space with the origin in the middle of the screen. After the vertex shader but before the fragment shader the fragments will be normalized device coordinates and will automatically be transformed to screen space. The fragments x,y coordinates will be mapped to the x and y coordinates on the screen (the exact mapping is set by viewportbounds). And the z coordinates will be mapped to the depth buffer (z-buffer) as the following [-1] -> [near clip plane limit] and [1] -> [far clip plane limit]. The viewport mapping is the following [-1,1] in X is mapped to the left to right of the viewport range. The [-1,1] in Y is mapped to the bottom to top of the viewport range Framents will automatically be transformed to screen space before the fragment shader, you just need to set the viewport mapping with the following OpenGL call. glViewport(0, screenHeight/2, screenWidth/2, screenHeight/2); You can access the current fragment depth in a fragment shader with the variable gl_FragDepth which in default is the same as gl_FragCoord.z. The screen coordinates can be accessed in the variable Information about pixel shaders in DirectX (most is the same in OpenGL): A quick reference guide to GLSL: Description of the mapping in DirectX: Normalized Device Coordinates After projection to clip space the points will no longer be normalized. This can be restored by dividing all components of the vertex by the fourth component (w). This will (together with the projection transformation) result in that the projection volume will map to a cube with minimum corner in (-1,-1,-1) and maximum corner in (1,1,1). For example in OpenGL, the near clip plane will map to -1 ( in DirectX it will be mapped to 0) and the far clip plane will map to 1. The division by w (homogenization) is performed automatically between the vertex shader and the fragment shader in the procedure called Rasterization.
{"url":"http://www.gamerendering.com/category/cameras/","timestamp":"2014-04-18T15:38:29Z","content_type":null,"content_length":"38426","record_id":"<urn:uuid:9b72d5cd-9b12-4d8b-bc7d-5e7925e3bdde>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
This coming semester I will be teaching a couple of sections of Math 312, which is the introductory real analysis course at Penn State. The only prerequisite for this course is Calculus II (Math 141) and, in particular, students are not required to have taken an “introduction to proofs” course; though, in practice, many of them will have done so. I have long thought that in teaching a first or second proof-based course, especially in analysis with lots of quantifiers floating about, one should try to emphasize the “block structured” nature of proofs, analogous to the block-structured nature of a programming language like C. I had the impression that I came up with this idea for myself, but Dan Velleman wrote a whole beautiful book (How to Prove It) from this perspective, and I know I read the first edition of that book when I was in Oxford, so probably that is where I became aware of this point of view. Anyhow, I spent a day writing some TeX macros to format “block structured” proofs. Follow this link for a few examples. One of the pleasing things about structuring proofs this way is that one can describe how a proof is constructed, by compressing the lower-level data. Here for example are compressed versions of the three proofs above. The symbol \( \require{AMSsymbols}\blacktriangle\quad\blacksquare\quad\blacktriangledown \) for the omitted material is supposed to remind students that there are three ways to look: “up” for the givens at this point in the proof, “down” for the goals for which this part of the proof is reaching, and “across” to construct some argument linking the local givens to the local goals. Does anyone have experience using this sort of explicitly structured proof in Analysis I? How did it work out? Mathematics for Sustainability website Some readers will know that I have been thinking for a year or so about a course on Mathematics for Sustainability to teach at PSU early next year. I’ve written regularly about this at my Points of Inflection blog, for instance here, here and here. The official proposal for the course is wending its way through the Faculty Senate approval process. Assuming all goes according to plan, the course will be on the books this fall, and I will offer it for the first time in Fall 2014. I’ve created a website for this course-to-be at http://sites.psu.edu/mathforsust/. It’s pretty much of a skeleton right now, but I plan to fill it out over the year so that it becomes a full-fledged resource for the course by next year. So check back regularly. “Winding Around” now going up The website for my MASS course, “Winding Around” (Math 497C, Fall 2013) is now live. “Winding Around” is an introduction to topology using the winding number as a unifying theme. It’s intended to be different from most introductory topology courses because we’ll try to define the key concept (winding number) as economically as possible and then apply it in many different ways. One of the inspirations for this course is the classic expository paper Atiyah, M. F. “Algebraic Topology and Elliptic Operators.” Communications on Pure and Applied Mathematics 20, no. 2 (1967): 237–249. doi:10.1002/cpa.3160200202. and if things go according to plan I hope that we may get to discuss the Bott periodicity theorem at the end of the course, in the spirit of Atiyah’s article. A rigid framework Well, I am back from Yosemite, but not in quite the way I had hoped. I was climbing the Prow on Washington Column with Aaron McMillan (a grad student from Berkeley, student of Weinstein’s) and on our second day I took a fall resulting in a broken ankle and the end of our climbing vacation. If you are interested in the long version you can find the story here.
{"url":"http://sites.psu.edu/johnroe/category/teaching/","timestamp":"2014-04-18T05:31:22Z","content_type":null,"content_length":"39174","record_id":"<urn:uuid:fd651999-8872-40d2-a8da-c9632874b720>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry meaning of higher cohomology of sheaves? up vote 17 down vote favorite Let $X$ be a projective algebraic variety over a algebraic closed field $k$. Let $\mathcal{F}$ be a coherent sheaf on $X$. We know that $H^0(X, \mathcal{F})$ is the vector space of global sections of $\mathcal{F}$. This gives us a geometric illustration of $H^0$. For example, let $I_D$ be the ideal sheaf of a hypersurface $D$ of degree $>1$ in a projective space $\mathbb{P}^n$, then it is easy to see that $$H^0(\mathbb{P}^n,I_D\otimes\mathcal{O}_{\mathbb{P}^n}(1))=0.$$ In fact, there is no hyperplane containing $D$, which means that there is no global section of $\mathcal{O}_{\mathbb{P}^n}(1)$, which are hyperplanes, containing $D$. Hence $H^0(\mathbb{P}^n, I_D\ otimes\mathcal{O}_{\mathbb{P}^n}(1))=0$. However, my first question is how to understand higher cohomologies of sheaves in geometric ways. The following questions then come out: 1) How to understand Serre's vanishing theorem, i.e., is there a geometric way to think about the vanishing of $H^q(X, \mathcal{F}\otimes A^n)$ for $n\gg1$, where $\mathcal{F}$ is coherent and $A$ is 2) How to understan Kodaira's vanishing theorem geometrically. Maybe a concrete question will help, say $D$ a subvariety of $\mathbb{P}^n$, how to determine geometrically whether $H^0(\mathbb{P}^n, I_D\otimes\mathcal{O}_{\mathbb{P}^n}(1))$ vanishes or not. 1 Can you explain the "Hence" in "Hence $H^0(\mathbb{P}^n, I_D \otimes \mathcal{O}_{\mathbb{P}^n}(1)) = 0$."? – Timo Keller Jan 10 '10 at 22:02 1 Clearly any section in $H^0(\mathbb{P}^n, I_D\otimes\mathcal{O}_{\mathbb{P}^n}(1))$ is a section in $H^0(\mathbb{P}^n,mathcal{O}_{\mathbb{P}^n}(1))$, but not an arbitrary section, it is a section of $mathcal{O}_{\mathbb{P}^n}(1))$ which belongs to $I_D$, which means that the section vanishes along $D$. – Fei YE Jan 11 '10 at 2:13 For your last question -- this group vanishes if and only if D is not contained scheme theoretically in any hyperplane. If your subvariety reduced you replace "contained scheme theoretically" by just "contained". Is this answer satisfactory for you? – Dmitri Jan 12 '10 at 8:25 add comment 3 Answers active oldest votes Let's start way back. The invention of schemes moved algebraic geometry away from thinking about varieties as embedded objects. However, embedding an abstract scheme into projective space has a lot of advantages, so if we can do that, it's useful. And even if we cannot embed our scheme into projective space, but we can find a non-trivial map, that gives us some way to understand our abstract scheme. In order to find a non-trivial map we need a line bundle with sections. So we are interested in finding sections of various sheaves, but primarily line bundles (and of course for this sometimes we need to deal with other kind of sheaves). Thus we are interested in $H^0$. On the other hand, computing $H^0$ is non-trivial. There are no good general methods. One reason for this is that, for instance, $H^0$ is not constant in families, or put it another way it is not deformation invariant. On the other hand, $\chi(X,\mathscr F)$ behaves much better. It is constant in flat families and if $\mathscr F$ is a line bundle, then it is computable using Then, if we know that $H^i=0$ for $i>0$, then $H^0=\chi$ and we're good. Here is an explicit example for a typical use of Serre vanishing: Example 1 Suppose $X$ is a smooth projective variety and $\mathscr L$ is an ample line bundle on $X$. Then we know that $\mathscr L^{\otimes n}$ is very ample and $H^i(X, \mathscr L^{\ otimes n})=0$ for $i>0$ and $n\gg 0$. Then $\mathscr L^{\otimes n}$ induces an embedding $X\hookrightarrow \mathbb P^N$ where $N=\dim H^0(X,\mathscr L^{\otimes n})-1=\chi(X,\mathscr L^{\ otimes n})-1$ by Serre's vanishing and hence $N$ is now computable by Riemann-Roch. The only shortcoming of the above is that in general there is no way to tell what $n\gg0$ really means and so it is hard to get any explicit numerical estimates out of this. This is where Kodaira vanishing can help. Example 2 In addition to the above assume that $\mathscr L=\omega_X$, or in other words assume that $X$ is a smooth canonically polarized projective variety. There are many of these, for instance all smooth projective curves of genus at least $2$ or all hypersurfaces satisfying $\deg > \dim +2$. In particular, these are those of which we like to have a moduli space. Anyway, the way Kodaira vanishing changes the above computation is that now we know that already $H^i(X,\omega_X^{\otimes n})=0$ for $i>0$ and $n>1$! In other words, as soon as we know up vote 13 that $\omega_X^{\otimes n}$ is very ample and $n>1$, then we can compute the dimension of the projective space into which we can embed our canonically polarized varieties. In fact, perhaps down vote more importantly than that we can compute it, we know (from the above) without computation that this value is constant in families. So, once we have a boundedness result that says that accepted this happens for any $n\geq n_0$ for a given $n_0$, and Matsusaka's Big Theorem says exactly that, then we know that all such canonically polarized smooth projective varieties (with a fixed Hilbert polynomial) can be embedded into $\mathbb P^N$, that is, into the same space. This implies that then all of these varieties show up in the appropriate Hilbert scheme of $\mathbb P^N$ and we're on our way to construct our moduli space. Of course, there is a lot more to do to finish the whole construction and also this method works in other situations, so this is just an example. There is one more thing one might think regarding your question, that is, ask the more abstract question: "What does higher cohomology of sheaves mean (e.g., geometrically)?" This is arguable, but I think that the essence of higher cohomology is that it measures the failure of something we wish were true all the time, but isn't. More specifically, if you're given a short exact sequence of sheaves on $X$ $$ 0\to \mathscr F' \to \mathscr F \to \mathscr F'' \to 0 $$ then we know that even though $\mathscr F \to \mathscr F''$ is surjective, the induced map on the global sections $H^0(X,\mathscr F) \to H^0(X,\mathscr F'')$ is not. However, the vanishing of $H^1(X,\mathscr F')$ implies that for any surjective map of sheaves with kernel $\mathscr F'$ as above the induced map on global sections is also surjective. Since you already have a geometric interpretation of $H^0$, this gives one for $H^1$: it measures (or more precisel For a more detailed explanation of the same idea see this MO answer. In my opinion the best way to understand higher ($>1$) cohomology is that it is the lower cohomology of syzygies. In other words, consider a sheaf $\mathscr F$ and embed it into an acyclic (e.g., flasque or injective or flabby or soft) sheaf. So you get a short exact sequence: $$ 0\to \mathscr F\to \mathscr A\to \mathscr G \to 0 $$ Since $\mathscr A$ is acyclic, we have that for $i>0$ $$ H^{i+1}(X,\mathscr F)\simeq H^i(X,\mathscr G), $$ so if you understand what $H^1$ means, then $H^2$ of $\mathscr F$ is just $H^1$ of $\mathscr G$, $H^3$ of $\mathscr F$ is just $H^2$ of $\mathscr G$ and so on. add comment One way to think about higher cohomology groups of, say, holomorphic vector bundles is via the Dolbeault isomorphism $$ H^q(X, \mathcal O(E)) \cong H^{0,q}(X,E) $$ (and also the more general $H^{q}(\mathcal O( \Lambda^pT^*X\otimes E)) \cong H^{p,q}(E)$.) If we also choose a Kähler metric on X and Hermitian metric on E, then Hodge theory says that cohomology is represented by harmonic forms. So we can think of the q^th cohomology group of the sheaf of sections of E as the space of harmonic (0,q)-forms with values in E. One can interpret Kodaira vanishing from this point of view. Pick a holomorphic line bundle L with a Hermitian metric over a Kähler manifold X. Then one has a connection in L and so an up vote induced connection on L-valued forms. This gives a "rough" Laplacian $\nabla^* \nabla$. The Weitzenbock formula tells us how this differs from the standard Laplacian. On a (p,q)-form with 9 down values in L, $$ \Delta = \nabla^*\nabla + F $$ where F is an endomorphism of the bundle $\Lambda^{p,q}\otimes L$ where the L-valued forms take their values. F depends on (p,q) and on the vote metrics of both L and X. The hypothesis for Kodaira vanishing is that there is a Hermitian metric on L whose curvature is a Kahler form. If we use this metric on L and also this Kahler form on X then the operator F has a certain sign: when p+q>n, the dimension of X, F is a positive definite endomorphism of the bundle $\Lambda^{p,q}\otimes L$. From here we can prove Kodaira vanishing. A harmonic (p,q) form $\alpha$ with values in L has $\Delta \alpha = 0$ so, by integrating the Weitzenbock formula against $\alpha$, we see $$ \int |\nabla \alpha|^2 + \int (F(\alpha), \alpha) = 0 $$ Now positivity of $F$ means both terms here are non-negative and so must each vanish. This forces $\alpha$ to vanish and so $H^{p,q}(L)=0$ when $p+q >n$. add comment I) The most elementary, but yet quite useful, use of a vanishing theorem is via Riemann-Roch for a smooth complete curve $X$ over the algebraically closed field $k$. Riemann-Roch for a divisor $D$ on $X$ says that $$h^0(X,L(D)-h^1(X, L(D)=1-g+deg (D)$$ [Here $g$=genus of $X$, $h( )=dim_k H( )$] If $deg(D)>2g-2$, the $H^1$ term will vanish and the precise formula $dim L(D)=1-g+deg(D)$ drops out: it gives you the number of rational functions on $X$ with poles controlled by $D$. II) At a more advanced level an impressive use of vanishing cohomology is Mumford's m-regularity. A coherent sheaf $\mathcal F$ on $\mathbb P_n(k)$ is said to be m-regular if $H^i(\mathbb P_n, \mathcal F(m-i)))=0$ for all $i>0$ . If you find this definition strange, don't worry: Mumford himself calls it "apparently silly" (in his book "Lectures on Curves on an Algebraic Surface", Princeton university Press, 1966). But then he shows how to use this notion to construct hilbert schemes and (re-)derive several theorems on algebraic surfaces (index theorem, completeness of characteristic linear system). up vote 9 down vote III) An application which might help you understand and appreciate Kodaira's vanishing theorem is one of its corollaries: Lefschetz's "weak" theorem [ proof in Griffiths-Harris, pages It says essentially that in a projective manifold $X$, a smooth hypersurface $Y$ with positive associated line bundle $\mathcal O (Y)$ (e.g. a smooth hyperplane section) has the same singular cohomology (with coefficients in $\mathbb Q$) as the ambient manifold $X$, up to degree $dim(Y)-1$ and bigger cohomology in degree $dim(Y)$ . IV) etc. add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/11289/geometry-meaning-of-higher-cohomology-of-sheaves/11334","timestamp":"2014-04-18T13:54:24Z","content_type":null,"content_length":"70739","record_id":"<urn:uuid:ede7cc57-41d1-42bb-a2cb-b06cd67e32b7>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Education news & jobs at the Times Higher Education Supplement The autumnal sadness of the Princeton ghost Ariel Rubinstein Published: 25 April 2003 Title: The Essential John Nash Editor: Harold W. Kuhn and Sylvia Nasar Reviewer: Ariel Rubinstien Publisher: Princeton University Press ISBN: 0 691 09527 2 Pages: 224 Price: £19.95 What is "the essential John Nash?" For many, he will forever be identified with the personality depicted in the film A Beautiful Mind , who at the end bows to the king of Sweden during the Nobel prize ceremony. For those who study game theory and economics, the essence of Nash is found in the so-called Nash equilibrium, a concept that jumps out at any student of game theory from almost any article in the field and from every single textbook. Nash won the prize for his fundamental contributions to game theory, a collection of formal models serving as tools to discuss the interaction of rational players; a theory that has moved in the past 20 years from the fringe of economics to its central strand. The photo of the handsome, smiling Nash that appears on the jacket of this book was taken in the 1950s, the period during which he wrote the seminal contributions that eventually won him the Nobel prize in 1994. The cracks on the picture provide a hint of an essential period in Nash's life. For more than 20 years, he appeared as a mysterious figure at Princeton University, sloppily dressed, hunched over a cafeteria table with piles of computer printouts with strange calculations, reading newspapers others had left on the tables. This was the period Nash describes as the "time of my change from scientific rationality of thinking into the delusional thinking characteristic of persons who are psychiatrically diagnosed as 'schizophrenic'". I believe that Nash would most appreciate it if we identified "the essential John Nash" with the collection of all his works, from the first to the last letter he will one day write. Thus, the book that contains reprints of Nash's eight most important papers, both in game theory and mathematics, fits his own spirit. What further makes the book "the essential John Nash" is the inclusion of two documents in which he talks about himself: the autobiography he was asked to prepare for the Nobel proceedings and a 45-line afterword that concludes the book. Nash's papers in economics are all extremely short, elegant and beautiful. So are these two documents, which convey his introspective personality better than anything else. Princeton University Press enriches us with one of the most beautifully designed economics book I have ever seen and at a low price. The book's editors are two key players in the "Nash game". Harold Kuhn is a prominent Princeton mathematics professor. His name is widely familiar in economics and mathematics in the context of the Kuhn-Tucker theorem that launched the theory of non-linear programming in 1950. Kuhn was not only one of Nash's fellow graduate students and a member of the circle of brilliant minds surrounding Nash during his Princeton graduate days, he was also a constant member of the network of friends and colleagues who gave both professional and emotional support to Nash in the decades in which he was Kuhn was also a key figure in the process that led the Nobel committee to granting the prize to a person suspected of being someone who might "embarrass the king". Kuhn's comments on Nash, and some of the articles, are written in his special personal style: short words that convey a message of care and depth. The second editor, Sylvia Nasar, adds to the book an introduction consisting of a wonderful summary of Nash's life. Her encounter with the life of Nash started from the Nobel announcement, when dozens of other journalists trumpeted game theory and tried to explain it. Nasar was the only journalist who recognised that the announcement was not merely a proclamation of the victory for game theory within economics but, more important, a human event. She published a long article in The New York Times in which she described Nash as a young and handsome genius, as a sick person who roams Princeton's idyllic campus like a ghost, and as a recovered man who wins the Nobel prize and returns to research. The moving article paved the way for her book A Beautiful Mind . Nasar is a masterly writer who ran to the far corners of the world to chase down bits of missing information in her efforts to get to the bottom of Nash's mind. A Beautiful Mind was a bestseller and contained the material that inspired the Oscar-winning film. Both the book and the film encouraged mentally ill people and fascinated millions all over the world. Why are we so intrigued by the story of John Nash? We are curious to understand a person who proves theorems we are unable to fathom. We imagine the voices from another world he has heard. We ask where he was for 30 years during which he walked among us but wasn't here. We are frightened and we are attracted by this combination of "crazy" and "genius", an invitation for visiting the edge of our own minds. The movie has a cliched, happy ending. I would opt for a sadder one. This is probably the reason that I am moved so much by The Essential John Nash , which I see as the missing, true end of the movie. Not tragic, not happy. A mixture of scholarly brilliance and autumnal sadness that is summarised so well by Nash himself in the book's afterword. He writes: "The point of view of the person whose work and history become the subject matter of a book is different than that of the readers of the book. In a person's total life experience there is really no "inessential" and no "essential". The big thing is that a human has the opportunity and the experience of existence and life, and he or she may hope for reincarnation or to go to heaven when the life is indeed over and history." Ariel Rubinstein is professor of economics, Tel Aviv University, Israel, and Princeton University, New Jersey, US.
{"url":"http://arielrubinstein.tau.ac.il/articles/THUS2003/THUS2003.html","timestamp":"2014-04-16T04:11:26Z","content_type":null,"content_length":"16740","record_id":"<urn:uuid:24c2e532-06b6-4b9d-8ab7-4c91c80b5d3c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph Traversal: Breadth First Seach and Depth First Search 03-13-2009, 08:31 PM #1 Registered User Join Date Mar 2009 Graph Traversal: Breadth First Seach and Depth First Search I don't understand BFS and DFS. How do I visit the vertex and mark it as visited? Not sure how to do DFS with an arraylist of iterators. Need to some how iterate through the adjacency list for each vertex. public class Graph // translate integer vertex to string name private final List<String> vertexName = new ArrayList<String>(); // translate name to integer form of vertex private final Map<String, Integer> nameToVertexMap = new HashMap<String, Integer>(); // represent adjacency list of the graph private final List<List<Integer>> adjacencyList = new ArrayList<List<Integer>>(); // Reads data from a scanner and builds the graph. public void buildGraph(Scanner sc) while (sc.hasNext()) // get data for the adjacency list of one vertex String initVertexName = sc.next(); int initVertex = this.getVertexForName(initVertexName); if (initVertex == -1) initVertex = this.addNewAssociation(initVertexName); List<Integer> adjList = this.getNeighbors(initVertex); // get the outdegree int outDegree = sc.nextInt(); // collect list of neighbors for this vertex for (int k = 0; k < outDegree; k++) // get the string name of the terminal vertex for an edge String finalVertexName = sc.next(); //get the vertex number for this vertex name int finalVertex = this.getVertexForName(finalVertexName); // if the vertex name has no number, get it a number if (finalVertex == -1) finalVertex = this.addNewAssociation(finalVertexName); // string representation of the adjacency list of the graph public String toString() StringBuilder sb = new StringBuilder(); for (int v = 0; v < this.adjacencyList.size(); v++) sb.append(this.vertexName.get(v) + " : "); List<Integer> adjL = this.adjacencyList.get(v); for (Integer i : adjL) sb.append(vertexName.get(i) + " "); return sb.toString(); * @param vertex: integer representation of a vertex. * @return the adjacency list for a specified vertex public List<Integer> getNeighbors(int vertex) return adjacencyList.get(vertex); // returns the number of vertices in this graph public int getNumberOfVertices() return adjacencyList.size(); // returns string name of the vertex to use for output. public String getVertexName(int vertex) return vertexName.get(vertex); * @param name: name of a vertex, read from an input stream or scanner * @return the integer (vertex) assigned to that name, or -1 if no integer * vertex has been assigned to that name. public int getVertexForName(String name) return vertexName.indexOf(name); * Adds a new vertex with a given name to the graph and associates * the string name of the vertex with an integer used to identify the vertex. * @param name: string name for a vertex * @return an integer value that will be used to identify the vertex. private int addNewAssociation(String name) int vertex = getVertexForName(name); if (vertex != -1) throw new IllegalStateException("There is already a " + "vector with name " + name); this.adjacencyList.add(new ArrayList<Integer>()); // The integer associated with the name is its index in the list vertexName vertex = vertexName.size() - 1; nameToVertexMap.put(name, vertex); return vertex; public class GraphTraversal * This version of BFS will stop as soon as it finds a path from the source * to the destination. * @param source: the vertex where the search should begin. * @param dest: the vertex where the search should end. * @param graph: the graph to be searched. * @param visited: a boolean array used to mark vertices when they are * visited. * @param pred: an integer array that assigns to each vertex its predecessor * in the path that is found. static void breadthFirstSearch(int source, int dest, Graph graph, boolean [] visited, int [] pred) visited = new boolean [graph.getNumberOfVertices()]; pred = new int [graph.getNumberOfVertices()]; Queue<Integer> q = new LinkedList<Integer>(); // visit source // mark source int f = q.peek(); List<Integer> w = graph.getNeighbors(f); for(int i : w) // visit w // mark w // This version of DFS will stop as soon as it finds a path from the source to the destination. static void depthFirstSearch(int source, int dest, Graph graph, boolean [] visited, int [] pred) ArrayList<Iterator> i = new ArrayList<Iterator>(); * breadthFirst performs a BFS of a graph. * @param source: the initial vertex for the search * @param graph: This is the graph to be traversed * @param visited: this array indicates which vertices have been visited. * BFS will set visited[v] = for only those vertices that were * visited. Here v is the index that identifies a vertex in * the graph. * @param pred: for each vertex v that is visited, pred[v] will be set to its * predecessor, that is, the vertex that led to v in the BFS. static void breadthFirstSearch(int source, Graph graph, boolean [] visited, int [] pred) breadthFirstSearch(source, graph.getNumberOfVertices(), graph, visited, static void breadthFirstSearch(String s_source, Graph graph, boolean [] visited, int [] pred) int source = graph.getVertexForName(s_source); breadthFirstSearch(source, graph.getNumberOfVertices(), graph, visited, static void breadthFirstSearch(String s_source, String s_dest, Graph graph, boolean [] visited, int [] pred) int source = graph.getVertexForName(s_source); int dest = graph.getVertexForName(s_dest); breadthFirstSearch(source, dest, graph, visited, pred); static void depthFirstSearch(String source, Graph graph, boolean[] visited, int [] pred) int sourceVertex = graph.getVertexForName(source); depthFirstSearch(sourceVertex, graph.getNumberOfVertices(), graph, visited, pred); static void depthFirstSearch(int source, Graph graph, boolean [] visited, int [] pred) depthFirstSearch(source, graph.getNumberOfVertices(), graph, visited, static void depthFirstSearch(String source, String dest, Graph graph, boolean [] visited, int [] pred) int sourceVertex = graph.getVertexForName(source); int destVertex = graph.getVertexForName(dest); depthFirstSearch(sourceVertex, destVertex, graph, visited, pred); * getPath finds the path that goes from a given start vertex to a * given destination vertex * @param source: the start vertex for a path. * @param dest: the end vertex for a path. * @param pred: an array that specifies the predecessor of each vertex in * a path. If a vertex has no predecessor in the path, its * predecessor is null. * @return a list of vertices that form a path from a source vertex to * a destination vertex. public static List<Integer> getPath(int source, int dest, int [] pred) List<Integer> path = new LinkedList<Integer>(); int current = dest; while (current != source) path.add(0, current); current = pred[current]; if (current == -1) return path; path.add(0, source); return path; * getPath finds a path from a given start vertex to a given destination vertex. * It allows the vertices to be specified by name instead of by numeric values. * @return a list of vertex names for the vertices on the path. static List<String> getPath(String source, String dest, int [] pred, Graph graph) int sourceVertex = graph.getVertexForName(source); int destVertex = graph.getVertexForName(dest); List<Integer> path = getPath(sourceVertex, destVertex, pred); List<String> sPath = new ArrayList<String>(); for (int i : path) return sPath; have you thought of the vertices as objects of a class? if this were so, what could you do to mark an object as "visited"? To keep track of visited vertices: have a data structure which is easily searched so you can look-up that vertex. DFS: An easy implementation is using a stack. As you you visit a vertex, store all of the adjacent edges in the stack. When you've finished adding edges, now pop off the edge at the top [or if you are using a list, add to the back of the list and then remove from the back of the list]. Is the terminus of the edge that has been popped been visited? If so, discard that edge and pop another. If it has not been visited, mark it as visited and then visit it. This is what I had written for BFS. I'm supposed to add the source to the queue. Then visit source and mark it as visited by adding it to an array of booleans. After I visit the source I'm supposed to check it's neighbors and add them to the queue. I keep checking the neighbors of each vertex in the queue. What I don't get is how do I visit the vertex and it's neighbors and how do I add it to an array of booleans. Also how do I use pred? I'm not sure where to put that in the code. * This version of BFS will stop as soon as it finds a path from the source * to the destination. * @param source: the vertex where the search should begin. * @param dest: the vertex where the search should end. * @param graph: the graph to be searched. * @param visited: a boolean array used to mark vertices when they are * visited. * @param pred: an integer array that assigns to each vertex its predecessor * in the path that is found. static void breadthFirstSearch(int source, int dest, Graph graph, boolean [] visited, int [] pred) visited = new boolean [graph.getNumberOfVertices()]; pred = new int [graph.getNumberOfVertices()]; Queue<Integer> q = new LinkedList<Integer>(); // visit source // mark source int f = q.peek(); List<Integer> w = graph.getNeighbors(f); for(int i : w) // visit w // mark w You read the first edge in the queue. You "visit" the terminus of the destination of that edge by: 1. Look in your boolean array to see if this vertex has been visited (is the value at visited[new vertex] TRUE or FALSE). If true, then it has been visited and you do nothing more with this vertex, remove the first edge in the queue, and start over. 2. If it has not been visited, store the value of the start vertex for this edge in pred[new vertex]; test to see if this is your destination [if so, you're done]; read each of the vertices in the adjacency list for the vertex you're visiting, adding each edge to the end of the queue. Set the value of visited[new vertex] as TRUE. You're done, so now remove this edge from the queue and start over. Last edited by nspils; 03-16-2009 at 08:13 AM. I'm still confused. So is this what I'm supposed to do: - Create an empty visited and pred arrays - Create an empty queue - Add source (vertex) to the queue - Visit the source (vertex) - Not sure how to do this. What does it mean to visit a vertex? Does visiting the vertex mean you peek at it in the queue? - Mark it as visited - If the visited arrays are empty how do I know if it has been visited. Can I say that if it has been visited set it to true in the visited array? - While the queue is not empty get the neighbors of the first element in the queue. - Visit/Mark the neighbor - Keep visiting/marking neighbors of the first element in the queue until the first element is the destination Last edited by justinsbabe5000; 03-16-2009 at 04:03 PM. the arrays and the queue exist before you do your traversal. Don't create new ones. Visit means to do work "at" that vertex. So when you "visit" the source, you do all your processes for that vertex. Then you "discard" that vertex and move to the next you are going to visit - if it has not been visited yet. When you visit, check if the destination; check if visited; record pred for this node; add all "neighbors" to the queue ... you don't move on to any, yet. Mark this vertex as having been visited. think of this as your graph (written as adjacency list form) 0 - - 1, 2, 3, 4 1 - - 5 2 - - 6, 7, 8 3 - - null 4 - - 9, 10 5 -- 11, 12 6 -- null 7 -- 13 8 -- 14, 15, 16 9 -- null 10 -- null 11 -- 17, 18 12 -- null 13 -- null 14 -- null 15 -- null 16 -- null 17 -- null 18 -- null 0 is the source, 9 is your destination Now, sketch out your processes as you visit - what will the values of the arrays and the queue before you visit 0? How do the arrays and the queue change as you visit 0? Now, move to the first neighbor. What does the queue look like? What are the values of the arrays? Now, ask and answer your questions. Plus: how best do we store the value of both pred and destination of each "edge" as an Integer? How do you mark the value for pred? How do you initialize the arrays so that the values you want as default values are there? Thanks. I kinda got it to work. I did some search online and found an example of how to do both BFS and DFS and I took what you said and I figured it out. 03-13-2009, 11:12 PM #2 Senior Member Join Date Dec 2004 San Bernardino County, California 03-14-2009, 10:49 AM #3 Senior Member Join Date Dec 2004 San Bernardino County, California 03-15-2009, 10:58 AM #4 Registered User Join Date Mar 2009 03-16-2009, 08:09 AM #5 Senior Member Join Date Dec 2004 San Bernardino County, California 03-16-2009, 04:00 PM #6 Registered User Join Date Mar 2009 03-17-2009, 12:07 AM #7 Senior Member Join Date Dec 2004 San Bernardino County, California 03-17-2009, 12:40 AM #8 Registered User Join Date Mar 2009
{"url":"http://forums.devx.com/showthread.php?171461-Graph-Traversal-Breadth-First-Seach-and-Depth-First-Search&p=518208","timestamp":"2014-04-20T19:49:42Z","content_type":null,"content_length":"108955","record_id":"<urn:uuid:6fbc4386-448e-4f83-b31c-9fb7797f215b>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
trivial topology in domain April 3rd 2009, 10:21 AM #1 Junior Member Feb 2008 trivial topology in domain Correction to title: Trivial topology in RANGE Let Y ave the trivial topology and X be an arbitrary topological space. Show that every function f: X->Y is continuous In Y the only closed sets are Y and the empty set. All others are open. Do you know the open/closed set formulation of continuity? It says that if the preimage of open (closed) sets are open (closed), then f is continuous. If you can show that the preimages of Y and the empty set are closed, then you are done. In Y the only closed sets are Y and the empty set. All others are open. Do you know the open/closed set formulation of continuity? It says that if the preimage of open (closed) sets are open (closed), then f is continuous. If you can show that the preimages of Y and the empty set are closed, then you are done. If A is open set in Y then it must be empty set or all of Y. Preimage of empty set is empty set and preimage of all of Y is entire domain space X right? I think that is all because there are no more open sets in Y to consider and this must be true of any function from X to Y. For (a), The reason why the preimage of the empty set under f is the empty set involves a little bit of set theory. The definition of a function is "A function is a relation F such that for each x in dom F there is only one y such that xFy". Consider a nonempty set A such that $h : A \rightarrow \emptyset$. We cannot pick an element from an empty set, so h does not exist. Now consider an empty function g such that $g : \emptyset \rightarrow \emptyset$. By definition of a function, g can be described as, "g is a relation F such that for each x in the domain $\emptyset$, there is a unique y in $\emptyset$ satisfying xFy." The above is vacuously true since there are not any x in the $\emptyset$. So g exists. Now going back to a function f between topological spaces, the preimage of an empty set in Y is uniquely determined, which is an empty set in the topological space X. No other subset of X except the empty set can be mapped to the empty set in the topological space Y. The empty set is open in every topological space. For (b), the preimage of Y is X, since f is defined as $f:X \rightarrow Y$. We know that X is open in the topological space X. By definition of a continuous function in topology, f is continuous. A similar example is If X has the discrete topology, then any map $h:X \rightarrow Y$ is continuous. April 3rd 2009, 10:55 AM #2 Oct 2008 April 7th 2009, 01:06 AM #3 Apr 2009 April 7th 2009, 02:54 AM #4 Senior Member Nov 2008
{"url":"http://mathhelpforum.com/differential-geometry/82123-trivial-topology-domain.html","timestamp":"2014-04-18T10:04:50Z","content_type":null,"content_length":"41512","record_id":"<urn:uuid:bf7e2f82-25c7-4a7d-8d13-fa56459b35bb>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Western Area, CO Algebra Tutor Find a Western Area, CO Algebra Tutor ...Please see my "profile" for description. I am able to tutor in English (Learning as a second language, High School L.A. Creative Writing, Composing an Essay, Research). I am currently a facilitator of Sustained Silent Reading (it is called D.E.A.R. at the elementary school level). Reading and writing are essential skills for academic success and as an adult in any career! 13 Subjects: including algebra 1, Spanish, English, writing ...Throughout my high school and college careers I was constantly assisting friends, peers, and my younger sister with math and physics homework. I enjoyed helping other students to understand the subject material they were struggling with. Through helping others, my excitement for learning was enhanced, which inspired me to pursue tutoring opportunities. 13 Subjects: including algebra 1, algebra 2, calculus, physics ...From there, we work together to develop a custom tutoring plan that consists of math concept review and test-taking strategies tailored to the student. To gain mastery of new skills learned, I encourage my students to practice via assigned homework. My students typically see a dramatic score increase! 26 Subjects: including algebra 1, algebra 2, chemistry, physics ...I am confident that I can provide an approach which will improve performance for test takers in many different subjects including Algebra, Psychology, Statistics, Research Methods, Biology, American/World History, and English (reading, writing). Consider my services if you want to explore learnin... 31 Subjects: including algebra 1, algebra 2, English, writing ...I am truly passionate about teaching and tutoring mathematics or physics. I believe that to get strong in a challenging subject such as mathematics, a student needs one-on-one tutoring. I see tutoring as a partnership between a tutor and a student to reach the latter grade expectation. 26 Subjects: including algebra 1, Arabic, ACT Math, GMAT Related Western Area, CO Tutors Western Area, CO Accounting Tutors Western Area, CO ACT Tutors Western Area, CO Algebra Tutors Western Area, CO Algebra 2 Tutors Western Area, CO Calculus Tutors Western Area, CO Geometry Tutors Western Area, CO Math Tutors Western Area, CO Prealgebra Tutors Western Area, CO Precalculus Tutors Western Area, CO SAT Tutors Western Area, CO SAT Math Tutors Western Area, CO Science Tutors Western Area, CO Statistics Tutors Western Area, CO Trigonometry Tutors Nearby Cities With algebra Tutor Cadet Sta, CO algebra Tutors Castle Pines, CO algebra Tutors Chipita Park, CO algebra Tutors Crystola, CO algebra Tutors Denver algebra Tutors Dupont, CO algebra Tutors Falcon, CO algebra Tutors Hoyt, CO algebra Tutors Keystone, CO algebra Tutors Lakeside, CO algebra Tutors Rockrimmon, CO algebra Tutors Roxborough, CO algebra Tutors Silvercreek, CO algebra Tutors Tarryall, CO algebra Tutors Wattenburg, CO algebra Tutors
{"url":"http://www.purplemath.com/western_area_co_algebra_tutors.php","timestamp":"2014-04-18T16:09:26Z","content_type":null,"content_length":"24290","record_id":"<urn:uuid:96e93f03-6658-45a0-aa7f-827e045c45b1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Log Law Intuition May 27th 2011, 08:12 AM #1 May 2011 Log Law Intuition What is the intuition for this logarithm law? logbx = (logax) / (logab) There is intuition for other log laws. For example: log(100) + log (1000) = 2 factors of 10 + 3 factors of 10 = 5 factors of 10 = 5 = log (100x1000) = (2 + 3) factors of 10 = 5 factors of 10 = 5 So more generally, log(a) + log(b) = log(ab). We're just counting the factors of 10 in a and b, so it doesn't matter whether we count them separately or together. It's equal either way. Is there intuition for the equation in red above? Although it's trivial to prove, seeing the equation (and using it) in isolation seems more mechanical (e.g. plugging numbers into the Pythagorean Theorem because I see a right triangle) than logical (e.g. solving x + 5 = 24 by subtracting 5 from both sides). \displaystyle \begin{align*}x &= b^y \\ \log_b{x} &= y \end{align*} \displaystyle \begin{align*} x &= b^y \\ \log_a{x} &= \log_a{\left(b^y\right)} \\ \log_a{x} &= y\log_a{b} \\ \frac{\log_a{x}}{\log_a{b}} &= y\end{align*} Therefore $\displaystyle \log_b{x} = \frac{\log_a{x}}{\log_a{b}}$. I originally wrote just what Prove It did (only slower!). However, you seem to be using "intuition" to mean something different from a regular proof. Perhaps this is the kind of thing you mean: $u= log_a(x)$ means that x has u factors of a. $v= log_b(x)$ means x has v factors of b. $w= log_b(a)$ means that a has w factors of b. Well, if x has u factors of a and each a has w factors of b, then x has uw factors of b. That is, v= uw so u= v/w which translates as $log_a(x)= \frac{log_b(x)}{log_b(a)}$ Last edited by HallsofIvy; May 27th 2011 at 08:43 AM. May 27th 2011, 08:19 AM #2 May 27th 2011, 08:25 AM #3 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/algebra/181803-log-law-intuition.html","timestamp":"2014-04-18T19:25:30Z","content_type":null,"content_length":"37844","record_id":"<urn:uuid:25ee251c-22e0-4127-927e-cb730ad02131>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Explaining the Determinant Date: 11/16/97 at 21:52:11 From: Jeremy Carlino Subject: Explaining the determinant I am trying to understand what the determinant of a matrix actually is. I have a degree in mathematics and am currently teaching Algebra II to gifted and talented students. I know how to find the determinant and how to teach the process of finding the determinant, but I haven't been able to explain what it is. Please help. Interestingly, I found in a History of Math text, <i>From Five Fingers to Infinity,</i> that before Auther Cayley "created" matrix theory, Liebniez and a Chinese or Japanese mathematician simultaneously and independently discovered the determinant. How could the determinant have been discovered before the matrix? I guess I have two questions. Thanks in advance for any help. Jeremy C. Date: 11/17/97 at 19:57:10 From: Doctor Tom Subject: Re: Explaining the determinant Hello Jeremy, I always think of it geometrically. Let's look in two dimensions, at the determinant of the following: | x0 y0 | = x0*y1 - x1*y0 | x1 y1 | Now imagine the two vectors (x0, y0) and (x1, y1) drawn in the x-y plane from the origin. If you consider them to be two sides of a parallelogram, then the determinant is the area of the parallelogram. Well, not exactly the area, the "signed" area, in the sense that if you sweep the area clockwise, you get one sign, and the opposite sign if you sweep it in the other direction. It's just as useful a concept as considering area below the x-axis as negative in your calculus Swapping the vectors swaps the sign, in the same way that swapping the rows of the determinant swaps the sign. In one dimension, the determinant is just the number, but if you "plot" that number on a number line, it's the (signed) length of the line. If it goes in the positive direction from the origin, it's positive, and negative otherwise. In three dimensions, consider three vectors (x0,y0,z0), (x1,y1,z1), and (x2,y2,z2). If you draw them from the origin, they form the principle edges of a parallelepiped, and the determinant of: | x0 y0 z0 | | x1 y1 z1 | | x2 y2 z2 | is the volume of that parallelepiped. In higher dimensions, its just the 4D (or 5D, or 6D ...) signed "hypervolumes" of the hyper-parallelepipeds. With this view, it's easy to see why the determinant's properties make sense. Swapping two rows changes the order of sweeping out the volume, and will hence turn a positive volume to negative or vice-versa. Multiplying all the elements of a row by a constant (say 2) stretches the parallelepiped by a factor of 2 in one direction, and hence doubles the volume. Adding a row to another just skews the parallelepiped parallel to one of its faces, and hence (Cavallari's principle) leaves the volume unchanged. (If you can't see this, plot it in two dimensions for a couple of examples.) Check the other allowed determinant manipulations to see how they relate to the geometry. Because a determinant is a fundamental geometric property of a collection of N N-dimensional vectors, it's not too surprising that different folks would stumble across it, even without knowing what a matrix is. I hope this helps. -Doctor Tom, The Math Forum Check out our web site! http://mathforum.org/dr.math/ Date: 11/17/97 at 20:48:17 From: Jeremy Carlino Subject: Re: Explaining the determinant Thanks for your time and explanation. Jeremy
{"url":"http://mathforum.org/library/drmath/view/51439.html","timestamp":"2014-04-18T10:38:55Z","content_type":null,"content_length":"8556","record_id":"<urn:uuid:89c23f9c-5797-46b9-9f73-386f8c3bfb82>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Confusion about Direct Limit of R-Modules November 4th 2011, 01:29 PM #1 Confusion about Direct Limit of R-Modules Hello, I just have a quick question about direct limits. Let $\{ A_s | s \in S \}$ be a directed system of $R$-modules, where $S$ is a directed set. I want to show that $\displaystyle\lim_{\longrightarrow} A_s := \displaystle\coprod_{s\in S} A_s/ \sim$ is an $R$-module, where $a \in A_s \sim b \in A_t$ if there exists $u \ge s,t$ such that $f_{su}(a) = f_{tu}(b) \in A_u$. Now in my notes we define addition as follows: The sum of $a \in A_s$ and $b \in A_t$ is given by choosing an element $u \in S$ with $u \ge s,t$ and $a+b := f_{su}(a) + f_{tu}(b)$ in the $R$-module $A_u$. But my question is, surely this thing is not well defined because there might be some $u' \ge s,t$, where $u' ot= u$! So this addition would be in a different $R$-module. Does this limit module only make sense if in our directed set $S$ we make a specific choice of element $u$ for every $(s,t)$ pair in $S$? Thanks for any help. Last edited by slevvio; November 4th 2011 at 01:48 PM. Re: Confusion about Direct Limit of R-Modules Just after posting this I think I've worked it out. If we supposed we have $u'$ such that $u' \ge s,t$, then choose $T \ge u,u' \ge s,t$. Then $f_{uT}(f_{su}(a)) = f_{sT}(a) = f_{u' T}(f_{su'} (a))$. Therefore $f_{su}(a) \in A_u \sim f_{su'}(a) \in A_{u'}$. By a similar argument we have $f_{tu}(b) \in A_u \sim f_{tu'}(b) \in A_{u'}$. Therefore in the direct limit, $a+b$ is the same no matter which $u$ we pick. $\Box$ November 4th 2011, 01:54 PM #2
{"url":"http://mathhelpforum.com/advanced-algebra/191199-confusion-about-direct-limit-r-modules.html","timestamp":"2014-04-20T11:19:44Z","content_type":null,"content_length":"38756","record_id":"<urn:uuid:e7f0bde6-df63-4d87-a8dc-8cd4b469d974>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Categorical Data Analysis Using the SAS System, 2nd Edition ISBN: 978-0-471-22424-2 648 pages December 2001 Along with providing a useful discussion of categorical data analysis techniques, this book shows how to apply these methods with the SAS System. The authors include practical examples from a broad range of applications to illustrate the use of the FREQ, LOGISTIC, GENMOD, and CATMOD procedures in a variety of analyses. They also discuss other procedures such as PHREG and NPAR1WAY. See More Preface to the Second Edition. Chapter 1. Introduction. Chapter 2. The 2 x 2 Table. Chapter 3. Sets of 2 x 2 Tables. Chapter 4. Sets of 2 x r and s x 2 Tables. Chapter 5. The s x r Table. Chapter 6. Sets of s x r Tables. Chapter 7. Nonparametric Methods. Chapter 8. Logistic Regression I: Dichotomous Response. Chapter 9. Logistic Regression II: Polytomous Response. Chapter 10. Conditional Logistic Regression. Chapter 11. Quantal Bioassay Analysis. Chapter 12. Poisson Regression. Chapter 13. Weighted Least Squares. Chapter 14. Modeling Repeated Measurements Data with WLS. Chapter 15. Generalized Estimating Equations. Chapter 16. Loglinear Models. Chapter 17. Categorized Time-to-Event Data. See More Maura E. Stokes is Senior Manager of Statistical Applications Research and Development at SAS Institute. She received her PhD in Biostatistics from the University of North Carolina at Chapel Hill and has taught and written about categorical data analysis for over fifteen years. Charles S. Davis is Professor of Biostatistics at the University of Iowa. He received his PhD in Biostatistics from the University of Michigan. His research and teaching interests include categorical data analysis and methods for the analysis of repeated measures. Gary Koch is Professor of Biostatistics and Director of the biometrics Consulting Laboratory at the University of North Carolina at Chapel Hill. He has had a prominent roll in the field of categorical data analysis for the last thirty years. He teaches classes and seminars in categorical data analysis, consults in areas of statistical practice, conducts research, and trains many Biostatistics students. See More
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471224243.html","timestamp":"2014-04-20T18:51:30Z","content_type":null,"content_length":"39230","record_id":"<urn:uuid:3df8dc4f-e57c-4f34-9f77-76ab5ad37080>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
C.; Applications of the L-functions ratios conjecture Results 1 - 10 of 11 "... We use the conjecture of Conrey, Farmer and Zirnbauer for averages of ratios of the Riemann zeta function [11] to calculate all the lower order terms of the triple correlation function of the Riemann zeros. A previous approach was suggested by Bogomolny and Keating [6] taking inspiration from semi ..." Cited by 4 (2 self) Add to MetaCart We use the conjecture of Conrey, Farmer and Zirnbauer for averages of ratios of the Riemann zeta function [11] to calculate all the lower order terms of the triple correlation function of the Riemann zeros. A previous approach was suggested by Bogomolny and Keating [6] taking inspiration from semi-classical methods. At that point they did not write out the answer explicitly, so we do that here, illustrating that by our method all the lower order terms down to the constant can be calculated rigourously if one assumes the ratios conjecture of Conrey, Farmer and Zirnbauer. Bogomolny and Keating [4] returned to their previous results simultaneously with this current work, and have written out the full expression. The result presented in this paper agrees precisely with their formula, as well as with our numerical computations, which we include here. We also include an alternate proof of the triple correlation of eigenvalues from random U(N) matrices which follows a nearly identical method to that for the Riemann zeros, but is based on , 802 "... Abstract. A fast new algorithm is used compute the zeros of 10 6 quadratic character L-functions for negative fundamental discriminants with absolute value d> 10 12. These are compared to the 1-level density, including various lower order terms. These terms come from, on the one hand the Explicit Fo ..." Cited by 2 (0 self) Add to MetaCart Abstract. A fast new algorithm is used compute the zeros of 10 6 quadratic character L-functions for negative fundamental discriminants with absolute value d> 10 12. These are compared to the 1-level density, including various lower order terms. These terms come from, on the one hand the Explicit Formula, and on the other the L-functions Ratios Conjecture. The latter give a much better fit to the data, providing numerical evidence for the conjecture. 1. Introduction. Predictions. Standard conjectures [5] predict that the low lying zeros of quadratic Dirichlet L-functions should be distributed according to a symplectic random matrix model. To make this more precise, we’ll introduce some notation. Let χd be a real, primitive character modulo d, , 2011 "... Abstract. We propose a random matrix model for families of elliptic curve L-functions of finite conductor. A repulsion of the critical zeros of these L-functions away from the center of the critical strip was observed numerically by S. J. Miller in 2006 [50]; such behaviour deviates qualitatively fr ..." Cited by 1 (0 self) Add to MetaCart Abstract. We propose a random matrix model for families of elliptic curve L-functions of finite conductor. A repulsion of the critical zeros of these L-functions away from the center of the critical strip was observed numerically by S. J. Miller in 2006 [50]; such behaviour deviates qualitatively from the conjectural limiting distribution of the zeros (for large conductors this distribution is expected to approach the one-level density of eigenvalues of orthogonal matrices after appropriate rescaling). Our purpose here is to provide a random matrix model for Miller’s surprising discovery. We consider the family of even quadratic twists of a given elliptic curve. The main ingredient in our model is a calculation of the eigenvalue distribution of random orthogonal matrices whose characteristic polynomials are larger than some given value at the symmetry point in the spectra. We call this sub-ensemble of SO(2N) the excised orthogonal ensemble. The sieving-off of matrices with small values of the characteristic polynomial is akin to the discretization of the central values of L-functions implied by the formulæ of Waldspurger and Kohnen-Zagier. The cut-off scale "... 1.1. Background. L-functions and modular forms underlie much of twentieth century number theory and are connected to the practical applications of number theory in cryptography. The fundamental importance of these functions in mathematics is supported by the fact that two of the seven Clay Mathemati ..." Add to MetaCart 1.1. Background. L-functions and modular forms underlie much of twentieth century number theory and are connected to the practical applications of number theory in cryptography. The fundamental importance of these functions in mathematics is supported by the fact that two of the seven Clay Mathematics Million Dollar Millennium Problems [20] deal with properties of these functions, namely "... Dedicated to Professor Akio Fujii on his retirement. Abstract. The derivative of the Riemann zeta function was computed numerically on several large sets of zeros at large heights. Comparisons to known and conjectured asymptotics are presented. 1. ..." Add to MetaCart Dedicated to Professor Akio Fujii on his retirement. Abstract. The derivative of the Riemann zeta function was computed numerically on several large sets of zeros at large heights. Comparisons to known and conjectured asymptotics are presented. 1. "... Abstract. The family of symmetric powers of an L-function associated with an elliptic curve with complex multiplication has received much attention from algebraic, automorphic and p-adic points of view. Here we examine this family from the perspectives of classical analytic number theory and random ..." Add to MetaCart Abstract. The family of symmetric powers of an L-function associated with an elliptic curve with complex multiplication has received much attention from algebraic, automorphic and p-adic points of view. Here we examine this family from the perspectives of classical analytic number theory and random matrix theory, especially focusing on evidence for the symmetry type of the family. In particular, we investigate the values at the central point and give evidence that this family can be modeled by ensembles of orthogonal matrices. We prove an asymptotic formula with power savings for the average of these L-values, which reproduces, by a completely different method, an asymptotic formula proven by Greenberg and Villegas–Zagier. We give an upper bound for the second moment which is conjecturally too large by just one logarithm. We also give an explicit conjecture for the second moment of this family, with power savings. Finally, we compute the one level density for this family with a test function whose Fourier transform has limited support. It is known by the work of Villegas – Zagier that the subset of these L-functions which have even functional equations never vanish; we show to what extent this result is reflected by our analytic results. , 2009 "... In the past dozen years random matrix theory has become a useful tool for conjecturing answers to old and important questions in number theory. It was through the Riemann zeta function that the connection with random matrix theory was first made in the 1970s, and although there has also been much re ..." Add to MetaCart In the past dozen years random matrix theory has become a useful tool for conjecturing answers to old and important questions in number theory. It was through the Riemann zeta function that the connection with random matrix theory was first made in the 1970s, and although there has also been much recent work concerning other varieties of L-functions, this article will concentrate on the zeta function as the simplest example illustrating the role of random matrix theory. 1 , 806 "... Abstract. Assuming the Riemann Hypothesis, we obtain an upper bound for the 2kth moment of the derivative of the Riemann zeta-function averaged over the non-trivial zeros of ζ(s) for every positive integer k. Our bounds are nearly as sharp as the conjectured asymptotic formulae for these moments. ..." Add to MetaCart Abstract. Assuming the Riemann Hypothesis, we obtain an upper bound for the 2kth moment of the derivative of the Riemann zeta-function averaged over the non-trivial zeros of ζ(s) for every positive integer k. Our bounds are nearly as sharp as the conjectured asymptotic formulae for these moments. , 803 "... Abstract. Using the mollifier method, we show that for a positive proportion of holomorphic Hecke eigenforms of level one and weight bounded by a large enough constant, the associated symmetric square L-function does not vanish at the central point of its critical strip. We note that our proportion ..." Add to MetaCart Abstract. Using the mollifier method, we show that for a positive proportion of holomorphic Hecke eigenforms of level one and weight bounded by a large enough constant, the associated symmetric square L-function does not vanish at the central point of its critical strip. We note that our proportion is the same as that found by other authors for other families of L-functions also having symplectic symmetry type. 1. , 803 "... 2. Background and notation 3 ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=9021787","timestamp":"2014-04-18T19:36:36Z","content_type":null,"content_length":"33295","record_id":"<urn:uuid:8f1fad0e-a5c8-4db8-bfad-bcaecdd68aed>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help November 17th 2009, 08:47 PM #1 Sep 2009 Zero content A set z in R is said to have zero content if for any epsilon > 0, there is a finite collection of intervals I1, ...IL such that z is a subset of the union of those intervals (meaning all points in z are contained in the union of all the intervals) AND the sum of the lengths of the intervals is less than epsilon. Thus, I was wondering if I could clarify something about zero content. So, for example, the set z = {y : y = x^2, x is part of [0, 10]} would not have zero content because for all the points z to be contained in intervals, the length of union of the intervals must be 10 and epsilon can be found which is less than 10. However, am I right in thinking that the set z = {y : y = x^2, x is part of integers} has zero content? This is because arbitarily small intervals can be chosen around those points, and the union of the intervals can be made less than epsilon. So basically, if f is continuous function, f(S) has zero content only when S is disconnected? Like sequences mapped by continuous functions have zero content because they are indexed by integers which form a disconnected set? If $f$ is not constant then yes, because if $f(s)$ has two elements say $x<y$ then, since $f$ is continous and $S$ is connected then $f(S)$ is connected which means $(x,y) \subset f(S)$ which means the content of $f(S)>0$. November 19th 2009, 12:19 PM #2 Super Member Apr 2009
{"url":"http://mathhelpforum.com/differential-geometry/115316-zero-content.html","timestamp":"2014-04-16T07:48:26Z","content_type":null,"content_length":"33057","record_id":"<urn:uuid:ceb2ef74-f8d9-433b-b0c0-38405b9dbdf7>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Section: BLAS routine (3) Updated: 16 October 1992 Local index Up ZHPR2 - perform the hermitian rank 2 operation A := alpha*x*conjg( y' ) + conjg( alpha )*y*conjg( x' ) + A, ( UPLO, N, ALPHA, X, INCX, Y, INCY, AP ) COMPLEX*16 ALPHA INTEGER INCX, INCY, N CHARACTER*1 UPLO COMPLEX*16 AP( * ), X( * ), Y( * ) ZHPR2 performs the hermitian rank 2 operation where alpha is a scalar, x and y are n element vectors and A is an n by n hermitian matrix, supplied in packed form. UPLO - CHARACTER*1. On entry, UPLO specifies whether the upper or lower triangular part of the matrix A is supplied in the packed array AP as follows: UPLO = 'U' or 'u' The upper triangular part of A is supplied in AP. UPLO = 'L' or 'l' The lower triangular part of A is supplied in AP. Unchanged on exit. N - INTEGER. On entry, N specifies the order of the matrix A. N must be at least zero. Unchanged on exit. ALPHA - COMPLEX*16 . On entry, ALPHA specifies the scalar alpha. Unchanged on exit. X - COMPLEX*16 array of dimension at least ( 1 + ( n - 1 )*abs( INCX ) ). Before entry, the incremented array X must contain the n element vector x. Unchanged on exit. INCX - INTEGER. On entry, INCX specifies the increment for the elements of X. INCX must not be zero. Unchanged on exit. Y - COMPLEX*16 array of dimension at least ( 1 + ( n - 1 )*abs( INCY ) ). Before entry, the incremented array Y must contain the n element vector y. Unchanged on exit. INCY - INTEGER. On entry, INCY specifies the increment for the elements of Y. INCY must not be zero. Unchanged on exit. AP - COMPLEX*16 array of DIMENSION at least ( ( n*( n + 1 ) )/2 ). Before entry with UPLO = 'U' or 'u', the array AP must contain the upper triangular part of the hermitian matrix packed sequentially, column by column, so that AP( 1 ) contains a( 1, 1 ), AP( 2 ) and AP( 3 ) contain a( 1, 2 ) and a( 2, 2 ) respectively, and so on. On exit, the array AP is overwritten by the upper triangular part of the updated matrix. Before entry with UPLO = 'L' or 'l', the array AP must contain the lower triangular part of the hermitian matrix packed sequentially, column by column, so that AP( 1 ) contains a( 1, 1 ), AP( 2 ) and AP ( 3 ) contain a( 2, 1 ) and a( 3, 1 ) respectively, and so on. On exit, the array AP is overwritten by the lower triangular part of the updated matrix. Note that the imaginary parts of the diagonal elements need not be set, they are assumed to be zero, and on exit they are set to zero. Level 2 Blas routine. -- Written on 22-October-1986. Jack Dongarra, Argonne National Lab. Jeremy Du Croz, Nag Central Office. Sven Hammarling, Nag Central Office. Richard Hanson, Sandia National Labs. This document was created by man2html, using the manual pages. Time: 21:59:24 GMT, April 16, 2011
{"url":"http://www.makelinux.net/man/3/Z/zhpr2","timestamp":"2014-04-18T10:41:22Z","content_type":null,"content_length":"10701","record_id":"<urn:uuid:153a27fb-3b71-4f2c-afd4-4af1cc2c36c5>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
prove it is a subgroup October 13th 2008, 06:12 AM #1 Feb 2008 prove it is a subgroup Let G be a group and H be a subgroup of G. Define N(H)={x is a member of G|xHx^-1=H} Prove that N(H) (called normalizer of H) is a subgroup of G. This is a straightforward problem. Just use what it means is a definition of a subgroup. If $x,y\in N(H)$ can you prove $xy\in H$? Once you prove closure the others ones should be even simpler. so would this work a^2=e and e^2=e so H is nonempty x,y is a member of N(H) x^2=e and y^2=e Prove (xy^-1)^2=e (ab^-1)^2= ab^-1 * ab^-1= a^2(b^-1)^2=a^2(b^2)^-1=ee^-1=e So xy^-1 belongs to N(H) So N(H) is a subgroup of G October 13th 2008, 07:20 AM #2 Global Moderator Nov 2005 New York City October 13th 2008, 10:41 AM #3 Feb 2008
{"url":"http://mathhelpforum.com/advanced-algebra/53410-prove-subgroup.html","timestamp":"2014-04-17T18:50:13Z","content_type":null,"content_length":"35404","record_id":"<urn:uuid:bcaefa2b-e1b5-479e-8500-75643174a5f7>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Generalized eikonal approximation for fast retrieval of particle size distribution in spectral extinction technique In retrieving particle size distribution from spectral extinction data, a critical issue is the calculation of extinction efficiency, which affects the accuracy and rapidity of the whole retrieval. The generalized eikonal approximation (GEA) method, used as an alternative to the rigorous Mie theory, is introduced for retrieval of the unparameterized shape-independent particle size distribution (PSD). To compute the extinction efficiency more efficiently, the combination of GEA method and Mie theory is adopted in this paper, which not only extends the applicable range of the approximation method but also improves the speed of the whole retrieval. Within the framework of the combined approximation method, the accuracy and limitations of the retrieval are investigated. Moreover, the retrieval time and memory requirement are also discussed. Both simulations and experimental results show that the combined approximation method can be successfully applied to retrieval of PSD when the refractive index is within the validity range. The retrieval results we present demonstrate the high reliability and stability of the method. By using this method, we find the complexity and computation time of the retrieval are significantly reduced and the memory resources can also be saved effectively, thus making this method more suitable for online particle sizing. © 2012 Optical Society of America OCIS Codes (290.2200) Scattering : Extinction (290.5820) Scattering : Scattering measurements (300.6360) Spectroscopy : Spectroscopy, laser ToC Category: Original Manuscript: October 24, 2011 Revised Manuscript: January 4, 2012 Manuscript Accepted: January 10, 2012 Published: May 18, 2012 Virtual Issues Vol. 7, Iss. 7 Virtual Journal for Biomedical Optics Li Wang, Xiaogang Sun, and Feng Li, "Generalized eikonal approximation for fast retrieval of particle size distribution in spectral extinction technique," Appl. Opt. 51, 2997-3005 (2012) You do not have subscription access to this journal. Citation lists with outbound citation links are available to subscribers only. You may subscribe either as an OSA member, or as an authorized user of your institution. Contact your librarian or system administrator Log in to access OSA Member Subscription You do not have subscription access to this journal. Cited by links are available to subscribers only. You may subscribe either as an OSA member, or as an authorized user of your institution. Contact your librarian or system administrator Log in to access OSA Member Subscription You do not have subscription access to this journal. Figure files are available to subscribers only. You may subscribe either as an OSA member, or as an authorized user of your institution. Contact your librarian or system administrator Log in to access OSA Member Subscription You do not have subscription access to this journal. Article level metrics are available to subscribers only. You may subscribe either as an OSA member, or as an authorized user of your institution. Contact your librarian or system administrator Log in to access OSA Member Subscription « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/vjbo/abstract.cfm?URI=ao-51-15-2997","timestamp":"2014-04-16T20:10:06Z","content_type":null,"content_length":"57352","record_id":"<urn:uuid:047e630b-eb01-46e3-af61-84f6345956db>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Sunday FunctionSunday Function There’s an interesting book I’m working on called The Fall of Rome: And the End of Civilization You can make a similar argument in a positive direction about the history of mathematics. Math has existed more or less forever, but I’d say there’s a definite new era that started at roughly the end of the Renaissance about the time Newton and Leibniz independently developed calculus. Simply because calculus was so brilliantly effective at solving problems, subsequent mathematics tended to focus on whatever worked rather than whatever could be rigorously proved. In math this is a dangerous thing. If your reasoning is unclear, you won’t be able to tell under what circumstances your results no longer work. Eventually the sketchier concepts of modern math were examined formally and were put into a logically sound and rigorous framework. But today, we’re going to do this old school and assert a whopper of a supposition with no other support. I’ll tell you when we get there. First, our Sunday Function. Unlike most other Sunday Functions, I’m not going to write this one down in terms of f(x) = stuff, because this one is just piecewise continuous straight lines. You can easily write it in terms of mx + b for each piece, but it will save space to define it just in terms of the graph. It’s a triangle wave of period 2, and we’re only going to deal with it on the interval [0, 2]. Therefore, here simultaneously is the graph and definition: Now here comes our bold and utterly unsupported assertion. We know many functions can be written in terms of a power series. What if it were possible to write this in sort of a trigonometric series? In other words, are there constants a0, a1, a2, and so on such that this is true for our particular f(x): Well, maybe. First we have to figure out what the various coefficients are in this series (which happens to be called a Fourier series, after the mathematician Jean Baptiste Joseph Fourier). First let’s dispatch a0 by means of a handwaving but nonetheless valid argument. Here goes. It’s clear that the average value of our function on this interval is 0 by virtue of its symmetry about the x-axis. It’s also clear that all of the sine terms on the right have an average value of 0 as well for the same reason. But any nonzero constant term will have a nonzero average value. The average of a constant is just that constant, after all. Therefore in order to preserve symmetry the constant term a0 must itself be equal to zero. The other terms look harder at first. Fortunately there is a very convenient property of the sine function that can help us out with the rest. We call it orthogonality*: The delta on the right side is the Kronecker delta, which is just a symbol that means “equals 1 if m = n, equals 0 otherwise”. So we have that weird sine series, and we just want to find (for instance) a2. We can just multiply both sides of the series equation by sin(2 pi x) and integrate. Every single one of those infinite terms will integrate out to 0 due to orthogonality, except for the one with n = 2, which is attached to the a2 term. Doing that, we can see that a2 (and in general if we had picked a different numbered “a”) is equal to: Sweet! We can evaluate those. I won’t bore you by actually doing it (you can try it yourself or take a look at the result). Once you do evaluate them, you can actually plot the first few terms and see what you get. It’s going to turn out that all the even terms will equal zero. Let me give you numerical values for the first few odd terms: a1 = 0.810569469 a3 = -0.0900632743 a5 = 0.0324227788 Plug those in and plot, with the original overlaid for comparison. Only including the a1 term: With the first two nonzero terms: With the first three nonzero terms: So it sure looks like our wild supposition was justified and our function can legitimately be written in terms of trig functions. As indeed it can be, and as indeed pretty much any well-behaved periodic function can be. (Crucially, you must also include cosines in general. In the interests of simplicity I’ve picked an odd function as our example. These have the property that all the cosine coefficients are zero. Conversely, even functions have all the sine terms equal to zero.) This property of the set of sine/cosine functions is called completeness, and unlike orthogonality it’s way out of Sunday Function’s league to prove from scratch. Way out of my league, to be honest. But that’s what mathematicians are for. They’ve showed that a very broad class of different types of orthogonal functions are complete, and showing that a particular set of functions is in that broad class isn’t actually so hard. We might even do it one of these days. *Here I’m writing the statement of orthogonality in a slightly nonstandard way. More usually it’s written with respect to the interval from [-pi, pi] or [0, 2*pi]. Our problem is on the interval [0, 2], so just like on the cooking shows I’m pulling this equation out of the oven “pre-cooked”, appropriately scaled to the interval in our problem. 1. #1 CCPhysicist August 16, 2009 Nicely done. Your example of Fourier (early 1800s) is particularly apt as regards the history of analysis (calculus with actual proofs), which developed throughout the 1800s. Apparently, according to my analysis prof, there was quite a shakeup in the later 1800s when examples of functions were developed that were continuous everywhere but nowhere differentiable. It was the efforts of people to explore strong claims like his about what could be done with “every well-behaved function” that exposed weaknesses in the general notions about what functions were and how they behaved. It should also interest all of us that Fourier invented this idea so he could solve a PHYSICS problem: the heat flow equation. So we can exaggerate a little bit and say that modern analysis started with a physics problem, just as the calculus started with a physics problem. That might be a good reason (home insulation being the other) for always teaching that topic in intro physics. 2. #2 CCPhysicist August 16, 2009 The exaggeration is because I think Euler’s invention of the power series in the 1700s, not to mention the very concept of “function”, might have been the key first step even though it lacked 3. #3 Comrade PhysioProf August 16, 2009 4. #4 Douglas August 17, 2009 CCPhys: What was even more surprising to me was that Cantor’s work started with Fourier series, too! 5. #5 David August 17, 2009 Yes, nice. I recall coming across a recent analysis book that introduces the subject using an example very similar to this one and then using it to motivate the discussion of all the usual analysis topics. I forgot the title and author, does anyone know ? 6. #6 --bill August 17, 2009 (a) That’s not an odd function. Odd functions are defined on intervals of the form [-a,a]. You have calculated the Fourier series for the odd extension of the given function. There is a Fourier series for the even extension that has only cosine terms. (b) The question of what is a proof and what has to be proved is something that’s been part of mathematics since Greek times. Euler, for example, had what he would view as proofs for his use of infinitesimals, although we would not accept these proofs. (c) In response to CCPhysicist, power series were known well before Euler; they were common in the 1600′s. Also, Newton’s work on calculus predates his work on physics. 7. #7 Matt Springer August 17, 2009 Fair point, Bill. The triangle wave is of course an odd function if you take this to be one period worth of a function that extends to infinity (or at least symmetrically about the y-axis) in both directions. We could also say the cosine terms are all equal to zero by virtue of the fact that cos(0) = 1, and since f(0) = 0 the cosine coefficients all have to be identically zero to fit the boundary condition. 8. #8 CCPhysicist August 18, 2009 Thanks, –bill. So it is more accurate to say that Euler popularized or generalized the use of power series for functions? What motivated Newton to develop the calculus if it wasn’t calculating motion or gravitation problems? Matt, you are still missing a key point about extending the function. Specifically, “symmetrically about the y-axis” produces an even function (equal to +1 at -0.5, etc) that is not periodic and hence messier to deal with. 9. #9 Matt Springer August 18, 2009 I mean a symmetric interval, ie, [-a, a]. The function would be odd and thus antisymmetric on the interval. 10. #10 kiwi August 18, 2009 Extending the function to be even still gives a periodic function, only now the period is 4 rather than 2. The Fourier series in this case will indeed be a cosine series, but you need half-integer multiples of pi … cos(n pi/2 x) to do it (in fact only the odd n’s are required). Note also that the sum of the cosine series is zero at x=0 even thought the terms themselves are 11. #11 Neil B ☺ August 18, 2009 Here’s a problem with Fourier decomposition: the infinite series is supposed to constitute the final result like a true triangular wave. But if I “sample” the triangular wave along a given interval that doesn’t include the whole thing, that’s just a straight slope. There isn’t any way in principle to find all the ever-smaller harmonics that correspond to a particular triangular wave frequency. But if I had some mathematically perfect “filter” or frequency analyzer (*not* to be confused with actual physical instruments) I should be able to examine the portion and find the “real” harmonics for the actual frequency. This is contradictory. 12. #12 Tercel August 18, 2009 There is nothing contradictory about this, Neil. It is not necessary that the Fourier transform of a signal be equal to the transform over finite sub-sections of that signal. In fact, many signal processing toolkits will allow you to perform a Fourier transform over a sequence of small sections of the signal. This is (often) called the “Short Time Fourier Transform.” 13. #13 N e i l B ♪ ♫ August 18, 2009 Thanks Tercel, but my point is: a superposition of many ever-higher frequency sine waves gives a “triangular wave.” So that means, the composition should be contained within a small interval. If I had an ideal (not necessarily what any “real” one could do) frequency spectrometer and the triangular wave was moving past, it should show me all those frequencies that were of shorter period than the sample interval. But those frequencies are geared to the particular period of the specific triangular wave, which is “not knowable” to anyone watching it for a small interval. I think the trouble is, the use of true infinities here. Any finite superposition will have a structure that is actually represented in the wiggles etc. that can be seen. But when we add an infinite series, we can get a “sum” by the convergence, but it is not IMHO a logically consistent function for the sake of finding derivatives. 14. #14 Avi Steiner August 18, 2009 A heck of a lot easier way to get a0 is to notice that by definition, f(0)=0. But since sin(0)=0, all the terms in the fourier sine series disappear at x=0 but a0. Thus, a0=f(0)=0. 15. #15 Sue VanHattum August 21, 2009 I don’t know how trackbacks work, so I just wanted to tell you I put this in the Math Teachers at Play #14, over at Math Mama Writes.
{"url":"http://scienceblogs.com/builtonfacts/2009/08/16/sunday-function-43/","timestamp":"2014-04-21T12:51:13Z","content_type":null,"content_length":"67988","record_id":"<urn:uuid:c7453234-139b-4298-8c19-cbcacd828ea2>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Montclair, NJ Algebra 1 Tutor Find a Montclair, NJ Algebra 1 Tutor Hello,My goal in tutoring is to develop your skills and provide tools to achieve your goals. My teaching experience includes varied levels of students (high school, undergraduate and graduate students).For students whose goal is to achieve high scores on standardized tests, I focus mostly on tips a... 15 Subjects: including algebra 1, chemistry, calculus, algebra 2 ...I believe that through the use of manipulatives, games, and other hands-on methods, every student can learn, and come to enjoy doing so. There is nothing I find more rewarding than watching students grow, build confidence, and find success academically where there used to be frustration and stru... 18 Subjects: including algebra 1, physics, writing, elementary (k-6th) ...My philosophy of tutoring, and teaching in general, is that the student should always be in the process of learning TWO things: the subject at hand, of course, but even more importantly he or she should be learning how best to approach new and unfamiliar material. What makes tutoring an ideal en... 50 Subjects: including algebra 1, chemistry, calculus, physics ...My technique is always to break down complicated concepts into simple components, whether it's the steps of a complicated statistical technique, the components of a complex equation, or the complexities of an action potential. I have 10 years of experience successfully teaching and tutoring statistics. I have a Bachelor's degree in Psychology and a Masters and PhD in Behavioral 5 Subjects: including algebra 1, statistics, psychology, Microsoft Excel ...I have taken an advanced genetics course along with an extensive review since I was tested on my dental admissions test. Genetics was also a subject in my pathology class taken last semester, so to say the least I have been exposed to it a lot the last 4 years. From mutations to meiosis, I would be great tutoring in this subject! 20 Subjects: including algebra 1, reading, chemistry, algebra 2
{"url":"http://www.purplemath.com/montclair_nj_algebra_1_tutors.php","timestamp":"2014-04-17T04:06:34Z","content_type":null,"content_length":"24348","record_id":"<urn:uuid:49f09917-af24-4149-a36a-6cb36fa48ba3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
A004102 - OEIS F. Harary and R. W. Robinson, Exposition of the enumeration of point-line-signed graphs, pp. 19 - 33 of Proc. Second Caribbean Conference Combinatorics and Computing (Bridgetown, 1977). Ed. R. C. Read and C. C. Cadogan. University of the West Indies, Cave Hill Campus, Barbados, 1977. vii+223 pp. Harary, Frank; Palmer, Edgar M.; Robinson, Robert W.; Schwenk, Allen J.; Enumeration of graphs with signed points and lines. J. Graph Theory 1 (1977), no. 4, 295-308. R. W. Robinson, personal communication. R. W. Robinson, Numerical implementation of graph counting algorithms, AGRC Grant, Math. Dept., Univ. Newcastle, Australia, 1976. N. J. A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995 (includes this sequence). R. W. Robinson, Table of n, a(n) for n = 1..22 J. Cummings, D. Kral, F. Pfender, K. Sperfeld et al., Monochromatic triangles in three-coloured graphs, Arxiv preprint arXiv:1206.1987. 2012. - From N. J. A. Sloane, Nov 25 2012 Harald Fripertinger, The cycle type of the induced action on 2-subsets Vladeta Jovovic, Formulae for the number T(n,k) of n-multigraphs on k nodes
{"url":"http://oeis.org/A004102","timestamp":"2014-04-20T21:00:43Z","content_type":null,"content_length":"16575","record_id":"<urn:uuid:3e6436ae-11dd-455a-a7ba-8dae8c0d23b9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Passive Crossover Slopes Passive Crossovers Previously on this site, high-pass and low-pass crossovers were briefly covered. This page will go more in-depth. It will cover 1st-4th order crossovers and their characteristics. It will also cover various RC, RL and RLC networks that are used to tame peaks/dips in frequency response and impedance. High Pass Crossovers Below there are 4 different crossover configurations. The graph in the middle of the 4 systems shows the slopes for 6dB (first order), 12dB (second order), 18dB (third order) and 24dB (fourth order) per octave crossovers. The crossover components' colors match its corresponding curve on the graph. In this crossover, the capacitor blocks the lower frequencies while allowing the higher frequencies to pass. In this crossover, the capacitor does the same thing as in the previous diagram. The inductor shunts, to ground, some of the low frequencies that are allowed to pass through the capacitor. This causes a higher roll off rate. The following is one example of a second order high-pass crossover. Same as above, steeper slope. See above. Low Pass Crossovers Below there are 4 different crossover configurations. The graph in the middle of the 4 systems shows the slopes for 6dB (first order), 12dB (second order), 18dB (third order) and 24dB (fourth order) per octave crossovers. The crossover components' colors match its corresponding curve on the graph. In this crossover, the inductor (coil) blocks the higher frequencies while allowing the lower frequencies to pass. In this crossover, the inductor does the same thing as in the previous diagram. The capacitor shunts, to ground, some of the higher frequencies that are allowed to pass through the inductor. This causes a higher roll off rate. Same as above, steeper slope. See above. You can see that the higher order crossovers are more complex and harder to build but they also provide a steeper slope. It is not necessary to use a high order crossover to have a great sounding speaker system. Every system has unique requirements to achieve the best sound quality. The installer must determine which type will work best (that's why they get the big bucks :-). Comparing Different Filter Types The previous calculator determines the values for a 2nd order (12dB/octave slope) Linkwitz-Riley crossover. This type of crossover is very common because of it's flat response when the high and low frequency power outputs are summed. This is important to prevent a dip or peak in the acoustic power response of the speakers (it provides a flat frequency response at the crossover point when driving both high and low frequency drivers). • Basics: • When you double the cone area (add a second speaker with equal properties) while keeping the power constant, you gain 3dB of output. If you reduce the number of speakers by 1/2, you lose 3dB. • If you double the power to a driver, you gain 3dB. If you cut the power by 1/2 you lose 3dB. • If you double the cone area and the power (by paralleling the second speaker on the amplifier), you gain 6dB. • The 'Q' of a filter (crossover) indicates the shape of the curve. For a second order crossover, it can be calculated with the formula: Q=[(R^2C)/L]^1/2 Where R is the speaker's impedance. C is the capacitor used in the filter. And L is the inductor used in the filter. Chebychev: Q = 1 Butterworth: Q = .707 Bessel: Q = .58 Linkwitz-Riley: Q = .49 • Notes: • The Green slope above is the same slope you get with a first order Butterworth alignment. • This section does not deal with phase response or group delay. Both are important when building high quality crossovers. If you were building a pair of home speakers, phase and delay would be important but a car audio environment has so many problems (speakers not equidistant from all listeners, multiple drivers reproducing the same signal...) that I don't think it would be of use to enough people to cover it here. 2nd order Linkwitz-Riley: In the following graph, you can see the response for both the high and low frequency drivers. You can also see that the crossover point is 150hz. As previously noted, the on-axis acoustic output of a L-R crossover has a flat response at the crossover frequency. To do this, the crossover point has to be 6 dB down. Since there are 2 drivers (a midrange and a woofer) operating at the crossover point and they presumably have a comparable output and they are receiving the same power (both 6dB down from full power), the output (their summed on-axis acoustic output) will be as if a single driver were playing at the crossover point. This will provide a flat overall frequency response at the crossover point. In this section, you will see that there are red and yellow curves (in-phase and reverse-phase curves). If the speakers are wired with the same polarity (the positive speaker wire connected to the positive terminal of each speaker) the connection is in-phase. For 12dB/octave crossovers, this connection will result in a deep dip in the output response at the crossover point. To get a flat response (or close to flat for alignments other than Linkwitz-Riley), you must wire one speaker out of phase (typically, the tweeter is wired out of phase). Crossover Type Capacitor Inductor Second Order 133 microfarad 8.5 millihenry 2nd order Bessel: On the next graph, you'll see the response of a 2nd order Bessel crossover. You can see that the crossover point is 5dB down from the pass band. The summed response will give you a slight peak at the crossover point. The high pass and low pass curves have a Q of 0.58. Crossover Type Capacitor Inductor Second Order 152 microfarad 7.4 millihenry 2nd order Butterworth: On the next graph, you'll see the response of a 2nd order Butterworth crossover. The crossover point is 3dB down from the pass band. The summed response will give you a 3dB peak at the crossover point. The high pass and low pass curves have a Q of 0.707. Crossover Type Capacitor Inductor Second Order 188 microfarad 6.0 millihenry 2nd order Chebychev: On the next graph, you'll see the response of a 2nd order Chebychev crossover. The crossover point is at the same level as the pass band. The summed response will give you a 6dB peak at the crossover point. The high pass and low pass curves have a Q of 1.0. Crossover Type Capacitor Inductor Second Order 265 microfarad 4.2 millihenry Open Crossover Output Warning You may damage your amplifier if you drive a second (or higher) order crossover when the speaker's voice coil is open (the speaker is blown) or if no speaker is connected to the crossover's output. When the speaker is removed (or the voice coil opens), the circuit becomes a resonant circuit. This circuit will, at the crossover frequency (or some multiple of the crossover frequency), present a 0 ohm load to the amplifier. The actual resistance will be only the resistance in the speaker wire and the inductor. Any time that there is audio at the resonant frequency, the amplifier will be stressed the same as if the speaker wires were shorted together. This will drive some amplifiers into protection. Others will blow a fuse or die a horrible painful death. The following graph shows how the impedance of the normal circuit (violet line) never drops below 4 ohms (the speaker's impedance). It also shows how the impedance of the circuit without a speaker (yellow line) drops to 0 ohms at the crossover frequency (for a 2nd order crossover, the resonant frequency is the same as the crossover frequency). Correcting Passive Crossover Problems The frequency response plots 'with a speaker' are computer simulations. If we measured the output of an actual speaker with a microphone, the response would not be nearly as smooth. There would be many small dips and peaks. Projected Impedance: This graph is the projected impedance plot of an ideal 4 ohm speaker and a 12dB/octave Linkwitz-Riley crossover. This is actually how the impedance would look with a resistor but a speaker has a dynamic impedance that changes with frequency and will not be as flat. The rise in impedance at the high frequency end of the spectrum is normal and is what causes the speaker's output to roll off. Projected Response: This is the projected frequency response for the ideal 4 ohm driver (speaker) and a 12dB/octave Linkwitz-Riley Crossover. You can see how the roll off is nice and smooth (no bumps or dips). Impedance plot using speaker's varying impedance: This graph is the impedance plot for the driver and the Linkwitz-Riley crossover only. You can see the peak in impedance around 64 hertz (at the speaker's resonant frequency in its .55 ft^3 enclosure) and the rising impedance above ~500hz. This rise in impedance will cause the frequency response to deviate from the projected response. Response with Crossover and Driver: As I said before, the rising impedance (due mostly to the voice coil's inductance) will cause the frequency response to be less than perfect. If the crossover point fell on a 'flat' part of the impedance plot, there would be no bump in the slope. Conjugate Networks: These networks use resistors, capacitors and inductors to correct for impedance variations in speakers. In the following diagram, you see the Linkwitz-Riley crossover components (a capacitor and an inductor). You can also see a few other networks (marked 'A', 'B' and 'C'). Impedance Correction Network: This type of network (marked 'A' in previous diagram) is also called a Zobel network. It uses a resistor and a capacitor to counteract the effect of the voice coil's inductance and allow the crossover to follow the proper slope. In the following graph you can see a smoother impedance curve near the crossover frequency. This is due to the RC (resistor/capacitor) network. And here you can see the smoother slope: Other Problems: For most systems, the following crossover and single conjugate network would be enough. Some situations require more extensive impedance compensation. In the previous impedance plots, you should have noticed the rising impedance above ~5000hz. Some amplifiers may have trouble with this type of reactive load. To correct it, you can use another RC network to simulate a more resistive load. In the following impedance plot, network 'B' is employed. You can see how the impedance at the high frequency end of the plot is now almost flat. On the following graph, you can see that it had virtually no effect on the frequency response while it flattened the impedance. Reducing Resonant Peak: If you're using an amplifier with a low damping factor (this would generally be tube amplifiers), the impedance rise at resonance may cause audible frequency response anomalies. If we use the RLC network, we can flatten the impedance of the system (as seen in the following diagram). The inductor and capacitor create a resonant bandpass filter and the resistor limits the current through the filter. Without the resistor, the impedance would drop to 0 ohms. And again we can see that we fixed the problem without effecting the frequency response.
{"url":"http://www.bcae1.com/xoorder.htm","timestamp":"2014-04-21T02:00:05Z","content_type":null,"content_length":"42449","record_id":"<urn:uuid:efd3768b-5b46-4bd9-bf37-29d1bbef4165>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
scilab call C function up vote 0 down vote favorite I would like to call C function within scilab. I am trying to follow "call" convention but it does not work for me somehow therefore I am looking for help. C function: f1=['void add1(double a,double b,double *c) { *c=a+b; }']; f1 function should just add two real numbers and pass it by reference. The result is always out of range. Can you please help me point what I am doing wrong and how to correctly call C function within scilab? Thank you. c call scilab add comment 1 Answer active oldest votes If you make all arguments pointers, it works fine. Also don't forget to load the function by executing loader.sce Working example: f1=['void add1(double* a,double* b,double *c){ *c=*a+*b; }']; up vote 0 down vote a=1.1; exec loader.sce [c] = call('add1', a, 1,'d', b,2,'d', 'out',[1,1],3,'d'); add comment Not the answer you're looking for? Browse other questions tagged c call scilab or ask your own question.
{"url":"http://stackoverflow.com/questions/14703036/scilab-call-c-function","timestamp":"2014-04-18T11:00:17Z","content_type":null,"content_length":"61308","record_id":"<urn:uuid:ea4e503f-7058-423f-bc87-1b8ce0ed3e6e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
MESON : thm list -> term -> thm Attempt to prove a term by first-order proof search. A call MESON[theorems] `tm` will attempt to prove tm using pure first-order reasoning, taking theorems as the starting-point. It will usually either solve the goal completely or run for an infeasible length of time before terminating, but it may sometimes fail quickly. Although MESON is capable of some fairly non-obvious pieces of first-order reasoning, and will handle equality adequately, it does purely logical reasoning. It will exploit no special properties of the constants in the goal, other than equality and logical primitives. Any properties that are needed must be supplied explicitly in the theorem list, e.g. LE_REFL to tell it that <= on natural numbers is reflexive, or REAL_ADD_SYM to tell it that addition on real numbers is symmetric. Will fail if the term is not provable, but not necessarily in a feasible amount of time. A typical application is to prove some elementary logical lemma for use inside a tactic proof: # MESON[] `!P. P F /\ P T ==> !x. P x`;; val it : thm = |- !P. P F /\ P T ==> (!x. P x) To prove the following lemma, we need to provide the key property of real negation: # MESON[REAL_NEG_NEG] `(!x. P(--x)) ==> !x:real. P x`;; val it : thm = |- (!x. P (--x)) ==> (!x. P x) If the lemma is not supplied, MESON will fail: # MESON[] `(!x. P(--x)) ==> !x:real. P x`;; Exception: Failure "solve_goal: Too deep". MESON is also capable of proving less straightforward results; see the documentation for MESON_TAC to find more examples. Generating simple logical lemmas as part of a large proof. ASM_MESON_TAC, GEN_MESON_TAC, MESON_TAC.
{"url":"http://www.cl.cam.ac.uk/~jrh13/hol-light/HTML/MESON.html","timestamp":"2014-04-20T16:15:23Z","content_type":null,"content_length":"2838","record_id":"<urn:uuid:09cb0998-1ff2-49f9-9475-b2e6e7840e0d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Simultaneous Equation x-3y=14 -(1) 4x+2y=0 -(2) What is x & y? thanks. Which part are you having trouble with? From the first equation you can see that $x=14+3y$ So plug $14+3y$ into the second equation in the place of $x$ then solve that for $y$. Once you have the value of $y$ plug that back into either equation to get the value of $x$. Let us know if you have trouble with this. Last edited by mrmohamed; December 1st 2009 at 02:10 AM. Wouldn't be useful here. Elimination method is only useful when the fact of adding or substracting together both equations gets rid of an unknown (like $-2x$ in the first one and $2x$ in the second). Here you wouldn't get anything useful [I never ever use the elimination method, oddly enough I prefer using substitution in all problems ...] but if you are asked to do the Elimination method ... $\left\{\begin{matrix}<br /> x-3y=14\\ <br /> 4x+2y=0<br /> \end{matrix}\right.$$\Leftrightarrow \left\{\begin{matrix}<br /> x-3y=14\\ <br /> 2x+y=0<br /> \end{matrix}\right.$ Gauss Elimination, $\ Rightarrow \left\{\begin{matrix}<br /> x-3y=14\\ <br /> 14y=-56<br /> \end{matrix}\right.$ Therefore,the solution is $[x=2,y=-4]$.
{"url":"http://mathhelpforum.com/algebra/117771-simultaneous-equation.html","timestamp":"2014-04-17T15:55:03Z","content_type":null,"content_length":"45328","record_id":"<urn:uuid:d4717dc5-c31e-4c50-8671-6b0eab5b5849>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Daily Calorie Intake Calculator How Many Calories Do You Need? - Daily Calorie Requirement Several factors like age, health, body type and size, weight, fitness goal and activity level dictate the amount of calories you should eat in a day. Before determining how many calories are needed to gain or lose weight, let’s start with how many calories it takes to maintain your current weight. In order to get a general idea of how many calories you burn in a day, multiply your current weight by the daily amount of energy used: inactive persons should multiply weight times 14 to 16 calories; somewhat active 16 to 18 calories; and extremely active 18 to 20 calories. 110 lbs. somewhat active: 110 x 18 = 1980 calories 180 lbs. inactive: 180 x 14 = 2520 calories 250 lbs. extremely: 250 x 20 = 5000 calories If you want a more precise number of how many calories to consume in a day, you must take into consideration the constant rate at which calories are burned at a state of rest. This is called your basal metabolic rate (BMR). Combining your BMR with amount of work you body does when not at rest (computer work, lifting weights, cooking, jogging etc.) will give you your Total Daily Energy Expenditure (TDEE). Everyone’s BMR and TDEE varies: it takes less fuel to maintain a moderately active, petite forty-something female than it does an active, voracious teenage male. Here’s how to determine your BMR and TDEE: Males: 66 + [6.23 x weight (lbs.)] + [12.7 x height (in.)] – [6.8 x age (years)] Females: 655 + [4.35 x weight (lbs.)] + [4.7 x height (in.)] – [4.7 x age (years)] BMR + Work = Total Daily Energy Expenditure [TDEE] If you BMR is 1500 calories a day and you spent 45 minutes on a treadmill and burned 500 calories and needed another 400 calories to complete an eight-hour work day at the office, then your TDEE would be 2400. Harris-Benedict Formula The Harris Benedict Formula calculates daily calorie intake by multiplying your BMR by your activity level. The more active you are, the higher the number you multiply your BMR by: Little to no exercise BMR x 1.2 Light exercise (1-3 times per week) BMR x 1.375 Moderate exercise (3-5 times per week) BMR x 1.55 Heavy exercise (6-7 times per week) BMR x 1.725 Extremely heavy exercise (more than 7 times per week + physical job) BMR x 1.9 But of course we all have different body types, and one shortcoming of the Harris-Benedict Formula is that it treats all bodies the same. This formula fails to consider leaner, more muscular individuals by underestimating the number of calories needed. Similarly, and on the opposite end of the spectrum, the amount of calories needed for overweight individuals is overestimated. The Harris-Benedict Formula is most beneficial to the average person trying to maintain a current acceptable body weight of shed a few pounds. Katch-McArdle Formula The Katch-McArdle formula uses a different equation to calculate your BMR. It does not take sex into consideration, but readily adjusts to varying body types. If you know your lean body mass percentage (or your percentage of body fat), then you can apply this information to get a more accurate BMR. For example: A 170 lb. person with 20% body fat would have 136 lbs. of lean muscle mass and 34 lbs. of fat: [170 lbs. x 20% body fat= 34] , then [170 lbs. – 34= 136 lbs. of lean muscle mass] Then calculate your BMR by using the following equation: 370 + [9.79759519 x lean muscle mass] = BMR 370 + [9.79759519 x 136] = 1702.47295 Once you have established your BMR using the Katch-McArdle formula, apply the same numbers used in the Harris-Benedict table to incorporate the amount of weekly physical activity to get your TDEE. Little to no exercise BMR x 1.2 Light exercise (1-3 times per week) BMR x 1.375 Moderate exercise (3-5 times per week) BMR x 1.55 Heavy exercise (6-7 times per week) BMR x 1.725 Extremely heavy exercise (more than 7 times per week + physical job) BMR x 1.9 Below are a few more ways for you to figure out just how many calories your body needs to maintain your current weight. Resting Metabolic Rate Step 1: Multiply your body weight by 10. Step 2: Determine what your overall activity level is - very active=add 60% - 80% to your RMR; moderately active=add 40% - 60%; generally sedentary, add 20% - 40%. For example, if you weigh 150-pounds and consider yourself to be moderately active, then the following equation - 1500 + (1500 x 50%) = 1500 + 750, or 2250 calories - is would apply to you. Be sure to enter your particular numbers into this equation in order to find out exactly what your caloric intake should be each and every day. Mifflin-St Jeor Equation This equation is one of the most accurate methods to use when it comes to figuring out you’re your particular caloric needs should be for the day. For men: RMR = 9.99 x wt (kg) + 6.25 x ht (cm) – 4.92 x age (yrs) + 5 For women: RMR = 9.99 x wt (kg) + 6.25 x ht (cm) -4.92 x age (yrs) – 161 Step 1: First convert pounds to kilograms by dividing by 2.2. Step 2: Next, convert the inches to centimeters by multiplying by 2.545. For the most part, all moderately active people are generally advised to consume about 1.5 to 1.7 times the calculated resting metabolic rate (RMR). Setting Caloric Goals Once you know what the recommended amount of daily calories are for you, you can easily figure out how to adjust this number in order to either lose or gain weight. Given that 3,500 calories is equal to 1 pound, you are able to take your daily allotment of calories and figure how many less or more calories you will need to get you to the weight that's just right for you! Counting Calories to Lose Weight When counting calories to lose weight, there are a few key pieces of information to know before starting a plan. • Don’t try to lose weight too fast. Fad diets claiming you will lose more than 2 pounds per week are usually unhealthy, and your body is quick to reclaim the lost weight once off the diet. Eliminating 500 calories from your body per day, whether achieved by eating less, exercising more, or a combination of the two, will result in losing one pound per week. A pace for healthy weight loss is two pounds per week. • Second, should your calorie counter determine eating less than 1000-1200 calories per day for a woman and 1300-1500 for a man, forgo what the calorie counter says and consume the aforementioned minimums. Eating less can sometimes cause the body to slow your metabolic rate; the slowing of the metabolism is sometimes referred to as “starvation mode.” Whether improving your self-image or overall health, people who lose weight feel better. Starving your body will leave you feeling just the opposite: sluggish, run-down, and sickly. So remember to stay informed of what you eat and stay active! Sources: http://www.cordianet.com/calculator.htm http://www.multivitamin.co.uk/bmr-results/ http://www.bmi-calculator.net/bmr-calculator/harris-benedict-equation/ http://www.roth.net/Articles/Misc/ Calories/ http://healthylifejournal.org/articles/what-are-calories-and-how-do-they-affect-you/ Matthew Johnson ( ) is a certified personal trainer, nutrition expert and an on-line fitness consultant that started ChangingShape.com back in 2001.
{"url":"http://www.changingshape.com/calculators/daily-calorie-intake/","timestamp":"2014-04-19T14:29:49Z","content_type":null,"content_length":"36981","record_id":"<urn:uuid:fa2b88e9-6750-4d9a-9b45-533fb853fac2>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
Finitely constrained classes of homogeneous directed graphs , 1996 "... A class of finite tournaments determined by a set of "forbidden subtournaments" is wellquasi -ordered if and only if it contains no infinite antichain (a set of incomparable elements). It is not known if there is an algorithm which decides whether or not a class of finite tournaments determined by a ..." Cited by 5 (2 self) Add to MetaCart A class of finite tournaments determined by a set of "forbidden subtournaments" is wellquasi -ordered if and only if it contains no infinite antichain (a set of incomparable elements). It is not known if there is an algorithm which decides whether or not a class of finite tournaments determined by a finite set of forbidden subtournaments has this property. We prove a noneffective finiteness theorem bearing on the problem. We show that for each fixed k, there is a finite set of infinite antichains, 3 k , with the following property: if any class defined by k forbidden subtournaments contains an infinite antichain, then a cofinite subset of an element of 3 k must be such an antichain. By refining this analysis and using an earlier result giving an explicit algorithm for the case k = 1, we show that there exists an algorithm which decides whether or not a class of finite tournaments determined by two forbidden subtournaments is well-quasi-ordered. Keywords: antichain, decidability, forb... , 2002 "... The tournament N 5 can be obtained from the transitive tournament on . . . , 5}, with the natural order, by reversing the edges between successive vertices. Tournaments that do not have N 5 as a subtournament are said to omit N 5 . We describe the structure of tournaments that omit N 5 and use th ..." Cited by 2 (1 self) Add to MetaCart The tournament N 5 can be obtained from the transitive tournament on . . . , 5}, with the natural order, by reversing the edges between successive vertices. Tournaments that do not have N 5 as a subtournament are said to omit N 5 . We describe the structure of tournaments that omit N 5 and use this with Kruskal's Tree Theorem to prove that this class of tournaments is well-quasi-ordered. The proof involves an encoding of the indecomposable tournaments omitting N 5 by a finite alphabet. - Edinburgh, 2001), Linear Algebra Appl , 2001 "... We survey computers systems which help to obtain and sometimes provide automatically conjectures and refutations in algebraic graph theory. ..." , 1999 "... This paper will describe the basic architecture of the system and illustrate its flexibility with several examples. These descriptions will be accompanied by commentary on the associated design decisions, but will certainly not be exhaustive. The LINK manual fills in We would like to acknowledge the ..." Cited by 1 (0 self) Add to MetaCart This paper will describe the basic architecture of the system and illustrate its flexibility with several examples. These descriptions will be accompanied by commentary on the associated design decisions, but will certainly not be exhaustive. The LINK manual fills in We would like to acknowledge the support of DIMACS and NSF grant CCR-9214487. DIMACS is a cooperative project of Rutgers University, Princeton University, AT&T Laboratories, Lucent Technologies/Bell Laboratories Innovations, and Bellcore. DIMACS is an NSF Science and Technology Center, funded under contract STC-91-19999; and also receives support from the New Jersey Commission on Science and Technology. , 2002 "... Tournament embedding is an order relation on the class of finite tournaments. An antichain is a set of finite tournaments that are pairwise incomparable in this ordering. We say an A # B. Those Add to MetaCart Tournament embedding is an order relation on the class of finite tournaments. An antichain is a set of finite tournaments that are pairwise incomparable in this ordering. We say an A # B. Those , 2009 "... We discuss two combinatorial problems concerning classes of finite or countable structures of combinatorial type, constrained to omit a specified finite number of constraints given by forbidden substructures. These constitute decision problems, taking the finite set of constraints as input. While th ..." Add to MetaCart We discuss two combinatorial problems concerning classes of finite or countable structures of combinatorial type, constrained to omit a specified finite number of constraints given by forbidden substructures. These constitute decision problems, taking the finite set of constraints as input. While these problems have been studied in a number of natural contexts, it remains far from clear whether they are decidable (perhaps in polynomial time) in their general form. Among the open problems of the final section, we point to Problem 17 as a particularly concrete one; we have given a translation of that problem into completely explicit graph theoretic terms.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1493323","timestamp":"2014-04-16T12:30:25Z","content_type":null,"content_length":"25710","record_id":"<urn:uuid:7a5f340a-cf79-4d07-8156-395cd4f577b6>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof of convergence How can we prove that [tex]n^s-(n-1)^s[/tex] converge to zero as [tex]n \to \infty[/tex] where s as a real number satisfies [tex]0<s<1[/tex]? I am specifically looking for a more or less elementary proof for this for real s. I think we can use the infinite binomial expansion, but I am looking for something that does not require more than elementary calculus.
{"url":"http://www.physicsforums.com/showthread.php?t=354520","timestamp":"2014-04-20T06:01:15Z","content_type":null,"content_length":"25201","record_id":"<urn:uuid:af235ff8-0bc9-4339-b5d8-9e5ea75dc2c9>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
About SA(X)S If you are not familiar with small-angle scattering, this may be a good place to start. update: check out this review paper, which discusses most aspects of SAXS data collection, all possible data correction steps and has a few remarks on the subsequent data fitting steps. Besides this introductory text, there is also a good write-up here, which is a little more in-depth and expansive. Welcome to small-angle scattering. Techniques such as light scattering (LS), small-angle x-ray scattering (SAXS) and small-angle neutron scattering (SANS) are all small angle scattering techniques. These techniques are used to analyse the shape and size of objects much smaller than can be seen by eye, by analysing the way that radiation (light, x-rays or neutrons) is scattered by the objects. The sizes we are talking about range from 1 micrometer to several angstrom (1 angstrom is 1x10^-10 meters, or 0.0000000001 meter). Unlike microscopic techniques, in small-angle scattering, the smaller the size, the easier it is to measure. There are other marked differences with microscopy, which will be discussed after explaining what the technique is. All aforementioned scattering techniques operate on the principle of wave interference when contrasting features are encountered in the sample. No photon energy loss is assumed in most cases, so that all scattering effects can be considered purely elastic. And can be treated as a pure wave-interference effect. For the technique to work, a beam of parallel ‘light’ is shone upon a sample. Small contrasting features cause a fraction of the light to deviate from its path, thus scattering. This scattered light is then collected on an angle-discriminating detector. As mentioned, scattering occurs when contrasting features are present. This has implications to the type of samples that can be measured. Among the samples that can be analysed with small-angle scattering are proteins in solution, polydisperse particles, pore structures and density differences in single-phase samples. What is important is that there is a contrast between the particle or pore, and the surrounding material in the sample. The type of contrast varies depending on the type of radiation used. In small-angle x-ray scattering this is an electron density contrast, for small-angle neutron scattering this is a neutron scattering cross-section, and for small-angle light scattering this is a transmittance contrast. This difference in contrast type means that if there is a contrast problem while measuring a sample with one technique (i.e. too little or too much contrast), by switching to a different radiation type you may be able to solve your problems. Like its sister technique, diffraction, small-angle scattering charcterises lengths in a sample. Simply said, small objects scatter a little to larger angles, and large objects scatter a lot to small angles. Since the scattering behaviour is highly dependent on the lengths in the sample, this length information can (in most cases) be retrieved. A variety of analysis methods can be used to interpret the scattering behaviour. These analysis methods revolve around a mathematical construct known as the Fourier transform. Effectively, the scattering can be calculated as the Fourier transform of the real-space structure. In other words, a measurement is a physical Fourier transform of your sample. Unfortunately, most detectors can only measure the intensity of the Fourier transform, and not its complementary phase information, and thus vital information is lost, complicating data interpretation. A marked differences from diffraction techniques is that the small-angle scattering pattern does not contain much information. Indeed, without information on the sample morphology obtained from other techniques such as microscopy, the small-angle scattering information is open to a multitude of interpretations (technically, this is true of diffraction as well, but methods have been developed to get around this problem for some diffraction data). So why would we choose to use this technique as opposed to, say, microscopic methods? Comparison with microscopy techniques Microscopy methods are the first techniques that come to mind when faced with a problem involving the analysis of small particles, and for good reasons. Microscopy allows for visualisation of objects of virtually any size below a centimeter or so. Optical and confocal microscopes can image objects to sub-micron resolution. Scanning electron microscopes image down to several hundred nanometers, and transmission electron microscopes go even further down, sometimes to near-atomic resolution. Scanning-probe microscopes can probe along the entire size-range. (For sizes comparable to those probed by small-angle scattering techniques, optical microscopy is often not an option and will not be considered beyond this point.) But if you talk to microscopists, you will find that one of the bigger challenges in the successful application of microscopy lies in the sample preparation. The further down you want to go in size, the more complex the preparation can become, and many (soft) samples can therefore not be measured properly. The sample preparation method may also induce a structure in the sample, for example due to stresses imposed during cutting, grinding or polishing. This can make it difficult to distinguish the sample structure from the induced artefacts. But even if the sample preparation is done perfectly, you can only observe a small fraction of the complete sample at a time. In order to get a good statistical average, you need to prepare many samples and measure each sample at multiple locations. This is far from impossible, but it makes you start to think if there would be another solution to your problem. Finally, if you are interested in observing your sample in-vivo or in-situ, e.g. under reaction conditions or high pressure, microscopic techniques ofen fail to offer solutions. With small-angle scattering, these problems are substituted for others (which we will come to in a minute). Firstly, small-angle scattering requires only very little, if any, sample preparation. Liquid samples can be measured in capillaries, for example, and fibre samples can be measured simply by suspending the fibre in the beam. Many in-situ reactors can also be used so that reactions can be probed as they proceed. An additional benefit is that the information present in the scattering pattern pertains to all objects in the irradiated volume. This means that if you want to get an average particles parameter over billions of particles in a suspension, all you need to do is make sure the irradiated sample volume is large enough to contain a billion particles. In small-angle x-ray scattering, volumes of about one to several cubic millimeters are commonly probed. For neutron scattering techniques, this irradiated volume is usually a cubic centimeter or more! So there are certainly advantages of using small-angle scattering methods over microscopes methods, yet microscopes dominate laboratories. Why is that? That is related to the challenges of analysis of small-angle scattering patterns, and the challenges of collecting a good scattering pattern. Challenges with small angle x-ray scattering Please excuse my focus in this case on x-ray scattering, it is simply the form of scattering I am most familiar with. Some challenges, but not all, of x-ray scattering can be translated to the other scattering techniques. There are many fine details about the challenges of SAXS. Besides the challenges of collecting a near-perfect scattering pattern (which in many cases can be overcome through hard work and understanding), the main issue lies with the data analysis. If you have managed to collect the perfect background-subtracted scattering pattern in the right angular region on an absolute scale with uncertainties for each data point, the road ahead may still be unclear. The problem lies mainly in the ambiguity of the data, which has become ambiguous since we have lost information in the scattering process (as we have detected only the scattering intensity and have lost the phase information of the Fourier transform). The most clear example of this loss is that particle shape and polydispersity can no longer both be retrieved. This is another important concept in scattering; you can only determine the polydispersity if you know the particle shape, and you can only determine the particle shape if you have information on the polydispersity (for example if you have a monodisperse sample of proteins in solution, or if you have information on the distribution shape). This means experimentally that, in order to get convincing results of your SAXS data analysis, you need complementary microscopy information. Even with all the information, data fitting is difficult at best. In short then, the technique is still (a hundred years after conception) in dire need of development, mostly in terms of data analysis. This weblog aims to collect some of the information on the web for other users and developers, and adding a bit more of my own research, in the hope that we can get closer to making small-angle scattering easy. under construction About SA(X)S You can follow all the replies to this entry through the comments feed 1. Thank you for sharing your knowledge! 2. Thanks for sharing the information. Also, do you know how to analyze the SAXS data? I am wondering about a method to find partical sizes and structures? 3. Hey. There are methods for determining particle sizes and structures, but you can only determine one of them with SAXS. To be more precise, you can only determine one of the following, you have to know the other two: Size distribution, shape, packing. As soon as you know two (from microscopy, for example) you can determine the third. For a general introduction, check out this talk: http://www.youtube.com/watch?v=_SLhMXjzmIk To “see” what is in a scattering pattern, I recommend this: http://www.youtube.com/watch?v=ym42jYPM34Y&feature=plcp There is some more work coming later about data analysis, but that will take some more time. 4. Hey Brian, I was wondering if you know anything about calibration of SAXS instrument with AgBh? Heaving a bit of struggle here with interpreting it and explaining the process step by step … 5. Sure, no problem. Check your e-mail in a minute… 6. Hi Brian, Are you using a Mac to make your presentations, such as the one on SAXS? 7. Yup, a Mac, Keynote, wiimote (for advancing slides), a healthy dose of recalcitrance, plenty of practice and plenty of hints and tips from http://www.presentationzen.com/ Why do you ask? 8. Hey Brian, In my research i use SANS measurments, do you know how can i obtain the scattering length density profile ? (Psld(r)) ?
{"url":"http://www.lookingatnothing.com/index.php/about-saxs","timestamp":"2014-04-19T04:20:09Z","content_type":null,"content_length":"42414","record_id":"<urn:uuid:05f9dc7f-37cd-4381-a908-10014577990b>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Synthesizer: Test Analysis If you?re in AP calculus now, now you?re probably one of the Math champions. No, not probably you are one of the champions. So it?s most likely you get really good study skills. But you know, you can always use some extra tricks. Especially, when you?re taking an AP test where there?s really heavy time pressure. I?m going to explain an activity to you that will help speed up your working on the AP test. Will help organize your thoughts. But first I you need you to go and download three different things. The first thing is a synthesizer that?s called test analysis. You?ll find that in the bonus materials. The next item is a little cheat sheet I put together for you. It?s a list of all the episodes in our course, along with all the sub-topics. The purpose of this exercise, is to use the worksheet that I developed for you, and identify test problems according to episode and sub-topic. Finally, you need to follow the link to the official College Board website, where you will find the link to the May 2009, official course description for AP calculus. Now within that one, you?re going to find two different sections you need to download. One of them is the sample free response questions. The other is the sample multiple choice questions. We?re going to look at the sample multiple choice questions first, and I?m going to stop for a minute while you download all that stuff. Got all that stuff ready? Now again, look at the multiple choice. I?d like you to look at multiple question number one. Have your test analysis worksheet ready. Take a minute to read it. Take your time because right not you?re not on a time pressure. Take your time, read it and try to think about when we covered it in the course. Look for some key words. This one doesn?t have a lot of writing in it, but it says limit, so limit. Pretty darn good chance that it?s in the episode where we talked about doing limits. Right now, next thing you know says what?s got Cosines in it? So maybe it?s that special cosine limit that I asked you to memorize. Look through the follow along and see if it?s in there. No it?s not quite that. But wait, look how long that thing is. Remind you of something? Yeah the definition of a derivative that's what it is. It?s one of the problems where you?re being asked to recognize the definition of a derivative. And look at that problem and just decide what derivative it is. So, let?s write that down. The episode name was manual limits. The sub-topic is recognizing the definition of a derivative. Now over in the notes section, you might want to write down anything that occurs to you, you might actually want to do the problem there. And if you?re not remembering how to do this kind of a problem very well, you?ll probably want to check out the box for review. You?re probably in the last stages of reviewing for AP test now, and you?re time is limited. So one thing you need you need to keep in mind is that, you might not actually do all of these problems. The goal right now is just to go through quickly and analyze all of the problems. Then go back later and as you have time, then actually try doing the problems. Right now just analyze them. Your teacher might have told you by now, go back and study for the test. Well I hear this from students all time, ?Oh I can?t study for math test you just know it or you don?t.? Nonsense, but you do need to know how to structure what you?re doing to conserve your time. So if you?re going to go back and study, you probably have a stuck of notes about this big and you can?t possibly read it all. You need to prioritize your time. The purpose of this synthesizer is to help prioritize your study. The first thing I would like you to do, is download a copy of the synthesizer that is entitled 'review your old test'. Then go ahead and get all your old test. You might want to print a few copies of this synthesizer. When you?re prioritizing, what you need to do is rule out the stuff that you don?t need to spend any study time on. Now this activity again takes you again from our course and has you do a broad review of everything that you did in AP calculus. The first thing to do is to look for your old test. Now any problem that got right the first time, you might not need to review. Look at them again though quickly because, there might have been that you new a few months ago that you have forgotten. Make that a quick scan. If there is something that you have forgotten, well write it down. Let?s see, I?m looking through one of my old test, and oh no there?s an integral, I completely forgotten this thing. Oh no the problem asked me to do the integral of the secant of u, du, I?ve forgotten that thing done. So look it up write it down. This is one of the ones that you might need to memorize. They do show up on the AP test from time to time. They?re not supper common but you might see it. This one is the natural logarithm of the secant of u plus the tangent of u, plus of course your old friend the constant of integration. So I wrote that down because I?m probably going to be needing that one. This one was maybe your chapter 4 and it was problem number 12 on the test. So what you?re writing down is a list of things that you?ll want to review now. And maybe briefly look at again the day before the test. Next, and very important, is to look at any problem that you got wrong the first time. If you did, you definitely want to circle that one and take a close at it. You might have gotten it down since then, but you want to be sure. The second page of this synthesizer asks you to research additional AP topics. Now, we?ve done our best to make this course complete as possible. I?ve included practically every kind of odd AP problem that I could think of, that I could find. Still, we couldn?t cover quite everything. We skipped a lot of the basic topics. The point of this course was to speed you up on the on the more complex topics. And there is one additional complex topic that you?ll see from time to time, exponentials ,that I didn?t really cover very well. When you download this, you?ll see a list of the topics that I?ve identified as ones that we haven?t covered thoroughly. Here is the one I just mentioned. Compare relative magnitudes of functions such as exponential, polynomial and logarithmic growth and the relative rates of change. So here is a possible example problem that you could do. Your goal is to look through the list of additional topics that I made for you. Find where it is in your textbook or within your notes that will be great too, and try some examples. You?ve probably covered them in your course, but I?m sure you can use the review. Analyze AP test problems to guide studying
{"url":"https://www.brightstorm.com/test-prep/ap-calculus-ab/ap-calculus-review/synthesizer-test-analysis/","timestamp":"2014-04-17T04:14:21Z","content_type":null,"content_length":"54568","record_id":"<urn:uuid:ee45672b-e866-4858-a708-0bcadbcf5c75>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
[R-SIG-Finance] [R-sig-finance] Plese help me to understand this Bogaso bogaso.christofer at gmail.com Tue Jan 27 18:25:39 CET 2009 Hi all, Can anyone please help me to understand, whether there is any difference between "Zero rate" and "Zero Yield". I thought they are synonymous and both are used in Zero-coupon bond context. However I got different results in my textbook while those two are used in different places. Here is the problem : Case 1. Suppose 5-year spot yield is 7.6 p.a.. Now assume there is a receivable after 5 year worth $100. Then current value of this receivable is $100*(1+7.6*5/100)^-1. Am I correct? However in my textbook, it is reported as $100*(1+7.6/100)^-5. Case 2. Suppose 6 month spot rate 6.39 p.a.. Now assume there is a receivable after 6 months worth $100. Then current value of this receivable is $100*(1+6.39/100/2)^-1. Here my understanding is matching with my Please forgive me as I understand, this question is too fundamental. But still I think I am missing something important. Can anyone please help me? View this message in context: http://www.nabble.com/Plese-help-me-to-understand-this-tp21690051p21690051.html Sent from the Rmetrics mailing list archive at Nabble.com. More information about the R-SIG-Finance mailing list
{"url":"https://stat.ethz.ch/pipermail/r-sig-finance/2009q1/003583.html","timestamp":"2014-04-16T07:24:07Z","content_type":null,"content_length":"3946","record_id":"<urn:uuid:e6bfabe0-dccb-4524-a0d2-19066626641e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Factor completely: x2 + 6x + 8 • one year ago • one year ago Best Response You've already chosen the best response. do u know how to factorize? Best Response You've already chosen the best response. Kinda Sorta Best Response You've already chosen the best response. How do u factorise a quadratic equation ( ax2 +bx +c form)? Best Response You've already chosen the best response. Take c and find common factors then see if they add up to b? Best Response You've already chosen the best response. ik i got the medal! LOL look Best Response You've already chosen the best response. Given: First list the factors of 8. They are: 1,2,4,8 Look at the factors of 8 and see what ones will give you the 6 that you need. 2 + 4 = 6 So we will use 2 7 4 to factor the equation. Notice that the six is negative while the eight is positive and that the six is larger than both of the factors that we want to use. This means that both factors are negative. so: Check the answer: Combine like terms The answer was right becasue we are back to where we started. Best Response You've already chosen the best response. Best Response You've already chosen the best response. x2+4x+2x+8 x(x+4)+2(x+4) x+4)(x+2) are factors Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5149b718e4b05e69bfabb5d8","timestamp":"2014-04-21T02:31:22Z","content_type":null,"content_length":"44593","record_id":"<urn:uuid:380428fa-c1a0-4739-802a-1c89548068bf>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about RSS on Xi'an's Og Last evening, I attended the RSS Midlands seminar here in Warwick. The theme was chain event graphs (CEG), As I knew nothing about them, it was worth my time listening to both speakers and discussing with Jim Smith afterwards. CEGs are extensions of Bayes nets with originally many more nodes since they start with the probability tree involving all modalities of all variables. Intensive Bayesian model comparison is then used to reduce the number of nodes by merging modalities having the same children or removing variables with no impact on the variable of interest. So this is not exactly a new Bayes net based on modality dummies as nodes (my original question). This is quite interesting, esp. in the first talk illustration of using missing value indicators as a supplementary variable (to determine whether or not data is missing at random). I also wonder how much of a connection there is with variable length Markov chains (either as a model or as a way to prune the tree). A last vague idea is a potential connection with lumpable Markov chains, a concept I learned from Kemeny & Snell (1960): a finite Markov chain is lumpable if by merging two or more of its states it remains a Markov chain. I do not know if this has ever been studied from a statistical point of view, i.e. testing for lumpability, but this sounds related to the idea of merging modalities of some variables in the probability tree…
{"url":"http://xianblog.wordpress.com/tag/rss/","timestamp":"2014-04-20T00:37:19Z","content_type":null,"content_length":"81114","record_id":"<urn:uuid:b44d0848-2724-4c8a-8f3a-cd58d6cb2d93>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Multiple interior peak solutions for some singularly perturbed Neumann problems. (English) Zbl 1061.35502 Summary: We consider the problem ${ϵ}^{2}{\Delta }u-u+f\left(u\right)=0,$ $u>0$, in ${\Omega },\partial u/\partial u =0$ on $\partial$, where ${\Omega }$ is a bounded smooth domain in ${ℝ}^{N},ϵ>0$ is a small parameter, and $f$ is a superlinear, subcritical nonlinearity. It is known that this equation possesses boundary spike solutions that concentrate, as $ϵ$ approaches zero, at a critical point of the mean curvature function $H\left(P\right),\phantom{\rule{4pt}{0ex}}P\in \partial {\Omega }$. It is also proved that this equation has single interior spike solutions at a local maximum point of the distance function $d\left(P,\partial {\Omega }\right),P\in {\Omega }$. In this paper, we prove the existence of interior $K$-peak $\left(K\ge 2\right)$ solutions at the local maximum points of the following function: $\varphi \left({P}_{1},{P}_{2},\cdots ,{P}_{K}\right)={min}_{i,k,l=1,\cdots ,K;ke l}\left(d\left({P}_{i},\partial {\Omega }\right),\frac{1}{2}|{P}_{k}-{P}_{l}|\ right)$. We first use the Lyapunov-Schmidt reduction method to reduce the problem to a finite-dimensional problem. Then we use a maximizing procedure to obtain multiple interior spikes. The function $\varphi \left({P}_{1},\cdots ,{P}_{K}\right)$ appears naturally in the asymptotic expansion of the energy functional. 35J25 Second order elliptic equations, boundary value problems 35B25 Singular perturbations (PDE) 47J30 Variational methods (nonlinear operator equations) 47N20 Applications of operator theory to differential and integral equations
{"url":"http://zbmath.org/?q=an:1061.35502","timestamp":"2014-04-20T09:05:40Z","content_type":null,"content_length":"24345","record_id":"<urn:uuid:533ba05a-9a50-4666-b6c8-3bc32893a00c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
Logical promblem with "or" and "Not" operator December 29th 2010, 04:55 AM #1 Dec 2010 Logical promblem with "or" and "Not" operator I have gotten into a problem in a application the search uses logical operators "or" and "not" now when i have the following search "1 'or' 1 'not' 1" the result always comes up with 1 but there are two ways to simplify the equation the first one is "1 'or' (1 'not' 1)" this would end up with 1 as the answer, the second one is "(1 'or' 1) 'not' 1" this gives me an answer 0. In reality when this logic is used the second one is more suitable than the first one. Please let me know if i am wrong or the applications precedence is wrong PS: please let me know the precedence for logical operator. The connective 'not' is unary, not binary, so "1 'or' (1 'not' 1)" does not look right. Perhaps what is meant is 'and not'. Though this has to be always double-checked, usually 'and' (an analog of multiplication) binds tighter than 'or' (an analog of addition), and unary 'not' binds tighter than both of them. So "1 'or' 1 'and not' 1" would be interpreted as "1 'or' (1 'and not' 1)". Thanks for the quick reply so the 'not' has a higher precedence than 'and'/'or. The precedence go like 'not>and>or' please correct me if i'm wrong. The precedence go like 'not>and>or' In most, if not all, places that I've seen, yes. Thanks for your help. December 29th 2010, 06:33 AM #2 MHF Contributor Oct 2009 December 29th 2010, 07:49 AM #3 Dec 2010 December 29th 2010, 08:26 AM #4 MHF Contributor Oct 2009 December 29th 2010, 04:41 PM #5 Dec 2010
{"url":"http://mathhelpforum.com/discrete-math/167068-logical-promblem-not-operator.html","timestamp":"2014-04-20T01:51:55Z","content_type":null,"content_length":"41655","record_id":"<urn:uuid:f5e389b8-841b-4235-b223-10ddb4024776>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
Thornton, IL Geometry Tutor Find a Thornton, IL Geometry Tutor ...The college counseling process has many important steps, all of which are crucial to student success. The first step in this process, of course, is working with the student to identify their interests and goals. Once these interests and goals are identified, families can methodically sort the college brochures, based on whether they highly competitive, competitive, and "safety 28 Subjects: including geometry, English, writing, algebra 1 ...I done this with dozens of students. I have taught basic discrete/finite math (probability, combinatorics, naive set theory, basic linear algebra, linear programming) at the college level. I also wrote a study guide covering all the above topics. 24 Subjects: including geometry, calculus, physics, GRE ...I have prepared students in the math portion of the ACT test. I have worked with students who were failing math very late in the year and helped them attain a final grade of B or C. However, my greatest thrill is seeing the change in a student’s confidence in themselves as they go from failure to success. 14 Subjects: including geometry, algebra 1, algebra 2, GED ...So I am here to serve you. I am very skilled at one-on-one tutoring, small group tutoring, and online math course help. I tutor the way that I teach--present the material in a way that it can understood, practice with my clients, and give them opportunities to try themselves with me right there to troubleshoot. 11 Subjects: including geometry, GRE, algebra 1, GED ...With practice, I find students become particularly good at those, along with the more enjoyable, conceptual aspects of Algebra 2. Developing vocabulary is a vitally important component in developing critical reading. My approach to developing vocabulary is through word study, to help students better identify common suffixes, prefixes and root words. 20 Subjects: including geometry, reading, English, writing
{"url":"http://www.purplemath.com/thornton_il_geometry_tutors.php","timestamp":"2014-04-21T13:01:19Z","content_type":null,"content_length":"24227","record_id":"<urn:uuid:1c65aa62-56bb-440e-bc8a-801dde71c3ab>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Label-based addressing Replies: 4 Last Post: Apr 9, 2013 3:04 PM Messages: [ Previous | Next ] kj Label-based addressing Posted: Apr 8, 2013 7:05 PM Posts: 179 Registered: 8/17/09 Let me start with a hypothetical (but representative) use-case. Suppose I have data stratified over 5 brands of beer, 8 yearly quarters, 4 age groups, and 50 states. Furthermore, let's say that for each possible combination of beer brand, yearly quarter, age group, and state, I have computed a heterogenous k-vector of statistics (e.g. total consumption, consumption per capita, etc.). Therefore, all told, I'm talking about a four-dimensional (5 x 8 x 4 x 50) array of k-vectors. I would like to store all this data in an object that would allow me to address subsets of it using *labels* instead of numeric indices. By "labels" I mean the names of the "dimensions" (in this case "brand", "quarter", "age", "state"), the names of the possible values for each dimension, (e.g. for state, I'd have "AK", "AL", "AR", ..., "WI", "WY"), and the components of the vector of data associated with each combination of factors, (in this case these would include "total consumption", "consumption per capita", etc.). For example, supposing that the variable D held such a data structure, label-based addressing would allow me to use something like E = D.extract(state="AK", age="30-45") to store in E the subset of the data in D corresponding to the specified values. (The value in E, by the way, would be a similar data structure, but it would have different shape, since two of its dimensions now have the minimum depth of 1.) One can imagine many elaborations of this theme, but I think the above gives a flavor of what I have in mind. It is my understanding that MATLAB does not have this functionality. (But please correct me if I'm wrong!) Therefore I'd have to implement it myself. My naive idea is to define an object (class, that is) that internally stores one or more n-dimensional "boxes" for the data (as standard MATLAB arrays), as well as hash tables to map between labels and numeric indices. Methods like "extract" above would convert its arguments to numeric addresses, and would use these numeric addresses to fetch the data from the internal data boxes. I am fairly new to MATLAB, so I'd appreciate your comments on the above, and any words of wisdom you may give me on how best to do In particular, are there any existing packages I could use as *models* for this sort of project? Thanks in advance! Date Subject Author 4/8/13 kj 4/8/13 Re: Label-based addressing dpb 4/9/13 Re: Label-based addressing Loren Shure 4/9/13 Re: Label-based addressing Yair Altman 4/9/13 Re: Label-based addressing Matt
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2445580","timestamp":"2014-04-20T01:26:40Z","content_type":null,"content_length":"22852","record_id":"<urn:uuid:40697dc8-d81e-4dc2-8def-c4313682fab8>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Eventually, both Cormack and Hounsfield were recognized with a Nobel Prize for their contributions to the development of medical imaging. But it is clear that the ideas and mathematics were independently discovered by these and other scientists mentioned in this example. The key to the application of computed tomography in hospitals was the computer. In 1970, when practical disc operating systems first became available, it was immediately recognized that by using the fast Fourier transform and algorithms based on the work of Radon, Bracewell, and Cormack, a practical medical X-ray CAT scanner could be manufactured. In the early 1970s, a number of mathematician-scientist pairs commenced a series of discoveries that led to modern medical three-dimensional imaging. Noteworthy contributions circa 1974 came from three teams. First there were the contributions of Lawrence Shepp to the practical implementation of reconstruction. That work was clearly the result of his partnership with physicist Jerome Stein and physician Sadek Hilal in the filtered back projection method of reconstruction now used. Then, the team of Robert Marr, Paul Lauterbur, and Lawrence Shepp showed the power of arithmetic and Fourier techniques in three-dimensional reconstruction from projections in MRI. Grant Gullberg, Ronald Huesman, and Thomas Budinger, another team of mathematician, physicist, and physician, showed solutions to the attenuated Radon transform problem in SPECT. Currently, MRI and SPECT are being applied by teams of mathematicians, computational scientists, and statisticians to problems ranging from earthquake prediction to understanding and treating mental disorders, heart disease, and cancer. The message from this historical synopsis is that mathematicians and scientists working in separate locations and on seemingly unrelated scientific objectives related to the mathematical inverse problem made slow and generally unrecognized progress. But when both mathematicians and scientists worked together, as did the three teams cited above, progress was rapid and almost immediately SOURCE: Based on Herman (1979); Natterer (1986); and Deans (1983).
{"url":"http://books.nap.edu/openbook.php?record_id=9813&page=13","timestamp":"2014-04-21T09:53:11Z","content_type":null,"content_length":"36175","record_id":"<urn:uuid:54284fb0-b878-4929-bdf3-43e59af662c9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
tan (A + B) If A = arctan(2/3) and B = arctan(1/2), find tan (A + B). No. What he's saying is that $\tan\!\left(\arctan\!\left(\tfrac{2}{3}\right)\rig ht)=\tfrac{2}{3}$ and $\tan\!\left(\arctan\!\left(\tfrac{1}{2}\right)\rig ht)=\tfrac{1}{2}$. Your $A$ and $B$ values are still those arctangent values, but when you substitute them into your identity, you take into consideration what ProveIt told you (and what I elaborated above).
{"url":"http://mathhelpforum.com/trigonometry/87737-tan-b-print.html","timestamp":"2014-04-19T17:15:34Z","content_type":null,"content_length":"13767","record_id":"<urn:uuid:db1fc5d5-c106-4ae5-8e03-74e64f7a7545>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
Non-Euclidean Geometries Welcome to E-Books Directory This page lists freely downloadable books. Non-Euclidean Geometries E-Books for free online viewing and/or download e-books in this category Non-Euclidean Geometry: A Critical and Historical Study of its Development by Roberto Bonola - Open Court Publishing Company , 1912 Examines various attempts to prove Euclid's parallel postulate - by the Greeks, Arabs and Renaissance mathematicians. It considers forerunners and founders such as Saccheri, Lambert, Legendre, Gauss, Schweikart, Taurinus, J. Bolyai and Lobachewsky. (983 views) Hyperbolic Geometry by J.W. Cannon, W.J. Floyd, R. Kenyon, W.R. Parry - MSRI , 1997 These notes are intended as a relatively quick introduction to hyperbolic geometry. They review the wonderful history of non-Euclidean geometry. They develop a number of the properties that are particularly important in topology and group theory. (1414 views) The Elements of Non-Euclidean Plane Geometry and Trigonometry by Horatio Scott Carslaw - Longmans, Green and co. , 1916 In this book the author has attempted to treat the Elements of Non-Euclidean Plane Geometry and Trigonometry in such a way as to prove useful to teachers of Elementary Geometry in schools and colleges. Hyperbolic and elliptic geometry are covered. (1327 views) The Elements of Non-Euclidean Geometry by D.M.Y. Sommerville - G.Bell & Sons Ltd. , 1914 Renowned for its lucid yet meticulous exposition, this text follows the development of non-Euclidean geometry from a fundamental analysis of the concept of parallelism to such advanced topics as inversion and transformations. (2315 views) The Elements Of Non-Euclidean Geometry by Julian Lowell Coolidge - Oxford At The Clarendon Press , 1909 Chapters include: Foundation For Metrical Geometry In A Limited Region; Congruent Transformations; Introduction Of Trigonometric Formulae; Analytic Formulae; Consistency And Significance Of The Axioms; Geometric And Analytic Extension Of Space; etc. (3254 views) Neutral and Non-Euclidean Geometries by David C. Royster - UNC Charlotte , 2000 In this course the students are introduced, or re-introduced, to the method of Mathematical Proof. You will be introduced to new and interesting areas in Geometry, with most of the time spent on the study of Hyperbolic Geometry. (3206 views) The Eightfold Way: The Beauty of Klein's Quartic Curve by Silvio Levy - Cambridge University Press , 1999 Felix Klein discovered in 1879 that the surface that we now call the Klein quartic has many remarkable properties, including an incredible 336-fold symmetry. This volume explores the rich tangle of properties surrounding this multiform object. (3693 views) Non-Euclidean Geometry by Henry Manning - Ginn and Company , 1901 This book gives a simple and direct account of the Non-Euclidean Geometry, and one which presupposes but little knowledge of Mathematics. The entire book can be read by one who has taken the mathematical courses commonly given in our colleges. (5545 views) More Sites Like This Science Books Online Books Fairy Maths e-Books Programming Books
{"url":"http://www.e-booksdirectory.com/listing.php?category=462","timestamp":"2014-04-21T15:38:19Z","content_type":null,"content_length":"12460","record_id":"<urn:uuid:24a1007c-b44c-4e56-a62c-a20e1415f61b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
If 4<(7-x)/3, which of the following must be true? Author Message If 4<(7-x)/3, which of the following must be true? [#permalink] 12 Aug 2010, 14:05 This post received 45% (medium) Question Stats: (02:12) correct 53% (01:21) Joined: 21 Jun 2010 Posts: 113 based on 228 sessions Schools: Tuck, Duke, Cambridge, Said If 4<(7-x)/3, which of the following must be true? Followers: 3 I. 5<x Kudos [?]: 12 [4] , II. |x+3|>2 given: 0 III. -(x+5) is positive A) II only B) III only C) I and II only D) II and III only E) I, II and III I am confused about statement II ???? Spoiler: OA Last edited by on 12 Aug 2010, 14:44, edited 1 time in total. Re: Quant Review 2nd Edition DS#156, Inequality [#permalink] 12 Aug 2010, 14:39 This post received Expert's post mn2010 wrote: If 4<[(7-x)/3], which of the following must be true? I. 5<x II. |x+3|>2 III. -(x+5) is positive A) II only B) III only C) I and II only D) II and III only E) I, II and III I am not confused about statement II ???? Good question, +1. Note that we are asked to determine which MUST be true, not could be true. . So we know that , it's given as a fact. Now, taking this info we should find out which of the following inequalities will be true OR which of the following inequalities will be true for the Basically the question asks: if which of the following is true? --> not true as Bunuel |x+3|>2 Math Expert , this inequality holds true for 2 cases, (for 2 ranges): 1. when Joined: 02 Sep 2009 x+3>2 Posts: 17283 , so when Followers: 2868 x>-1 Kudos [?]: 18332 [6] , or 2. when given: 2345 , so when . We are given that second range is true ( ), so this inequality holds true. Or another way: ANY from the range (-5.1, -6, -7, ...) will make true, so as , then is always true. --> true. Answer: D. Hope it's clear. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Re: Quant Review 2nd Edition DS#156, Inequality [#permalink] 12 Aug 2010, 14:50 Joined: 21 Jun 2010 thanks Bunuel... Inequality still rattles me .... more practice I guess .... Posts: 113 Schools: Tuck, Duke, Cambridge, Said Followers: 3 Kudos [?]: 12 [0], given: Manager Re: Quant Review 2nd Edition DS#156, Inequality [#permalink] 14 Aug 2010, 07:50 Joined: 27 May 2010 thanks for the explanation bunuel. Posts: 103 Followers: 2 Kudos [?]: 6 [0], given: Joined: 03 Jun 2010 Re: Quant Review 2nd Edition DS#156, Inequality [#permalink] 20 Dec 2010, 04:29 Posts: 185 Great question. I thought we should eliminate II. Location: United States Concentration: Marketing, General Management WE: Business Development (Consumer Products) Followers: 4 Kudos [?]: 19 [0], given: Quant Rev #156: Inequalities [#permalink] 16 Jan 2011, 14:13 Current Student If 4 < \frac {7-x}{3}, which of the following must be true? Status: First-Year MBA Student I. 5 < x II. |x+3| > 2 Joined: 19 Nov 2009 III. - (x+5) is positive. Posts: 126 a. II only b. III only GMAT 1: 720 Q49 V39 c. I and II only d. II and III only Followers: 6 e. I, II, and III Kudos [?]: 44 [0], given: Re: Quant Rev #156: Inequalities [#permalink] 16 Jan 2011, 14:25 Math Expert Joined: 02 Sep 2009 This post received Posts: 17283 KUDOS Followers: 2868 Expert's post Kudos [?]: 18332 [1] , given: 2345 Re: Quant Review 2nd Edition DS#156, Inequality [#permalink] 16 Jan 2011, 19:15 Bunuel wrote: mn2010 wrote: If 4<[(7-x)/3], which of the following must be true? I. 5<x II. |x+3|>2 III. -(x+5) is positive A) II only B) III only C) I and II only D) II and III only E) I, II and III I am not confused about statement II ???? Good question, +1. Note that we are asked to determine which MUST be true, not could be true. . So we know that , it's given as a fact. Now, taking this info we should find out which of the following inequalities will be true OR which of the following inequalities will be true for the Basically the question asks: if which of the following is true? yogesh1984 --> not true as Senior Manager x<-5 Joined: 12 Dec 2010 . Posts: 282 II. Concentration: Strategy, |x+3|>2 General Management , this inequality holds true for 2 cases, (for 2 ranges): 1. when GMAT 1: 680 Q49 V34 GMAT 2: 730 Q49 V41 x+3>2 GPA: 4 , so when WE: Consulting (Other) x>-1 Followers: 5 or 2. when Kudos [?]: 29 [0], given: -x-3>2 , so when . We are given that second range is true ( ), so this inequality holds true. Or another way: ANY from the range (-5.1, -6, -7, ...) will make true, so as , then is always true. --> true. Answer: D. Hope it's clear. here for the |x+3| >2 we have 2cases- x> -1 or x <-5 (while second one satisfies the condn as asked but is not it we should be looking at all the possibilities and if all satisfies then only we can say that this option also holds as for as GMAT is concerned My GMAT Journey 540->680->730! ~ When the going gets tough, the Tough gets going! Re: Quant Review 2nd Edition DS#156, Inequality [#permalink] 17 Jan 2011, 02:29 Expert's post yogesh1984 wrote: Bunuel wrote: mn2010 wrote: If 4<[(7-x)/3], which of the following must be true? I. 5<x II. |x+3|>2 III. -(x+5) is positive A) II only B) III only C) I and II only D) II and III only E) I, II and III I am not confused about statement II ???? Good question, +1. Note that we are asked to determine which MUST be true, not could be true. . So we know that , it's given as a fact. Now, taking this info we should find out which of the following inequalities will be true OR which of the following inequalities will be true for the Basically the question asks: if which of the following is true? --> not true as , this inequality holds true for 2 cases, (for 2 ranges): 1. when , so when or 2. when , so when . We are given that second range is true ( ), so this inequality holds true. Or another way: ANY Math Expert from the range Joined: 02 Sep 2009 Posts: 17283 (-5.1, -6, -7, ...) will make Followers: 2868 true, so as , then is always true. --> true. Answer: D. Hope it's clear. here for the |x+3| >2 we have 2cases- x> -1 or x <-5 (while second one satisfies the condn as asked but is not it we should be looking at all the possibilities and if all satisfies then only we can say that this option also holds as for as GMAT is concerned true? --> this inequality is true if . Now, it's given that , so it must hold true. Or: ANY from the range (-5.1, -6, -7, ...) will make true, so as , then is always true. Hope it's clear. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Re: Quant Review 2nd Edition DS#156, Inequality [#permalink] 17 Jan 2011, 09:22 Senior Manager Joined: 12 Dec 2010 Is |x+3|>2 true? --> this inequality is true if x>-1 OR x<-5. Now, it's given that x<-5, so it must hold true. Posts: 282 Or: ANY x from the range x<-5 (-5.1, -6, -7, ...) will make |x+3|>2 true, so as x<-5, then |x+3|>2 is always true. Concentration: Strategy, General Management Hope it's clear. GMAT 1: 680 Q49 V34 Hmm... It was so obvious GMAT 2: 730 Q49 V41 GPA: 4 My GMAT Journey 540->680->730! WE: Consulting (Other) Followers: 5 ~ When the going gets tough, the Tough gets going! Kudos [?]: 29 [0], given: Intern Re: Quant Review 2nd Edition DS#156, Inequality [#permalink] 17 Jan 2011, 17:52 Joined: 17 Jan 2011 Nice one, got it myself as D Posts: 4 Followers: 0 Kudos [?]: 1 [0], given: Math Expert Re: If 4<(7-x)/3, which of the following must be true? [#permalink] 19 Jun 2013, 03:52 Joined: 02 Sep 2009 Expert's post Posts: 17283 Followers: 2868 Intern Re: If 4<(7-x)/3, which of the following must be true? [#permalink] 21 Aug 2013, 07:41 Joined: 13 May 2013 For Option II What about x>-1, i'm not getting.. Posts: 2 Followers: 0 Kudos [?]: 0 [0], given: Math Expert Re: If 4<(7-x)/3, which of the following must be true? [#permalink] 21 Aug 2013, 07:44 Joined: 02 Sep 2009 Expert's post Posts: 17283 Followers: 2868 Joined: 19 Oct 2012 Re: If 4<(7-x)/3, which of the following must be true? [#permalink] 23 Aug 2013, 22:38 Posts: 112 Good one! I got stumped with II, but now it makes more sense. Location: India _________________ Concentration: General Citius, Altius, Fortius Management, Operations GMAT 1: 660 Q47 V35 GPA: 3.81 WE: Information Technology (Computer Followers: 0 Kudos [?]: 12 [0], given: Re: If 4<(7-x)/3, which of the following must be true? [#permalink] 08 Jan 2014, 06:35 mn2010 wrote: If 4<(7-x)/3, which of the following must be true? kinjiGC I. 5<x II. |x+3|>2 Manager III. -(x+5) is positive Joined: 03 Feb 2013 A) II only B) III only Posts: 166 C) I and II only D) II and III only Location: India E) I, II and III Concentration: I am confused about statement II ???? Operations, Strategy 12 < 7-x => x < -5 GPA: 3.3 I. 5 < x not possible. WE: Engineering (Computer Software) II. |x+3| > 2 . now x < -5 or lets say x = -5.1 so |x+3| = |-2.1| = 2.1 > 2 So any case, it will always be more than 2. Definitely. Followers: 1 III. -(x+5) as x < -5 so x can be -5.1 so -(-.1) so +ve hence III is also possible. Kudos [?]: 18 [0], given: hence D) Never Give Up !!! Please click on Kudos, if you think the post is helpful gmatclubot Re: If 4<(7-x)/3, which of the following must be true? [#permalink] 08 Jan 2014, 06:35
{"url":"http://gmatclub.com/forum/if-4-7-x-3-which-of-the-following-must-be-true-99015.html?fl=similar","timestamp":"2014-04-16T19:51:28Z","content_type":null,"content_length":"271238","record_id":"<urn:uuid:c606215a-c5aa-4857-b212-959a5f18e137>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: marginal effect after xtlogit model Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: marginal effect after xtlogit model From Richard Williams <richardwilliams.ndu@gmail.com> To statalist@hsphsun2.harvard.edu, statalist@hsphsun2.harvard.edu Subject Re: st: marginal effect after xtlogit model Date Tue, 25 Oct 2011 22:49:40 -0500 At 09:15 PM 10/25/2011, ramesh wrote: I am using xtlogit model, which is in the form of: y= b0+b1.x1+b2.x1*x1+b3.x3 +b4.x4). I have one squared term (x1*x1) and two of my independent variables (x3 and x4) are binary and after using the 'xtlogit y x1 c.x1#c.x1 x3 x3' followed by 'margins, dydx(*) atmeans' I got the same beta coefficient as marginal effect, which is not correct. Is there any idea to correctly specify the marginal effect? For binary and other categorical variables, you should explicitly indicate that the variable is categorical by using the i. notation. Otherwise margins will assume the variable is continuous. In this xtlogit y x1 c.x1#c.x1 i.x2 i.x3 I also prefer to not use the atmeans option, but some people disagree about this and it may not make much difference in practice. I think your main problem, though, is that the default predict option for xtlogit is (I think) xb. See help xtlogit postestimation##predict and figure out what option you want to use instead, and specify it as part of the -predict- option on margins. Richard Williams, Notre Dame Dept of Sociology OFFICE: (574)631-6668, (574)631-6463 HOME: (574)289-5227 EMAIL: Richard.A.Williams.5@ND.Edu WWW: http://www.nd.edu/~rwilliam * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2011-10/msg01225.html","timestamp":"2014-04-20T23:51:41Z","content_type":null,"content_length":"9014","record_id":"<urn:uuid:aff938a3-3bec-4c91-beb8-55dab26d38a1>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
The IMA Conference on Mathematics in Finance Since John Maynard Keynes rescued a collection of Newton's private papers and declared that "Newton was not the first of the age of reason. He was the last of the magicians" the popular imagination has looked at the influence of esoteric arts on the emergence of Western' science. What is often forgotten is that in almost the same breath, Keynes declared Newton as "one of the greatest and most efficient of our civil servants", in recognition of his work as Master and Warden of the Mint, positions that he held longer than his Chair at Cambridge. The significance of the relationship between mathematics and finance is often overlooked when considering the development of science. Probability is, to both Poincare and Russell, the foundation of all science, emerged out of the analysis of financial contracts and Bernoulli first identified the number e in the context of interest payments. On a more profound level, historians such as Richard Hadden, Joel Kaye and Alfred Crosby have provided compelling arguments that the uniquely European 'mathematisation' of science came out of a synthesis of commercial practice, following Fibonacci, and scholastic analysis. Copernicus wrote on money before he wrote on planets. While equity options trading dominated the 1980s, today, the Black-Scholes-Merton pricing formula is used more as a gauge of market volatility than to price traded contracts and the problems of finance have moved on to managing the complex interactions of many agents in the economy. It is in recognition of this evolution that the financial world has changed, not just in the last four years but over the past 25 years, that the Institute of Mathematics and its Applications (the British equivalent of SIAM) is sponsoring its first conference on mathematics in finance, to take place in Edinburgh in 2013. Algorithmic trading is currently the focus of financial innovation. Investors, such as pension funds, will use algorithms implemented on electronic trading systems to , hopefully, optimise their market transactions. Market makers, and speculators, will use algorithms to search the markets for profit opportunities, often executing transactions in milliseconds in high frequency trading. Algorithmic trading is typically light on mathematics, using simple trend following or mean reverting criteria, and relies more on computational developments. Since the recent Financial Crises society has realised that financial innovation, like any technological development, is not always a good thing. The Quant and Mammon report of 1998 called for academics to support banks in innovation, today the emphasis has shifted and the consensus is that academics should be trying to understand the financial system and support society's eyes and ears, the Regulators, as much as the Banks. In response to this, the IMA have invited the Bank of England to help organise the conference and provide guidance on what the Regulators' key concerns are. The Bank of England believes that recent developments in financial mathematics have focused on microeconomic issues, such as pricing derivatives. Their concern is whether there is the mathematics to support macroeconomic risk analysis, how the whole system works. While probability theory has an important role to play in addressing these questions, other mathematical disciplines, not usually associated with finance, could prove useful. For example, the Bank's interest in complexity in networks and dynamical systems has been well documented. The initial outline of the conference is that it will have three parallel sessions, covering developments in algorithmic trading, the concerns of the Bank of England and contemporary issues in mainstream financial mathematics. For example, in the algorithmic trading stream topics could include data mining, pre-trade analysis, risk management and agent based modelling. As well as the Bank of England’s interest in models of market failure and systemic risk, more esoteric topics such as non-ergodic dynamical systems and models of learning in markets would be interesting. Topics associated with mainstream financial mathematics could include control in the presence of liquidity constraints, Knightian uncertainty and behavioural issues and credit modelling. In addition to the main mathematics Conference organised by the IMA, the Scottish Financial Risk Academy is planning to organise an "Industry Day" at the end of the Conference. Applied mathematics is developed as a consequence of solving problems. While it is easy to criticise the world's bankers, it is harder to come up with solutions to the complex issues they face. It is always worth remembering that the laws of physics (almost surely) do not change, but finance is constantly transforming itself. Ever since the time that Newton left Cambridge for the City, the UK has built its prosperity on financial innovation, funding the wars with France and the Industrial and Agricultural Revolutions. today financial services account for some 10% of the UK's GDP and it is only fitting that applied mathematicians consider whether they can provide solutions to the difficult problems the sector faces. The IMA Conference on Mathematics in Finance, scheduled for early April 2013, aims to provide a forum for mathematicians to become more involved in the industry and for industry to become more involved in mathematics, and we would invite any mathematician, academic or practitioner, to attend. If you would like to register your interest in attending the conference, please contact the IMA. 1 comment: 1. Keynes had a much broader view of probability and of its significance for market behaviours (e.g. crashes and sustained depressions) than most. At the same time, many in finance - including bankers - think that Bayesian theory is THE mathematical theory of uncertainty and have accused mathematicians of having much too narrow a view of these issues. The scene is set for an interesting debate.
{"url":"http://magic-maths-money.blogspot.com/2012/04/ima-conference-on-mathematics-in.html","timestamp":"2014-04-20T15:56:50Z","content_type":null,"content_length":"94942","record_id":"<urn:uuid:f2323f07-3b8f-4812-8c30-fe838ee69759>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Interpolation FP1 Formula Re: Linear Interpolation FP1 Formula I guess so but all I did was help her with maths then sit on her bed watching a movie. We did not eat. Re: Linear Interpolation FP1 Formula That is still a date. What has eating got to do with it? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula We never called it a date... Re: Linear Interpolation FP1 Formula Who cares what you call it. I would call it a date. Where you go or what you do on the date is not really important. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Would she have seen it as a date? I just saw a picture of the party, looks like there were no guys there. Re: Linear Interpolation FP1 Formula That is very difficult to say. Strange party but I guess different strokes for different folks. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula The theme was to dress as what you wanted to be when you were younger. adriana dressed herself as a princess, because I remember her saying that that was what she wanted to be. Re: Linear Interpolation FP1 Formula They all wanted to be princesses. Most of them get their wish and turn into real primadonne. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Looks like she has found her prince too. Re: Linear Interpolation FP1 Formula Back in the old days only sailors had a girl in every port. She probably has another one in Barcelona. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula It must be heaven to be a woman. Re: Linear Interpolation FP1 Formula That is something we will never know. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Thankfully they won't ever know it either. (The converse.) Re: Linear Interpolation FP1 Formula I would not like to live in a world like this one and not be a man. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Maybe women feel the same way. Re: Linear Interpolation FP1 Formula I meant I appreciate being physically strong enough to deal with the world. It must be hard to be walking around at night and not being able to defend yourself. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula adriana seems to think she is pretty strong. Re: Linear Interpolation FP1 Formula So did the one I told you about with the pink weights. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Adriana's ones were silver, although small-looking. Re: Linear Interpolation FP1 Formula I will remember that little clue. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula They could have been heavy, I did not lift them. They were just not big. Thin discs. Re: Linear Interpolation FP1 Formula I meant the color. Most girls are uncomfortable lifting weights, they think it is too masculine. To compensate they pick pink. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula Hmm. adriana did say that she wanted to be a boyish-looking girl. She doesn't wear very girly clothes, and she considers herself a tomboy. Although she prefers having a female body. Come to think of it I didn't really see anything pink in her room... Re: Linear Interpolation FP1 Formula That is the right way. Remember everything about the room and her appearance and clothes. Her mannerisms and even how she phrased sentences. Her hair length, jewelry, watch, nail polish, lipstick. Her style, was she neat, orderly? Next time you will have a much better foreknowledge on what to expect. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Linear Interpolation FP1 Formula I can answer those questions about adriana but I am not sure it matters anymore.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=256258","timestamp":"2014-04-21T15:10:09Z","content_type":null,"content_length":"34419","record_id":"<urn:uuid:9c675033-8e7e-4735-b79c-a490833b9cf0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Red Line Travel Times Travel time in minutes to Park Street assuming 3 mph walk as the crow flies and no waiting time About This Visualization The map above shows the transit time from points within approximately 1.5 miles from a Red line stop to park street station, assuming a 3mph walking pace (as the crow flies) and no waiting time at the station. That's pretty simplistic, and I hope to address that in a future full system version. The map zooms to view the data more closely. Future Work When time permits, I hope to create a more sophisticated model incorporating all the subway and bus lines with a cost model that incorporates wait times (likely half the headway for a given time of day). The result should be interesting to see any underserved communities in the MBTA system. The Model A transit system is a series of nodes (stations) connected by edges (rail/bus lines). The cost of transiting the edges between stations is simply the time it takes the train to get between stations. In order to model the time through the system from points in the area around a station, I added a regular grid of "test" nodes, each with an edge to any station within 1.5 miles. The cost of each of those edges is the time it takes to walk the length from the test node to the station at the given walking speed. Dijkstra's algorithm efficiently computes the lowest cost paths through the network to every node from a given start node. In this case, the cost of transit from any of the walking nodes to the start location is the time to ride the train to a close by station, then walk from that station to the walking test node. Just run the algorithm, then iterate through all the test nodes on the regular grid and ask for the cost to the designated start node and output the cost. Neo4j, being a graph database, was a natural fit for this problem, especially since they supply an implementation of Dijkstra's algorithm. With a handy JRuby wrapper, it was simple to build the model, and stunningly fast to get the point by point transit costs calculated. About the author By day, James Kebinger is a Senior Developer at PatientsLikeMe. The code The ruby code behind the visualization is available on github at http://github.com/jkebinger/transit-system-neo4j-example I couldn't have done this project without numerous open source projects Small, fast graph database. The graph-algo component brought Dijkstra's algorithmto the table, and away we went Packaged Neo4j and Neo4j's graph algorithm components in a gem for JRuby along with small amount of code to make things more Ruby-like This open source GIS package performed the task of loading the point by point travel time data and rasterizing it to create an image. Folks on the grass-user mailing list were super helpful in getting me pointed in the right direction. Reprojected and cut up the raster ouput of GRASS into the tiles shown for google maps. Also generated much of the Javascript to drive the google map integration on this page For releasing detailed schedule data in GTFS form
{"url":"http://www.monkeyatlarge.com/projects/redline-travel-time/","timestamp":"2014-04-16T15:59:41Z","content_type":null,"content_length":"10270","record_id":"<urn:uuid:26953743-cea8-4aeb-8c74-5cfdfb7b8a19>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
El Cajon Calculus Tutor Hello, My name is Sarmad, and I am a math tutor. I have a Bachelor degree in mathematics and teaching credentials from San Diego State University. Also, I have worked for a long period of time as a math tutor at various places. 8 Subjects: including calculus, statistics, geometry, algebra 2 ...I also love tutoring computer science and physics. I create a different lesson plan for each student based on their needs and what they struggle with, and then set them up for success. I understand that sometimes life gets hectic, so I only require 6 hours for cancellation. 37 Subjects: including calculus, geometry, algebra 1, statistics ...All it takes is getting to know the child well enough to be able to relate the topic to something of their interest, facilitating a conceptual understanding for the topic in question and then simply guiding them through the subject allowing them to come to their own means of looking at the proble... 10 Subjects: including calculus, chemistry, algebra 1, algebra 2 ...I have found that these are the most difficult to improve upon. Luckily, they are my specialty. I am an expert at seeing these kinds of problems from many different angles, and my students have benefited greatly. 26 Subjects: including calculus, Spanish, chemistry, physics ...Usually students find that once these are taken care of, they are free to soar. Oh the wonderfully weird trig! So mind boggling and tricky. 11 Subjects: including calculus, geometry, ASVAB, algebra 1
{"url":"http://www.purplemath.com/El_Cajon_Calculus_tutors.php","timestamp":"2014-04-17T13:49:18Z","content_type":null,"content_length":"23540","record_id":"<urn:uuid:0962ab03-8227-4217-a95c-ecc4c69005d9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
In a multiple regression equation there are A) Two or more dependent variable. B) Two or more independent variables. C) Two or more intercept values. D) Multiple coefficients of determination A dummy variable or indicator variable A) May assume only a value of 0 or 1. B) Is another term for the dependent variable. C) Is found by [ ]. D) Is equal to [ ] The multiple standard error of estimate is A) Found by taking the square root of SSR/SS total. B) Negative when one of the net regression coefficients is 0. C) Based on the term [ ]. D) The same as the variance of a normal distribution. In the ANOVA table, the value of k is the A) Sum of squares total. B) Total number of observations C) Number of degrees of freedom. D) Number of independent variables. A correlation matrix shows the following: A) All simple coefficients of correlation. B) All possible net regression coefficients. C) Correlations that are positive. D) The multiple regression equation. The following statement is true for a multiple regression equation: A) There is only one dependent variable. B) The R^2 term must be at least .50. C) All the regression coefficient must be between -1.00 and 1.00. D) There is only one (Y — Ÿ). Multicollinearity occurs when the following condition exists: A) The residuals are correlated. B) Time is involved in the analysis C) The independent variables are correlated. D) The residuals are not constant for all Y' values. For a multiple regression analysis, the global hypothesis test determines A) Which independent variables do not equal 0. B) Whether any of independent variables differ from 0. C) Whether any of the correlation coefficients differ from 0. D) Whether the intercept differs from 0. In testing the significance of individual regression coefficients A) The test statistic is the t distribution. B) We test the independent variables in pairs. C) We usually delete the variables where the null hypothesis is rejected. D) Regression coefficients with a negative coefficient are deleted from the equation. A residual A) Has the same degrees of freedom as the MSE term. B) Cannot assume a negative value. C) Is also called the correlation matrix. D) Is the difference between the actual and the predicted value of the dependent variable.
{"url":"http://highered.mcgraw-hill.com/sites/0073401765/student_view0/chapter14/quizzes.html","timestamp":"2014-04-18T15:39:34Z","content_type":null,"content_length":"95483","record_id":"<urn:uuid:1ce1d536-9369-4c24-b76b-3d65f221da61>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
One sided noise spectral density VS double sided noise spectral density A complex signal like [tex]\exp{i\omega_0 t}[/tex] has a one-sided spectrum, in this case [tex]\delta(\omega-\omega_0)[/tex]. Its real counterpart [tex]\cos{\omega_0 t}[/tex] has the two-sided The same power is spread in the real-signal case over positive and negative frequencies, each of which is half as large as the complex spectrum. When you talk of the power spectral density (PSD) of thermal noise, it again matters whether you are using a real or a complex representation. Audio engineers often use the former, for instance, communications engineers generally the latter. For white noise where the PSD is a constant, and using your notation where N is two-sided and N0 is one-sided, they are related by N0 = 2N.
{"url":"http://www.physicsforums.com/showthread.php?t=487726","timestamp":"2014-04-20T08:44:43Z","content_type":null,"content_length":"22800","record_id":"<urn:uuid:76e2e523-a305-49f1-abfd-251e6d4300fa>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Practical Hints for Reservoir Computing Reservoir computing (RC) has provided a new and robust way of training recurrent neural networks (RNNs) utilizing the standard linear regression for learning the output weights. Despite the simple training procedure, there are some pitfalls that should be avoided, and important parameters must be properly set to make the most of the RC modeling capabilities. The following are a few useful tips for a successful training of Echo State Networks (ESNs) [bib]Jaeger07sholarpedia[/bib], as a representative of the RC "flavors" that is probably most often used for applications. The first step before choosing any RC method for solving a particular problem is to determine whether RC is appropriate in the first place. RC, as well as any other RNN method, is inherently suited only to process temporal data. Therefore, RC should rather not be used for modeling static data, or temporal data where observations in every timestep are independent from each other. There are several important global control parameters that have a significant effect on ESN performance, and whose optimal settings highly depend on the given task (see [bib]Jaeger2001a[/bib] and [bib]Jaeger2002[/bib] for more): • The spectral radius of the reservoir weight matrix (i.e., the largest absolute eigenvalue) co-determines the timescale of the reservoir and the amount of nonlinear interaction of the input components through time. For the tasks that evolve on a slower time scale or/and have long range temporal interactions the spectral radius usually should be close to 1 or beyond. • The input scaling mostly determines the degree of nonlinearity in the model. With small input weights, the reservoir units will become only slightly excited around their resting states and behave almost like linear neurons; for tasks requiring a high level of nonlinearity, the input weight scaling must be increased. • Besides simple tanh units, reservoirs with leaky integrator neurons are often used. This gives another important parameter to optimize, the leaking rate, which allows to directly control the timescale of the reservoir. A detailed discussion of leaky integration (and related timing issues, like subsampling) can be found in [bib]verstraeten_thesis[/bib]. Even more powerful bandpass neurons that respond to particular frequencies can be used for tasks with structure on many timescales [bib]verstraeten_thesis[/bib] [bib]Siewert2007[/bib]. Optimizing these parameters is in practice done either by manual experimentation or automated grid search. In either case, cross-validation runs are used to assess the quality of the current parameter settings. The currently most extensive available RC toolbox, the Oger toolbox, supports grid search and a number of other state-of-the art optimization routines. Finally, as in all machine learning methods, it is important to optimize the model capacity (again, guided by cross-validation). In reservoir computing this can be done in two ways: either by changing the size of the reservoir or by regularization. Regularization can be done by inserting noise during training (as documented, e.g., in [bib]Jaeger2004[/bib]) or by ridge regression [bib] verstraeten_thesis[/bib] [bib]wyffels2008[/bib]. Note that the latter is a computationally cheap optimization, because only the readout weights have to be recomputed for every regularization parameter value, without re-running the ESN on the training data. In pattern generation tasks, it has been reported that noise insertion has advantages over ridge regression with respect to dynamical stability of the trained system [bib]Jaeger2007[/bib]. A general rule of thumb appears to be that it never (or very rarely) harms to use a large reservoir, not caring about optimizing its size, and rely solely on regularization for optimization.
{"url":"http://organic.elis.ugent.be/practical_hints","timestamp":"2014-04-19T23:01:03Z","content_type":null,"content_length":"17892","record_id":"<urn:uuid:3f318cc1-6b8e-4e01-bbd5-f6d9c841a6b7>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from September 2008 on XOR's Hammer Playing Games in the Transfinite: An Introduction to “Ordinal Chomp” Chomp is a two-player game which is played as follows: The two players, A and B, start with a “board” which is a chocolate bar divided into $n \times m$ small squares. With Player A starting, they take turns choosing a square and eating it together with all squares above and to the right. The catch is that the square at the lower left-hand corner is poisonous, and the player who is forced to eat it loses. This image from the Wikipedia article shows a typical sequence of moves for a $5\times 3$ chocolate bar: At this point, Player A is forced to eat the poisoned square and hence loses the game. Although the question of what the winning strategies are for this game is very much an open problem, the question of who has a winning strategy is not: On the $1\times 1$ board, Player B wins (since Player A must eat the poison piece on his first move). But for any other board, Player A has a winning strategy. To see why, suppose not. Then if Player A’s first move is to eat just the one square in the top right-hand corner, Player B must have a winning response (since we are supposing that Player B has a winning response to any move that Player A makes). But if Player B’s response is winning, then Player A could have simply made that move to start with. However, suppose we play Chomp not just on $n\times m$ boards, but on $\alpha\times\beta$ boards, where $\alpha$ and $\beta$ are arbitrary ordinals. The game still makes sense just as before, and will always end in finite time, but Player A no longer wins all of the time (there will no longer be a top right-hand corner square if either $\alpha$ or $\beta$ is a limit ordinal). Scott Huddleston and Jerry Shurman investigated Ordinal Chomp in this paper, and showed that it has a number of interesting properties. I’ll describe a few of them below. Filed under Uncategorized Avoiding Set-Theoretic Paradoxes using Symmetry Intuitively, for any property $P(x)$ of sets, there should be a set $\{x \mid P(x)\}$ which has as its members all and only those sets $x$ such that $P(x)$ holds. But this can’t actually work, due to Russell’s Paradox: Let $b = \{x\mid xotin x\}$, and then you can derive a contradiction from both $b\in b$ and $botin b$. The standard solution to this is essentially to forbid the construction of any set which is too big. This solves the problem since you can prove that there are many sets which are not members of themselves, making $b$ too big to be a set. But you also end up throwing out many sets which you might want to have: for example, the set of all sets, the set of all groups, etc. Randall Holmes recently published a paper espousing another solution: instead of forbidding the construction of sets which are too big, forbid the construction of sets which are too asymmetric. Details below. Filed under Uncategorized The Undecidability of Identities Involving Sine, Exponentiation, and Absolute Value In the book A=B, the authors point out that while the identity $\displaystyle{\sin^2(|10 + \pi x|) + \cos^2(|10 + \pi x|) = 1}$ is provable (by a very simple proof!), it’s not possible to prove the truth or falsity of all such identities. This is because Daniel Richardson proved the following: Let $\mathcal{R}$ denote the class of expressions generated by 1. The rational numbers, and $\pi$. 2. The variable $x$ 3. The operations of addition, multiplication, and composition. 4. The sine, exponential, and absolute value functions. Then the problem of deciding whether or not an expression $E$ in $\mathcal{R}$ is identically zero is undecidable. This means as well that the problem of deciding whether or not two expressions $E_1, E_2\in \mathcal{R}$ are always equal is also undecidable, since this is equivalent to deciding if $E_1 - E_2 = E_1 + (-1)\cdot E_2$ is identically zero. A summary of Richardson’s proof (mostly from Richardson’s paper itself) is below. Filed under Uncategorized A Geometrically Natural Uncomputable Function There are many functions from $\mathbb{N}$ to $\mathbb{N}$ that cannot be computed by any algorithm or computer program. For example, a famous one is the halting problem, defined by $f(n) = 0$ if the $n$th Turing machine halts and $f(n) = 1$ if the $n$th Turing machine does not halt. Another one in the same spirit is the busy beaver function. We also know a priori that there must be uncomputable functions, since there are $2^{\aleph_0}$ functions from $\mathbb{N}$ to $\mathbb{N}$ but only $\aleph_0$ computer programs. But that is nonconstructive, and the two examples I gave above seem a bit like they’re cheating since their definitions refer to the concept of computability. Is there a natural example of an uncomputable function that does not refer to computability? In this paper, Alex Nabutovsky found what I think is a great example of such a function from geometry. Details below. Filed under Uncategorized Integrability Conditions (Guest Post!) Please enjoy the following guest post on differential geometry by Tim Goldberg. A symplectic structure on a manifold $M$ is a differential $2$-form $\omega$ satisfying two conditions: 1. $\omega$ is non-degenerate, i.e. for each $p \in M$ and tangent vector $\vec{u}$ based at $p$, if $\omega_p(\vec{u},\vec{v}) = 0$ for all tangent vectors $\vec{v}$ based at $p$, then $\vec{u}$ is the zero vector; 2. $\omega$ is closed, i.e. the exterior derivative of $\omega$ is zero, i.e. $\mathrm{d} \omega = 0$. In trying to come up with answers to questions like “what do you do?” and “what is symplectic geometry?” that would be accessible to an advanced undergraduate or beginning graduate student, I’ve tried to come up with fairly intuitive descriptions of what the two symplectic structure conditions really mean. Non-degeneracy is pretty easy, because my intended audience is certainly familiar with the dot product in Euclidean space, and probably familiar with more general machinery like inner products and bilinear forms. A bilinear form on a vector space over a field $\mathbb{F}$ is just an assignment of a number in $\mathbb{F}$ to each pair of vectors, in such a way that the assignment is linear in each vector in the pair. A bilinear form is called non-degenerate if the only thing that pairs to zero with every single vector is the zero vector. A $2$-form on $M$ is a collection $\omega = \{ \ omega_p \mid p \in M \}$ of skew-symmetric bilinear forms, one for each tangent space of $M$. Saying that $\omega$ is non-degenerate is saying that each of these bilinear forms is non-degenerate. It’s much less clear how to describe to the uninitiated what the closed condition means. It’s even a bit unclear why this condition is required in the first place. A pretty nice answer came up yesterday, in a reading group I attend that is trying to learn about generalized complex structure. We are going through the PhD thesis of Marco Gualtieri, titled “Generalized Complex Geometry”. It is available at the following websites: This was the first meeting, and Tomoo Matsumura was the speaker. He suggested that the requirement that $\mathrm{d} \omega = 0$ is an integrability condition. I had never thought of it this way, but I probably will from now on. Continue reading Filed under Uncategorized
{"url":"http://xorshammer.com/2008/09/","timestamp":"2014-04-17T18:23:32Z","content_type":null,"content_length":"48825","record_id":"<urn:uuid:b81453b7-5b74-431e-ad51-302c82c04cfd>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Hitchcock, TX Math Tutor Find a Hitchcock, TX Math Tutor ...Being a younger tutor, I certainly can relate to the kids and teenagers better. I am quite nice and very respectful. I already work for two tutoring companies now. 8 Subjects: including algebra 1, algebra 2, chemistry, geometry ...I also have experience tutoring for the Texas State Test (TAKS & STAAR) resulting in 'recognized' or 'advanced' in the Reading and Science and outstanding achievements in Mathematics. I have a very encouraging & fun teaching style & believe anything can be easily understood if it is presented in... 12 Subjects: including prealgebra, English, geometry, algebra 1 ...In addition I taught at the following universities: San Antonio College, University of Maryland, University of Colorado, Auburn University at Montgomery, AL. In addition, I tutored the Air Force Academy football team in calculus. I have also tutored high school students in various locations. 11 Subjects: including trigonometry, statistics, algebra 1, algebra 2 ...I am by all parts of the definition an extrovert. I have worked as a controller but did start out as a billing clerk at the beginning of my career. My first degree of choice was actually not accounting but a very wise professor saw that accounting came very easy to me and suggested that I change my major. 9 Subjects: including algebra 1, business, algebra 2, accounting ...So that’s how it works!” moments as algebra becomes more familiar and understandable. Algebra 2 builds on the foundation of algebra 1, especially in the ongoing application of the basic concepts of variables, solving equations, and manipulations such as factoring. My approach in working with yo... 20 Subjects: including calculus, logic, algebra 1, algebra 2 Related Hitchcock, TX Tutors Hitchcock, TX Accounting Tutors Hitchcock, TX ACT Tutors Hitchcock, TX Algebra Tutors Hitchcock, TX Algebra 2 Tutors Hitchcock, TX Calculus Tutors Hitchcock, TX Geometry Tutors Hitchcock, TX Math Tutors Hitchcock, TX Prealgebra Tutors Hitchcock, TX Precalculus Tutors Hitchcock, TX SAT Tutors Hitchcock, TX SAT Math Tutors Hitchcock, TX Science Tutors Hitchcock, TX Statistics Tutors Hitchcock, TX Trigonometry Tutors Nearby Cities With Math Tutor Alvin, TX Math Tutors Bacliff Math Tutors Bayou Vista, TX Math Tutors Dickinson, TX Math Tutors Fresno, TX Math Tutors Galena Park Math Tutors La Marque Math Tutors Manvel, TX Math Tutors Piney Point Village, TX Math Tutors Santa Fe, TX Math Tutors Seabrook, TX Math Tutors Texas City Math Tutors Tiki Island, TX Math Tutors Webster, TX Math Tutors West University Place, TX Math Tutors
{"url":"http://www.purplemath.com/Hitchcock_TX_Math_tutors.php","timestamp":"2014-04-19T07:14:58Z","content_type":null,"content_length":"23859","record_id":"<urn:uuid:3f5c0ba4-c012-49ac-bffe-c2520bca640c>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Use a few observations from a tab-delimited or csv file [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: Use a few observations from a tab-delimited or csv file From "Todd D. Kendall" <todddavidkendall@gmail.com> To statalist@hsphsun2.harvard.edu Subject st: Use a few observations from a tab-delimited or csv file Date Wed, 20 Aug 2008 09:40:33 -0500 Dear Statlisters, I have a file that is currently in csv format (or I could easily convert it to tab-delimited). It is fairly large: roughly 80,000 observations and 2,200 variables. In fact, it is too large to fit into Stata (I am running Stata 9.2 on a Windows XP machine with 1 GB of RAM). The maximum memory I can allocate to Stata is -set mem 636m-. When I try to simply insheet the file at this setting, I get only 16,276 observations read in -- not anywhere close to the whole file, so I don't think there are any easy tweaks to make this work. However, it turns out that, for roughly the last 2,000 variables, I really don't need every single variable; instead, I just need a few summary statistics calculated over these 2,000 variables (e.g., the mean or standard deviation). My idea is to write a simple do file that loads in, say, the first 15,000 observations, computes the mean and standard deviation of the 2,000 variables, then drops these variabes and saves as a .dta file. I would then repeat on the next 15,000 observations, and so on. Then I could just append all the little files together, and I would assume I could fit this into Stata, as it would only have around 200 variables instead of 2,200. My problem is that insheet doesn't work with "in" -- i.e., I can't write -insheet filename.csv in 1/15000-. Alternatively, if I could convert the file from csv into a fixed format, I could write a dictionary and use infix, but my Google search for how to convert a csv file into a fixed-column file has come up pretty dry. Am I barking up the wrong tree completely here, or am I missing something obvious? I greatly appreciate any suggestions. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-08/msg00820.html","timestamp":"2014-04-19T09:40:30Z","content_type":null,"content_length":"7597","record_id":"<urn:uuid:4690e769-4d87-411d-a4c0-4467782500e4>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00532-ip-10-147-4-33.ec2.internal.warc.gz"}