A Strategic market game approach for the private provision of public goods
Faias, Marta and Moreno, Emma and Wooders, Myrna (2009): A Strategic market game approach for the private provision of public goods.
Bergstrom, Blume and Varian (1986) provides an elegant game-theoretic model of an economy with one private good and one public good. Strategies of players consist of voluntary contributions of the
private good to public good production. Without relying on first order conditions, the authors demonstrate existence of Nash equilibrium and an extension of Warr's neutrality result -- any
redistribution of endowment that left the set of contributors unchanged would induce a new equilibrium with the same total public good provision. The assumption of one private good greatly facilitates
the results. We provide analogues of the Bergstrom, Blume and Varian results in a model allowing multiple private and public goods. In addition, we relate the strategic market game equilibrium to the
private-provision equilibrium of Villanacci and Zenginobuz (2005), which provides a counterpart to the Walrasian equilibrium for a public goods economy. Our techniques follow those of Dubey and
Geanakoplos (2003), which itself grows out of the seminal work of Shapley and Shubik (1977). Our approach also incorporates, into the strategic market game literature, economies with production, not
previously treated and, as a by-product, establishes a new existence of private-provision equilibrium.
Item Type: MPRA Paper
Original Title: A Strategic market game approach for the private provision of public goods
English Title: A Strategic Market Game Approach for the Private Provision of Public Goods
Language: English
Keywords: Public goods, market games, equilibrium, Nash equilibrium, private provision, voluntary contributions
Subjects: H - Public Economics > H4 - Publicly Provided Goods > H41 - Public Goods
D - Microeconomics > D0 - General > D01 - Microeconomic Behavior: Underlying Principles
D - Microeconomics > D0 - General > D02 - Institutions: Design, Formation, and Operations
Item ID: 37777
Depositing User: Myrna Wooders
Date Deposited: 31 Mar 2012 22:32
Last Modified: 19 Feb 2013 13:35
Allouch, N. (2012) On the private provision of public goods on networks, typescript.
Amir, R., Bloch, F. (2009): Comparative statics in a simple class of strategic market games. Games and Economic Behavior, 65, 7--24.
Amir, R. Sahi, S., Shubik, M, Yao, S. (1990): A strategic market game with complete markets. Journal of Economic Theory, 51, 126--143.
Andreoni, J. (1988): Privately provided public goods in a large economy: The limits of altruism. Journal of Public Economics, 35, 57--73.
Andreoni, J. (1990): Impure altruism and donations to public goods: A theory of warm-glow giving, The Economic Journal,100, 464-477.
Bergstrom, T., Blume, L., Varian, H. (1986): On the private provision of public goods. Journal of Public Economics, 29, 25--49.
Cornes, R., Itaya, J. (2010): On the private provision of two or more public goods. Journal of Public Economic Theory, 12(2), 363--385.
Dubey, P., Geanakoplos, J. (2003): From Nash to Walras via Shapley-Shubik. Journal of Mathematical Economics, 39, 391-400.
Florenzano, M. (2009): Walras-Lindahl-Wicksell: What equilibrium concept for public goods provision? I - The convex case. CES Working Papers 2009.09. Documents de Travail du Centre
d'Economie de la Sorbonne.
Giraud, G. (2003): Strategic market games: an introduction. Journal of Mathematical Economics 39, 355--375
Foley, D. (1970): Lindahl's solution and the core of an economy with public goods. Econometrica, 38, 66-72.
Itaya, J., de Meza, D., Myles, G. (2002): Income distribution, taxation, and the private provision of public goods. Journal of Public Economic Theory, 4(3), 273--97.
Kemp, M. C. (1984): A note on the theory of international transfers. Economics Letters, 14, 259--262.
Koutsougeras, L C. (2003a): Non-Walrasian equilibria and the Law of One Price. Journal of Economic Theory, 108(1), 169--175.
Koutsougeras, L. C. (2003b): Convergence to no-arbitrage equilibria in market games. Journal of Mathematical Economics, 39, 401--420.
Muench, T.J. (1972): The Core and the Lindahl Equilibrium of an economy with a public good: An example. Journal of Economic Theory, 4, 241--255.
Peck, K., Shell, J., Spear, S.E. (1992): The Market Game: Existence and structure of equilibrium. Journal of Mathematical Economics, 21, 271--299.
Samuelson, P.A., (1954): The pure theory of public expenditure. The Review of Economics and Statistics, 36, 387--389.
Shapley, L.S., Shubik, M., (1977): Trade using one commodity as a means of payment. Journal of Political Economy, 85, 937--968.
Silvestre, J. (2012) All but one free ride when wealth effects are small. SERIEs, 3, 201--207.
Villanacci, A., Zenginobuz, E.U. (2005): Existence and regularity of equilibria in a general equilibrium model with private provision of a public good. Journal of Mathematical Economics, 41, 617--636.
Villanacci, A., Zenginobuz, Ü. (2007): On the neutrality of redistribution in a general equilibrium model with public goods. Journal of Public Economic Theory, 9(2), 183--200.
Villanacci, A., Zenginobuz, Ü. (2012): Subscription equilibrium with production: Non-neutrality and constrained suboptimality. Journal of Economic Theory, 147, 407--425.
Warr, P.G. (1983): The private provision of a public good is independent of the distribution of income. Economics Letters, 13, 207--211.
URI: http://mpra.ub.uni-muenchen.de/id/eprint/37777
|
{"url":"http://mpra.ub.uni-muenchen.de/37777/","timestamp":"2014-04-16T11:09:59Z","content_type":null,"content_length":"26847","record_id":"<urn:uuid:ad2bfbbe-7d9c-410e-90a2-816335c6cb3c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00470-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spline Interpolation Example
Suggested Web Resources
Example: Figure 2 shows interpolation with cubic "natural" splines between three points.
Mar 11, 2009: Quadratic Spline Interpolation: Example: Part 1 (video by numericalmethodsguy).
Apr 20, 2009: Learn the quadratic spline interpolation method via an example.
Learn linear spline interpolation via an example; see also Quadratic Spline Interpolation: Example: Part 1.
We use a relaxed cubic spline to interpolate the six points. This means that between each two points, there is a piecewise cubic curve.
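To make this concrete, here is a minimal sketch (not taken from the resources above) using SciPy's CubicSpline with natural boundary conditions; the six data points are invented purely for illustration:

import numpy as np
from scipy.interpolate import CubicSpline

# Six sample points, made up for the example.
x = np.array([0.0, 1.0, 2.5, 4.0, 5.5, 7.0])
y = np.array([1.0, 2.7, 1.5, 3.2, 2.0, 4.1])

# bc_type='natural' sets the second derivative to zero at both end points,
# which corresponds to the "natural"/relaxed cubic spline described above.
spline = CubicSpline(x, y, bc_type='natural')

# Between each pair of data points the interpolant is a single cubic piece.
xs = np.linspace(x[0], x[-1], 50)
print(spline(xs)[:5])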
|
{"url":"http://www.realmagick.com/spline-interpolation-example/","timestamp":"2014-04-18T05:37:41Z","content_type":null,"content_length":"30027","record_id":"<urn:uuid:154fab6e-8ad7-454e-b061-200891c64ba9>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] numpy function to compute sample ranks
Abhimanyu Lad abhimanyulad@gmail....
Sun Nov 2 12:24:37 CST 2008
Is there a direct or indirect way in numpy to compute the sample ranks of a
given array, i.e. the equivalent of rank() in R.
I am looking for:
rank(array([6,8,4,1,9])) -> array([2,3,1,0,4])
Is there some clever use of argsort() that I am missing?
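One commonly suggested answer (shown here as a sketch, not as the reply that was actually posted) is to apply argsort twice; scipy.stats.rankdata is an alternative, though it returns 1-based ranks and averages ties:

import numpy as np

a = np.array([6, 8, 4, 1, 9])

# argsort gives the indices that would sort the array; applying argsort to
# that result gives each element's 0-based rank in sorted order.
ranks = a.argsort().argsort()
print(ranks)  # [2 3 1 0 4], matching the example above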
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-November/038395.html","timestamp":"2014-04-17T14:23:44Z","content_type":null,"content_length":"2844","record_id":"<urn:uuid:1c4ac0b5-4313-4bce-bea6-dd131c975c06>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00609-ip-10-147-4-33.ec2.internal.warc.gz"}
|
if cos(t)=-1/5 and pi<t<3pi/2. find csc(t)
|dw:1362772905106:dw|Understand why I drew that triangle in the third quadrant?
not quite sure why?
|dw:1362773055192:dw| They told us that the angle t is within this interval.\(\large \pi<t<3\pi/2\). So we draw a line in the third quadrant and form a triangle from it.
ok now i understand that part
|dw:1362773145892:dw|Let's look at just the triangle now. Don't worry about the fact that I labeled the angle \(\large t'\). That's not super important. Just think of it as \(\large t\).
\(\large \cos t=-\dfrac{1}{5}=\dfrac{adjacent}{hypotenuse}\) So how would we label these sides? HINT: We always let the hypotenuse be positive.
Ok looks good! Just need to find the missing side using the `Pythagorean Theorem` and then we just have one small step after that.
opposite will equal 4?
sorry did an error......missing side equals to \[+/- 2\sqrt{6}\]
Ok looks good. Now remember that we drew our triangle in the `third` quadrant. So for that OPPOSITE leg, did we move UP or DOWN? Do we want the positive or negative answer?
we want a negative answer because tan is only positive in the 3rd Q
Haha that's an interesting way to think about it XD I was thinking about the fact that we moved downward in the y direction to make the opposite length. So it would be negative. But whatever
works for you! :D|dw:1362773803586:dw|
Ok good. Everything is labeled correctly. Just have to find \(\large \csc t\) now! :)
hypotenuse over opposite
yay good job! \c:/
For your final answer it might be a good idea to `rationalize` your answer. Get the irrational number out of the denominator. Multiply the top and bottom by sqrt6.
a little confused on the denominator part....not sure if its 12 or not
\[\large 2 \cdot \sqrt6 \sqrt6 \qquad = \qquad 2\cdot6 \qquad = \qquad 12\]Hmm yah everything looks ok there!
ok thanks!!!
thanks for the help, sometimes an 8am class only absorbs so much in an hour and with 300 classmates lol
heh ill bet! XD
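Summarizing the thread (several of the intermediate answers were posted as equations that did not survive extraction): with \(\large \cos t=-\dfrac{1}{5}\) and \(\large \pi<t<3\pi/2\), the reference triangle has adjacent \(-1\), hypotenuse \(5\), and opposite \(-\sqrt{5^2-1^2}=-2\sqrt{6}\) (negative because the triangle lies in the third quadrant), so \[\large \csc t=\frac{\text{hypotenuse}}{\text{opposite}}=\frac{5}{-2\sqrt6}=-\frac{5}{2\sqrt6}=-\frac{5\sqrt6}{12}.\]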
{"url":"http://openstudy.com/updates/513a4302e4b01c4790d1a2c3","timestamp":"2014-04-16T22:36:59Z","content_type":null,"content_length":"109975","record_id":"<urn:uuid:f4d46d4a-6339-4762-8e96-82aa97c35c42>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help me with cross product question!
ok what am i supposed to do after writing the equations down (sorry i dont mean to be sarcastic, but i'm not getting a reasonable reply here!)
You did get a reasonable reply. You seemed to be having problems finding the right equations (and you were saying some strange things about cross products), so I tried to help you with that. I
thought that was what you needed to get started. I will not solve this problem completely for you, because that's not what we do here, but I will tell you a few more things.
There are four unknowns, but there are also four equations, so it should be possible to solve the system completely for a, b, x and y. (If there had been only three equations we would have had to
settle for a way to express three of the unknowns as functions of the fourth). It can be difficult to figure out how to solve a non-linear system of equations such as this one, but if you play around
with the equations for a while, you may find a trick that simplifies things. One such trick is to use all of the equations to express (x+a)²+(y+b)² in two different ways. The result is an equation that
will tell you the norms of the vectors.
When you have the norms, you can proceed e.g. like this: 0=xa+yb=a(6-a)+b(8-b)=... If you use the result about what a²+b² is, this equation simplifies to a relationship between a and b that tells you
the "direction" of the vector (a,b). A similar calculation that starts with the same equation will tell you the "direction" of (x,y).
|
{"url":"http://www.physicsforums.com/showthread.php?t=50341","timestamp":"2014-04-16T07:50:34Z","content_type":null,"content_length":"41472","record_id":"<urn:uuid:dd9d7bc5-4c7b-468f-92e5-7a15d6920619>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Contents: Introduction; The IRED-Camera Set; Model definition; Differential Model; Practical Implementation; Results; Conclusions; References; Figures and Tables.
Sensors (ISSN 1424-8220), Molecular Diversity Preservation International (MDPI). doi:10.3390/s91108896 (sensors-09-08896).
Article: Sensor for Distance Measurement Using Pixel Grey-Level Information.
Lázaro, José L. (1,*); Cano, Angel E. (2); Fernández, Pedro R. (1); Pompa, Yamilet (2).
(1) Electronics Department, University of Alcalá, Polytechnic School, University Campus, Alcalá de Henares, Madrid 28871, Spain. (2) Telecommunications Department, Oriente University, Av. de las Américas, SN, Santiago de Cuba 90900, Cuba; E-Mail: angel.cano@depeca.uah.es.
* Author to whom correspondence should be addressed; E-Mail: lazaro@depeca.uah.es; Tel.: +34-91-885-6551; Fax: +34-91-885-6591.
Sensors 2009, 9(11), 8896-8906; received 30 September 2009; revised 29 October 2009; accepted 4 November 2009; published 6 November 2009. © 2009 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
An alternative method for distance measurement is presented, based on a radiometric approach to the image formation process. The proposed methodology uses images from an infrared emitting diode
(IRED) to estimate the distance between the camera and the IRED. Camera output grey-level intensities are a function of the accumulated image irradiance, which is also related by inverse distance
square law to the distance between the camera and the IRED. Analyzing camera-IRED distance, magnitudes that affected image grey-level intensities, and therefore accumulated image irradiance, were
integrated into a differential model which was calibrated and used for distance estimation over a 200 to 600 cm range. In a preliminary model, the camera and the emitter were aligned.
Keywords: distance measurement; radiometry; cameras; calibration; infrared measurements; infrared image sensors.
Distance estimation using vision sensors is an important aspect of robotics since many robot positioning algorithms use distance to calculate the robot's position as the basis for more complicated tasks.
Traditionally, distance measuring in robotics has been conducted by sonar (US) and infrared (IR) sensing. Several methods based on the line-of-sight (LOS) and echo/reflection models have also been
used. The LOS model places the emitter and detector in different locations, and signals travel from emitter to detector. The Reflection model links emitter and detector physically (in the same place)
and signals are reflected off an object or wall, following a round-trip-path.
In the Reflection model, the viability of IR as an accurate means of measuring distance depends on extensive prior knowledge of the surface (scattering, reflection, and absorption). [1] details a
method for determining surface properties, and subsequently calculating the distance to the surface and the relative orientation of the surface in an unknown environment using previously acquired
sensory data; the developed sensor provides accurate range measurements when used in conjunction with other sensing modalities. In [2], low-cost infrared emitters and detectors are used for the
recognition of surfaces with different properties in a location-invariant manner. In order to resolve the dependency of intensity readings on the location and properties of the surface, the use of
angular intensity scans and an algorithm to process them was proposed. In [3], an IR sensor based on the light intensity back-scattered from objects and capable of measuring distances, and the sensor
model, are described. In all cases [1-3], sensors for short distances are evaluated.
Vision devices based on a geometrical model have been used for many positioning tasks. However, most vision positioning algorithms are based on geometrical imaging models, where 3D to 2D projection
constitutes the main mathematical tool for analysis [1-6]. With vision devices, the LOS model for signal transmission and distance measurement is used.
With geometrical models, a single camera and interest point can only estimate a 2D position, as projection based models cannot provide depth information. Nevertheless, depth can be calculated if
additional information is included in the model. This entails the use of two vision sensors or some kind of active device [1,2,4,5,7-13].
To date, artificial vision has been one of most widely used positioning techniques since it gives accurate results. However, geometric modeling is more normally used in order to obtain distance to
objects [14].
In intelligent spaces, smart living, etc., where a certain number of cameras are already installed, a simple method based on grey levels can be developed to determine the depth. The cameras already
installed in the environment are used for performing other tasks necessary in smart living and intelligent spaces. For example, if a mobile robot carries an IRED, depth can be estimated from the
cameras' pixel values, because there is a relationship between pixel grey-level intensity and the quantity of light that falls on the image sensor. Also, received light is related by inverse distance
square law to the distance between the camera and the IRED.
When an image is captured by a digital camera, it provides a relative measure of the distribution of light within the scene. Thus, pixel grey-level intensities are a function of sensor surface
irradiance accumulated during the exposition time.
The function that relates accumulated image irradiance to output grey-level intensities is known as the Radiometrical Camera Response Function (RCRF) [15-18].
In [16], the properties shared by all camera responses were analyzed. This enabled the constraints that any response function must satisfy, and the theoretical space of all possible camera responses,
to be determined. Using databases, the authors concluded that real-world responses occupy a small part of the theoretical space of all possible responses. In addition, they developed a low-parameter
empirical model of response.
In most cases, an inverse-RCRF is required in order to obtain a direct relationship between pixel grey-level intensities and accumulated image irradiance by a Radiometric Camera Calibration Process
(RCCP) [16,17].
Furthermore, if the camera takes an image of an IRED and this can be isolated, a point source model for the IRED can be used to estimate the irradiance on the surface of the camera lens using inverse
distance square law [19]. The camera lens captures irradiance distribution on the surface of the sensor and also accumulates sensor irradiance during the exposition time. Finally, the pixel
grey-level intensity can be related to the accumulated image irradiance by a RCCP, lens irradiance can be related to the image's sensor irradiance by lens modeling, and lens irradiance can be related
to the distance between the camera and the IRED by inverse distance square law. Thus, a relationship with pixel grey-level intensity can be defined which includes the distance between the camera and
the IRED.
Previous papers have been presented in the field of IR, in which the authors developed computer programs and methods. In [20], a reducing-difference method for non-polarized IR spectra was described,
where the IR signal of a five component mixture was reduced stepwise by subtracting the IR signal of other components until total elimination of the non-desired signals was achieved.
The aim of the present study was to use a geometrical model for 2-D positioning, which would provide coordinates on a plane parallel to a mobile robot (for example), and then to use a radiometrical
model in order to obtain the distance between the mobile robot and the plane. For our purposes, the LOS model was used in order to determine the distance between emitter (IR) and detector (CMOS
The method using a single IR point to develop an effective and practical method is based on the assumption that the final application will be carried out in settings such as intelligent spaces, smart
living spaces, etc., where a certain number of cameras are already installed. The cameras already installed in the environment are used for performing other tasks necessary in these smart living and
intelligent spaces.
One possibility would be to use a photodiode. However, these devices provide an instantaneous response to the signals received, and given the distances involved in these applications (several
meters), and working with an LOS link (the best alternative available), these can be less than pW. The low intensity of the received signal can, therefore, impede accurate, or even valid, measuring.
Since distance estimation and robot position determination can involve measurements on a ms. timescale, considered as real time, the signal must be integrated into a determined time interval. The use
of a photodiode would imply the need to design signal conditioning, integration and digital circuits, etc. All of these, in addition to a webcam, are already available in the proposed method;
consequently, the design is simpler, implementation is quicker and final costs are lower, as a variety of existing models can be selected.
A further reason for using a camera is that by applying a differential method for measuring, and given that by using cameras the method can be selected digitally from a computer, automation and
control, speed and safety of data acquisition is facilitated since two consecutive measurements are taken with different integration times.
Later, when the method for distance estimation using mathematical algorithms has been fully developed, it will be possible to combine this data with data obtained from the geometric calibration of
cameras in order to improve generation of the variables and parameters involved in position and location of the device incorporating the IRED.
In order to define a model for the IRED-Camera set, the following aspects were established:
Estimation of accumulated image irradiance. When inverse RCRF is used, a measure of the accumulated image irradiance can be obtained. A sample of energy accumulated in the camera is shown in Figure 1.
The relationship between accumulated image irradiance and lens irradiance. This can be obtained using the camera's optical system model and also includes the camera exposition time. A linear
behaviour is assumed for the camera's optical system model.
Behaviour of lens irradiance with the distance between the camera and the emitter. For the IRED Camera set a point source model can be used; in this case, lens irradiance can be estimated using
inverse distance square law.
Image irradiance must be due to IRED light energy. Therefore, background illumination must be suppressed. An ideal implementation would be to test the algorithm in a dark room; however we used an
interference filter to select a narrow wavelength band centered on the typical emitter wavelength in order to ensure that images were created only by IRED energy.
From a general point of view, accumulated image irradiance is a function of camera parameters and radiometric magnitudes which quantitatively affect the image formation process.
The fact that the emitter transmits up to 120°, or at a different angle, only influences the angle at which the detector can be situated in order to receive the emitter's signal, and does not affect
the size of the image formed.
As regards image size, according to the laws of optical magnification this is only influenced by the size of the object and the distance at which it is located. In our case, as the diode size is both
constant and very small, the image appears reduced. Nevertheless, as can be seen in Figure 1, the point of light image increases in size as the distance of image acquisition decreases.
Reference [21] was taken as the starting point, and the inverse RCRF “g” was estimated using the method proposed by Grossberg and Nayar [16]. However, a new practical measure for accumulated image
irradiance E[r] was defined thus:
E_r = \frac{1}{A} \sum_{i=1}^{A} g(M_i)    (1)
where M[i] is the normalized grey-level intensity for the pixel i, 1 ≤ i ≤ A, and where A is the total number of pixels in an image's region-of-interest, containing the spot produced by the IRED. In practice, a ROI of 100 pixels × 100 pixels was selected. Since the camera and the IRED were aligned, the same ROI was selected for all
the images used.
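As a rough sketch of how Equation (1) might be computed in practice (illustrative only: the lookup table g_inv for the inverse response function and the ROI slices are assumptions, not taken from the paper):

import numpy as np

def accumulated_irradiance(image, g_inv, roi):
    # image : 2-D array of camera grey levels (e.g., 8-bit values 0..255)
    # g_inv : 1-D lookup table mapping a grey level to g(M), the inverse
    #         radiometric response, assumed precomputed by an RCCP
    # roi   : tuple of slices selecting the 100x100 window with the IRED spot
    patch = image[roi].ravel()              # the A pixels of the region-of-interest
    return g_inv[patch.astype(int)].mean()  # (1/A) * sum_i g(M_i), i.e. E_r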
To define the differential model, the magnitudes and relationships affecting E[r] which were defined in [21], were also used here.
A differential method was selected because a measurement taken with a specific exposition time will include various errors due to camera irradiance, external illumination factors (spotlights, the
sun, etc.), or the effect of temperature on the sensor, for example. The differential method enabled us to isolate the measurement from these effects.
Moreover, both the sensor and method are economic, since cameras are already installed in the application environment. Furthermore, the method is simple to launch and installation of the system is
easy. The system is non-invasive and safe to operate, and the sensorial system is complementary to other methods, facilitating ease of data fusion
Assuming that the camera and the emitter are aligned initially, there are three magnitudes that affect the accumulated image irradiance for the camera-IRED set: the camera exposition time, the IRED
radiant intensity and the distance between the IRED and the camera [21].
In addition, and as in [21], the behavior of E[r] with each of the magnitudes affecting the IRED image formation process was measured by fixing values for the other magnitudes. For example, in order
to discover how E[r] behaves with camera exposition time, images were captured using fixed values for the emitter radiant intensity and distance whilst varying the exposition time. A similar
methodology was used to obtain all E[r]'s behaviors.
As in [21], the same E[r] behaviours with defined magnitudes were obtained. Therefore, all measured behaviors could be integrated into a unique expression as follows:
E_r = (\tau_1 t + \tau_2) \times (\rho_1 P_e + \rho_2) \times (\delta_1 \frac{1}{d^2} + \delta_2)    (2)
where τ[1], τ[2], δ[1], δ[2], ρ[1], and ρ[2] are the model parameters, and t, P[e], and d are the exposition time, the IRED radiant intensity and the distance between the camera and the emitter, respectively.
From (2), this can be re-written as:
E_r = k_1 \frac{P_e t}{d^2} + k_2 \frac{P_e}{d^2} + k_3 \frac{t}{d^2} + k_4 \frac{1}{d^2} + k_5 P_e t + k_6 t + k_7 P_e + k_8    (3)
where k[j], 1 ≤ j ≤ 8, are model parameters that can be related to τ[1], τ[2], δ[1], δ[2], ρ[1], and ρ[2]. Expression (3) is obtained by expanding the products in (2).
If images captured with different camera exposition times are analyzed, then (3) can be written by considering the differences of accumulated image irradiances as follows:
E_{r,t_n} - E_{r,t_r} = k_1 \frac{P_e}{d^2}(t_n - t_r) + k_3 \frac{t_n - t_r}{d^2} + k_5 P_e (t_n - t_r) + k_6 (t_n - t_r)    (4)
where t[n] and t[r] are the different camera exposition times, t[r] is the fixed reference exposition time and t[n] represents different exposition time values.
Expression (4) was used as the proposed model to characterize the IRED-Camera set. Therefore, values for k in (4) must be obtained in a calibration process.
Once the k parameters have been obtained, expression (4) can be solved for the distance estimation:
d_n = \sqrt{\frac{k_1 P_e (t_n - t_r) + k_3 (t_n - t_r)}{E_{r,t_n} - E_{r,t_r} - k_5 P_e (t_n - t_r) - k_6 (t_n - t_r)}}    (5)
The aim was to use the proposed differential model to estimate the distance between the camera and the IRED, where the model analyzes images of the IRED captured with different exposition times, and
also assumes that distance and emitter radiant intensity are constant during the image capturing process.
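A minimal sketch of Equation (5) as code (variable names are placeholders; the k parameters are assumed to come from the calibration step described below):

import numpy as np

def estimate_distance(er_n, er_r, t_n, t_r, p_e, k1, k3, k5, k6):
    # er_n, er_r : accumulated image irradiances at exposure times t_n and t_r
    # p_e        : IRED radiant intensity, held fixed while the images are captured
    # k1..k6     : model parameters fitted from Equation (4)
    dt = t_n - t_r
    numerator = (k1 * p_e + k3) * dt
    denominator = (er_n - er_r) - (k5 * p_e + k6) * dt
    return np.sqrt(numerator / denominator)  # follows from solving (4) for d

In the experiments reported below, three such estimates (t_n = 9, 10 and 11 ms against the reference t_r = 2 ms) are averaged to give the final distance.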
The method described in this paper, based on differential image processing, presents several advantages, innovations and benefits related to previous studies, which can be summarized as follows:
the development of a sensor for distance measuring and pose detection based on grey level intensity of the images;
the development of a method for obtaining the distance between two points (IR-camera) using a differential;
the sensor and method are economic, since cameras are already in the application environment (requiring only one IRED for each object/mobile to be detected);
the method is simple to launch;
installation of the system is easy;
the system is non-invasive and safe;
the sensorial system is complementary to other methods, facilitating ease of data fusion; etc
For practical reasons, a Basler camera was used, with a SFH42XX High Powered IRED with 950 nm emission peak; thus an interference filter centered at 950 nm and with a bandwidth of 40 nm was added to
the camera, which improved the signal/noise ratio since it eliminated background illumination: all visible light, and infrared light up to 930 nm and over 970 nm.
In order to use (5) for distance estimation, k[i] parameters must be estimated in a calibration process. This process was implemented by analyzing an image sequence of 4 different distances (d1 = 400
cm; d2 = 350 cm; d3 = 300 cm; and d4 = 250 cm), 10 different exposition time differences, assuming t[r] = 2 ms as the reference exposition time (the reference exposition time selected was
sufficiently low to eliminate possible offset illumination and dark current effects) and t[n] = {3, 4, 5, …, 12} ms, and 3 different IRED radiant intensities, which were selected by varying the
diode's polarization current. A representative result for the calibration process is shown in Figure 2.
In Figure 2, modeled and measured differences of accumulated image irradiances are shown versus exposition time differences. These values were extracted from images used in the calibration process,
specifically for the distance d[4] = 250 cm, where three different emitter radiant intensities and 10 different exposition time differences were considered. In addition, Figure 2 shows the
effectiveness of the model calibration process. Once the calibration process has been carried out, experiments for distance estimation can be conducted.
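The paper does not spell out the fitting procedure; one straightforward possibility is an ordinary least-squares fit of Equation (4) over all calibration image pairs, sketched below (array names are placeholders):

import numpy as np

def fit_k_parameters(er_diff, dt, p_e, d):
    # er_diff : E_r(t_n) - E_r(t_r) for each calibration image pair
    # dt      : matching exposure-time differences t_n - t_r
    # p_e     : IRED radiant intensity used for each pair
    # d       : known camera-to-IRED distance for each pair (all 1-D arrays)
    A = np.column_stack([
        p_e * dt / d**2,  # coefficient of k1
        dt / d**2,        # coefficient of k3
        p_e * dt,         # coefficient of k5
        dt,               # coefficient of k6
    ])
    k, *_ = np.linalg.lstsq(A, er_diff, rcond=None)
    return k  # estimates of [k1, k3, k5, k6]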
Two experiments were performed to test the validity of the differential model for distance estimation. In the first experiment, the 200 cm to 380 cm range was considered whilst the second considered
the 400 cm to 600 cm range. The second experiment showed greater error since the distances were greater and were also beyond the range for which the sensor had been calibrated (that is, the range
used in the first experiment). Nevertheless, the second experiment shows that even so, sufficiently accurate measurements can still be taken. For both experiments, distance was increased stepwise by
20 cm and the camera was aligned with the IRED.
In addition, to improve the efficiency of the methodology in practical applications, four images were analysed to estimate distance. The first image was captured with a t[r] = 2 ms of exposition time
and the others were captured with t[1] = 9 ms, t[2] = 10 ms and t[3] = 11 ms respectively, thus obtaining three distance estimations. The final distance estimation was the mean value of these
distance estimations. The IRED radiant intensity was fixed at P[e]2 corresponding to a diode polarization current of 5 mA.
Experiments were carried out using the method described above. The equipment used is indicated in Section 4. Once images from the optimal exposition time range had been selected, the differential
method for distance estimation was applied. The distance estimation results for each difference in exposition time, corresponding to the first experiment, are shown in Figure 3.
The final distance measurement is shown in Table 1.
In the first distance range, errors in distance estimation using the differential model are less than 8 cm., representing a relative error of less than a 2.5%. The second experiment considered longer
distances, and results are given in Table 2.
In this case, the differential model was less accurate than in the first experiment.
An alternative method based on a radiometrical approach to the image formation process has been described for estimating the distance between a camera and an IRED. This method estimates the inverse
RCRF in order to obtain a measure of the accumulated image irradiance, and shows that the accumulated image irradiance depends linearly on the emitter radiant intensity, the camera exposition time
and the inverse square distance between the IRED and the camera. These behaviors are incorporated into a model, which can be re-written in a differential form.
The differential model has four parameters that must be estimated by means of a calibration process. Once the model's parameters have been calculated, the model's expression can be solved for
distance estimation. Two distance ranges were considered for model validation. In the first range, errors were less than 8 cm. However, in the second experiment the errors were higher than in the first.
In conclusion, the proposed differential model represents an alternative method for estimating the distance between an IRED and a camera through analysis of image grey-level intensities.
This study was made possible thanks to the SILPAR II DPI2006-05835 projects sponsored by the DGI of the Spanish Ministry of Education at the University of Alcalá. We would also like to thank the
Spanish Agency for International Development Cooperation (AECID), under the aegis of the Ministry of Foreign Affairs and Cooperation (MAEC).
References
Novotny, P.M.; Ferrier, N.J. Using infrared sensors and the Phong illumination model to measure distances. Proceedings of International Conference on Robotics and Automation, Detroit, MI, USA, May 10–15, 1999; Vol. 2, 1644–1649.
Barshan, B.; Aytac, T. Position-invariant surface recognition and localization using infrared sensors. 2003, 42, 3589–3594; doi:10.1117/1.1621005.
Benet, G.; Blanes, F.; Simo, J.E.; Pérez, P. Using infrared sensors for distance measurement in mobile robots. 2002, 40, 255–266; doi:10.1016/S0921-8890(02)00271-3.
Faugeras, O.D. MIT, Cambridge, MA, USA, 1993.
Fernández, I.; Mazo, M.; Lázaro, J.L.; Martin, P.; García, S. Local positioning system (LPS) for indoor environments using a camera array. Proceedings of the 11th International Conference on Advanced Robotics, University of Coimbra, Portugal, June 30–July 3, 2003; 613–618.
Ito, M. Robot vision modeling — camera modelling and camera calibration. 1991, 5, 321–335.
Adiv, G. Determining three-dimensional motion and structure from optical flow generated by several moving objects. 1985, 7, 384–401; doi:10.1109/TPAMI.1985.4767678; PMID 21869277.
Fujiyoshi, H.; Shimizu, S.; Nishi, T.; Nagasaka, Y.; Takahashi, T. Fast 3D position measurement with two unsynchronized cameras. Proceedings of 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation, Kobe, Japan, July 16–20, 2003; 1239–1244.
Heikkilä, J. Geometric camera calibration using circular control points. 2000, 22, 1066–1077; doi:10.1109/34.879788.
Luna-Vázquez, C.A. Medida de la posición 3-D de los cables de contacto que alimentan a los trenes de tracción eléctrica mediante vision. Universidad de Alcalá, Madrid, Spain, 2006.
Lázaro, J.L.; Gardel, A.; Mazo, M.; Mataix, C.; García, J.C. Guidance of autonomous vehicles by means of structured light. Proceedings of IFAC Workshop on Intelligent Components for Vehicles, Seville, Spain, June 6, 1997.
Grosso, E.; Sandini, G.; Tistarelli, M. 3-D object recognition using stereo and motion. 1989, 19, 1465–1476; doi:10.1109/21.44065.
Lázaro, J.L. Modelado de entornos mediante infrarrojos. Aplicación al guiado de robots móviles. Universidad de Alcalá, Madrid, Spain, 1998.
Lázaro, J.L.; Gardel, A.; Mazo, M.; Mataix, C.; García, J.C.; Mateos, R. Mobile robot with wide capture active laser sensor and environment definition. 2001, 30, 227–248; doi:10.1023/A:1008108427462.
Lázaro, J.L.; Mazo, M.; Mataix, C. 3-D environments recognition using structured light and active triangulation for the guidance of autonomous vehicles. Proceedings of ASEE/IEEE Frontiers in Education Conference, Madison, WI, USA, June 1997; No. A-4.
Lázaro, J.L.; Mataix, C.; Gardel, A.; Mazo, M.; García, J.C. 3-D vision system using structured light and triangulation to create maps of distances for guiding autonomous vehicles. Proceedings of the International Conference on Control and Industrial Systems, La Habana, Cuba, 2000; ISBN 959-237-031-1; No. C-53.
Hartley, R.; Zisserman, A. 2nd Ed.; Press Syndicate of the University of Cambridge: Cambridge, UK, 2003.
Mitsunaga, T.; Nayar, S.K. Radiometric self calibration. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, June 1999; 374–380.
Grossberg, M.D.; Nayar, S.K. Modeling the space of camera response functions. 2004, 26, 1272–1282; doi:10.1109/TPAMI.2004.88; PMID 15641715.
Grossberg, M.D.; Nayar, S.K. Determining the camera response from images: What is knowable? 2003, 25, 1455–1467; doi:10.1109/TPAMI.2003.1240119.
Reinhard, E.; Ward, G.; Pattanaik, S.; Debevec, P. Morgan Kaufmann: San Francisco, CA, USA, 2005.
McCluney, W.R. Artech House: Norwood, MA, USA, 1994.
Ivanova, B.B.; Tsalev, D.L.; Arnaudov, M.G. Validation of reducing-difference procedure for the interpretation of non-polarized infrared spectra of n-component solid mixtures. 2006, 69, 822–828; doi:10.1016/j.talanta.2005.11.026; PMID 18970643.
Cano-Garcia, A.; Lázaro, J.L.; Fernandez, P.; Esteban, O.; Luna, C.A. A preliminary model for a distance sensor, using a radiometric point of view. 2009, 7, 17–23; doi:10.1166/sl.2009.1004.
Figure 1. Representative samples of images used in the emitter-to-camera distance characterization.
Figure 2. Representative results for the differential model calibration process. Three different emitter radiant intensities were considered by changing the diode polarization current.
Figure 3. Representative distance estimations and absolute error for Experiment 1.
Table 1. Final distance estimation result for Experiment 1.
Real Dist. (cm) Est. Dist. (cm) Abs. Err. (cm) Relat. Err. (%)
200 204.4 4.4 2.2
220 223.5 3.5 1.6
240 245.1 5.1 2.1
260 262.7 2.7 1.0
280 284 4.0 1.4
300 306 6.0 2.0
320 327.5 7.5 2.3
340 345.6 5.6 1.6
360 367.7 7.7 2.1
380 387.6 7.6 2.0
Table 2. Final distance estimation result for Experiment 2.
Real Dist. (cm) Est. Dist. (cm) Abs. Err. (cm) Relat. Err. (%)
400 412.2 12.2 3.1
420 425.6 5.6 1.3
440 452.6 12.6 2.9
460 487.5 27.5 6.0
480 495.1 15.1 3.1
500 508.3 8.3 1.7
520 543.1 23.1 4.4
540 567.5 27.5 5.1
560 568.0 8.0 1.4
580 595.1 15.1 2.6
600 615.7 15.7 2.6
|
{"url":"http://www.mdpi.com/1424-8220/9/11/8896/xml","timestamp":"2014-04-20T01:29:43Z","content_type":null,"content_length":"56828","record_id":"<urn:uuid:974317f6-db8a-449e-8a5b-a854d5926882>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Your thoughts on the math Olympiad
I have heard many say that being able to solve Olympiad problems is by no means a prerequisite to becoming a good mathematician, physicist, etc. However, would one benefit from practicing math
competition problems if he is older, i.e. undergrad level and on. Would there be any benefit to the person by being able to solve the problems in unrelated fields or for solving unrelated problems as
a researcher or on an unrelated exam?
|
{"url":"http://www.physicsforums.com/showthread.php?t=316480","timestamp":"2014-04-18T18:20:04Z","content_type":null,"content_length":"27814","record_id":"<urn:uuid:efeda12d-b6b8-4539-ace4-a903d9b3a49f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Economics and Mathematics
Directors of undergraduate studies: Anthony Smith (Economics), Rm. 306, 28 Hillhouse Ave., 432-3583 or 432-3574, qazi.azam@yale.edu; Andrew Casson (Mathematics), 216 LOM, 432-7056,
The Economics and Mathematics major is intended for students with a strong intellectual interest in both mathematics and economics and for students who may pursue a graduate degree in economics.
Prerequisites
The major has prerequisites in both mathematics and economics: MATH 120; ECON 110 or 115; and ECON 111 or 116. With permission of the directors of undergraduate studies, upper-level
courses may be substituted for prerequisite courses. Upper-level courses substituted for prerequisites do not count toward the total of twelve term courses (beyond the introductory level in economics
and mathematics) required for the major.
Requirements of the major
A total of twelve term courses is required beyond the introductory level in economics and in mathematics: seven term courses in economics and five term courses in
mathematics. These courses must include:
1. One intermediate microeconomics course chosen from ECON 121 or 125, and one intermediate macroeconomics course chosen from ECON 122 or 126
2. A year of mathematical economics, ECON 350 and 351
3. Two courses in econometrics, ECON 135 and 136 (with permission of the director of undergraduate studies in Economics, STAT 241 and 242 may be taken instead of ECON 135, in which case they count as
one economics course and not as mathematics courses)
4. A course in linear algebra, MATH 222 or 225 (or 230 and 231, for two course credits)
5. An introductory course in analysis, MATH 300 or 301
6. Senior seminar in mathematics, MATH 480
Because optimization is an important theme in mathematics and is particularly relevant for economics, OPRS 235 is recommended for students majoring in Economics and Mathematics and can be counted
toward either the Mathematics or Economics course requirements.
Credit/D/Fail courses
For students in the Class of 2016 and subsequent classes, courses taken Credit/D/Fail may not be counted toward the requirements of the major.
Distinction in the Major
To be considered for Distinction in the Major, students must meet specified grade standards (see the Undergraduate Curriculum section) and submit a senior essay written
either in an Economics department seminar or in ECON 491 or in 491 and 492 to the Economics department; for details see under Economics. (The paper must be written in a course taken in the senior
year.) All courses beyond the introductory level in Mathematics and Economics are counted in the computation of grades for Distinction.
Approval of program
Students interested in the major should consult both directors of undergraduate studies, and verify with each that their proposed program meets the relevant guidelines.
Registration forms must be signed by both directors of undergraduate studies each term.
Prerequisites: MATH 120; ECON 110 or 115; ECON 111 or 116
Number of courses: 12 term courses beyond prereqs (incl senior req)
Distribution of courses: 5 courses in math and 7 in econ
Specific courses required: ECON 121 or 125; ECON 122 or 126; ECON 135, 136, 350, 351; MATH 222 or 225 (or 230, 231); MATH 300 or 301
Substitution permitted: STAT 241 and 242 for ECON 135, with permission of DUS in Econ
Senior requirement: Senior sem in math (MATH 480); optional senior essay in Econ
|
{"url":"http://catalog.yale.edu/ycps/subjects-of-instruction/economics-mathematics/","timestamp":"2014-04-16T22:19:07Z","content_type":null,"content_length":"18559","record_id":"<urn:uuid:b53c6fbf-c353-4826-9f3a-cc5ac4b5d973>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
|
find surface area obtained by rotating `y=r^2,r in[1,4]` about the y-axis - Homework Help - eNotes.com
find surface area obtained by rotating `y=r^2,r in[1,4]` about the y-axis
Sorry! I forgot to change the range of y in my earlier answer.
The surface area obtained by rotating the curve x=f(y) about the y-axis, in the interval [a,b], is given by the relation:
Here, `y=r^2`
`rArr r=sqrty`
Also, note that when r=1, y=1, and when r=4, y=16
`= int_1^16(2pi*sqrty*sqrt(1+1/(4y))dy)`
`=pi int_1^16(sqrt(4y+1)dy)`
Let, 4y+1=t
4dy = dt, so dy = (1/4)dt
(When y=1, t=5 and when y=16, t=65)
therefore, `S= pi/4int_5^65sqrttdt`
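For completeness (this last step is not carried out above), the remaining integral evaluates to `S = pi/4*(2/3)[t^(3/2)]_5^65 = pi/6*(65^(3/2)-5^(3/2)) approx 268.5`.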
The surface area obtained by rotating the curve x=f(y) about the y-axis, in the interval [a,b], is given by the relation:
Here, `y=r^2`
`rArr r=sqrty`
So, `S=int_1^4(2pi*r*sqrt(1+(1/2y^(-1/2))^2)dy)`
Put `(4y+1)=t`
`rArr dy=1/4dt`
(Note that when y=1, t=5 and when y=4, t=17)
`therefore S=pi/4int_5^17(sqrttdt)`
|
{"url":"http://www.enotes.com/homework-help/find-surface-area-obtained-by-rotating-y-r-2-r-1-4-454792","timestamp":"2014-04-24T03:32:37Z","content_type":null,"content_length":"27624","record_id":"<urn:uuid:6779218c-c3ce-4fed-b182-c77a6f87d09a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Multiple-Conclusion Logic
Results 1 - 10 of 17
, 1997
"... This paper proposes and studies a typed λ-calculus for classical linear logic. I shall give an explanation of a multiple-conclusion formulation for classical logic due to Parigot and compare it
to more traditional treatments by Prawitz and others. I shall use Parigot's method to devise a natu ..."
Cited by 8 (0 self)
This paper proposes and studies a typed λ-calculus for classical linear logic. I shall give an explanation of a multiple-conclusion formulation for classical logic due to Parigot and compare
it to more traditional treatments by Prawitz and others. I shall use Parigot's method to devise a natural deduction formulation of classical linear logic. This formulation is compared in detail to
the sequent calculus formulation. In an appendix I shall also demonstrate a somewhat hidden connexion with the paradigm of control operators for functional languages which gives a new computational
interpretation of Parigot's techniques.
- Department of Mathematics, Instituto Superior Técnico
"... A well-known result by W\'ojcicki-Lindenbaum shows that any tarskian logic is many-valued, and another result by Suszko shows how to provide 2-valued semantics to these very same logics. This
paper investigates the question of obtaining 2-valued semantics for many-valued logics, including paraconsis ..."
Cited by 8 (6 self)
A well-known result by Wójcicki-Lindenbaum shows that any tarskian logic is many-valued, and another result by Suszko shows how to provide 2-valued semantics to these very same logics. This paper
investigates the question of obtaining 2-valued semantics for many-valued logics, including paraconsistent logics, in the lines of the so-called ``Suszko's Thesis". We set up the bases for developing
a general algorithmic method to transform any truth-functional finite-valued semantics satisfying reasonable conditions into a computable quasi tabular 2-valued semantics, that we call dyadic. We
also discuss how ``Suszko's Thesis" relates to such a method, in the light of truth-functionality, while at the same time we reject an endorsement of Suszko's philosophical views about the
misconception of many-valued logics.
, 2003
"... This is an initial systematic study of the properties of negation from the point of view of abstract deductive systems. A unifying framework of multiple-conclusion consequence relations is
adopted so as to allow us to explore symmetry in exposing and matching a great number of positive contextua ..."
Cited by 8 (6 self)
This is an initial systematic study of the properties of negation from the point of view of abstract deductive systems. A unifying framework of multiple-conclusion consequence relations is adopted so
as to allow us to explore symmetry in exposing and matching a great number of positive contextual sub-classical rules involving this logical constant ---among others, well-known forms of proof by
cases, consequentia mirabilis and reductio ad absurdum. Finer definitions of paraconsistency and the dual paracompleteness can thus be formulated, allowing for pseudo-scotus and ex contradictione to
be differentiated and for a comprehensive version of the Principle of Non-Triviality to be presented. A final proposal is made to the effect that ---pure positive rules involving negation being often
fallible--- a characterization of what most negations in the literature have in common should rather involve, in fact, a reduced set of negative rules.
- Paraconsistency with no Frontiers
"... For any given consistent tarskian logic it is possible to find another nontrivial logic that allows for an inconsistent model yet completely coincides with the initial given logic from the point
of view of their associated single-conclusion consequence relations. A paradox? This short note... ..."
Cited by 3 (3 self)
For any given consistent tarskian logic it is possible to find another nontrivial logic that allows for an inconsistent model yet completely coincides with the initial given logic from the point of
view of their associated single-conclusion consequence relations. A paradox? This short note...
, 2001
"... We show that several known theorems on graphs and digraphs are equivalent. The list of equivalent theorems include Kotzig's result on graphs with unique 1-factors, a lemma by Seymour and Giles,
theorems on alternating cycles in edge-colored graphs, and a theorem on semicycles in digraphs. We co ..."
Cited by 1 (1 self)
We show that several known theorems on graphs and digraphs are equivalent. The list of equivalent theorems include Kotzig's result on graphs with unique 1-factors, a lemma by Seymour and Giles,
theorems on alternating cycles in edge-colored graphs, and a theorem on semicycles in digraphs. We consider computational problems related to the quoted results; all these problems ask whether a
given (di)graph contains a cycle satisfying certain properties which runs through p prescribed vertices. We show that all considered problems can be solved in polynomial time for p < 2 but are
NP-complete for p ≥ 2.
- CombLog’04 — Proceedings of the Workshop on Combination of Logics: theory and applications, held in Lisbon, PT , 2004
"... This text aims at providing a bird’s eye view of possible-translations semantics ([10, 24]), defined, developed and illustrated as a very comprehensive formalism for obtaining or for
representing semantics for all sorts of logics. With that tool, a wide class of complex logics will very naturally tu ..."
Cited by 1 (1 self)
This text aims at providing a bird’s eye view of possible-translations semantics ([10, 24]), defined, developed and illustrated as a very comprehensive formalism for obtaining or for representing
semantics for all sorts of logics. With that tool, a wide class of complex logics will very naturally turn out to be (de)composable by way of some suitable combination of simpler logics. Several
examples will be mentioned, and some related special cases of possible-translations semantics, among which are society semantics and non-deterministic semantics, will also be surveyed.
1 Logics, translations, possible-translations
Let a logic L be a structure of the form 〈S, ⊢〉, where S denotes its language (its set of formulas) and ⊢ ⊆ Pow(S)×Pow(S) represents its associated consequence relation (cr), somehow defined so as to embed some formal model of reasoning. Call any subset of S a theory. As usual, capital Greek letters will denote theories, and lowercase Greek will denote formulas; a sequence such as Γ, α, Γ′ ⊢ ∆′, β, ∆ should be read as asserting that Γ ∪ {α} ∪ Γ′ ⊢ ∆′ ∪ {β} ∪ ∆. Morphisms between any two of the above structures will be called translations. So, given any two logics, L1 = 〈S1, ⊢1〉 and L2 = 〈S2, ⊢2〉, a mapping t: S1 → S2 will constitute a translation from L1 into L2 just in case the following holds:
- PROC. OF THE TOKYO CONFERENCE ON LINEAR LOGIC , 1996
"... This paper considers a typed -calculus for classical linear logic. I shall give an explanation of a multiple-conclusion formulation for classical logic due to Parigot and compare it to more
traditional treatments by Prawitz and others. I shall use Parigot's method to devise a natural deduction formu ..."
This paper considers a typed -calculus for classical linear logic. I shall give an explanation of a multiple-conclusion formulation for classical logic due to Parigot and compare it to more
traditional treatments by Prawitz and others. I shall use Parigot's method to devise a natural deduction formulation of classical linear logic. I shall also demonstrate a somewhat hidden connexion
with the continuation-passing paradigm which gives a new computational interpretation of Parigot's techniques and possibly a new style of continuation programming.
"... Whilst results from Structural Proof Theory can be couched in many formalisms, it is the sequent calculus which is the most amenable of the formalisms to metamathematical treatment. Constructive
syntactic proofs are filled with bureaucratic details; rarely are all cases of a proof completed in the l ..."
Whilst results from Structural Proof Theory can be couched in many formalisms, it is the sequent calculus which is the most amenable of the formalisms to metamathematical treatment. Constructive
syntactic proofs are filled with bureaucratic details; rarely are all cases of a proof completed in the literature. Two intermediate results can be used to drastically reduce the amount of effort
needed in proofs of Cut admissibility: Weakening and Invertibility. Indeed, whereas there are proofs of Cut admissibility which do not use Invertibility, Weakening is almost always necessary. Use of
these results simply shifts the bureaucracy, however; Weakening and Invertibility, whilst more easy to prove, are still not trivial. We give a framework under which sequent calculi can be codified
and analysed, which then allows us to prove various results: for a calculus to admit Weakening and for a rule to be invertible in a calculus. For the latter, even though many calculi are
investigated, the general condition is simple and easily verified. The results have been applied to G3ip, G3cp, G3c, G3s, G3-LC and G4ip. Invertibility is important in another respect; that of
proof-search. Should all rules in a calculus be invertible, then terminating root-first proof search gives a decision procedure
, 2009
"... We construct explicit bases of single-conclusion and multiple-conclusion admissible rules of propositional Łukasiewicz logic, and we prove that every formula has an admissibly saturated
approximation. We also show that Łukasiewicz logic has no finite basis of admissible rules. ..."
Add to MetaCart
We construct explicit bases of single-conclusion and multiple-conclusion admissible rules of propositional Łukasiewicz logic, and we prove that every formula has an admissibly saturated
approximation. We also show that Łukasiewicz logic has no finite basis of admissible rules.
"... According to Suszko’s Thesis, any multi-valued semantics for a logical system can be replaced by an equivalent bivalent one. Moreover: bivalent semantics for families of logics can frequently be
developed in a modular way. On the other hand bivalent semantics usually lacks the crucial property of an ..."
Add to MetaCart
According to Suszko’s Thesis, any multi-valued semantics for a logical system can be replaced by an equivalent bivalent one. Moreover: bivalent semantics for families of logics can frequently be
developed in a modular way. On the other hand, bivalent semantics usually lacks the crucial property of analycity, a property which is guaranteed for the semantics of multi-valued matrices. We show
that one can get both modularity and analycity by using the semantic framework of multi-valued non-deterministic matrices. We further show that for using this framework in a constructive way it is
best to view “truth-values” as information carriers, or “information-values”.
Two Cultures in the Philosophy of Mathematics?
Posted by David Corfield
A friend of mine, Brendan Larvor, and I are wondering whether it would be a good idea to stage a conference which would bring together philosophers of mathematics from different camps.
Brendan is the author of Lakatos: An Introduction, and someone who believes as I do that one of our most important tasks is the Lakatosian one of attempting to understand the rationality of
mathematics through the history of its practice.
By contrast, a much more orthodox philosophical approach to mathematics in the English-speaking world, well represented in the UK, is to address the question of whether mathematics is reducible to
logic. To gain an idea of the current state of play here, you can take a look at What is Neologicism? by Linsky and Zalta. You can see from the final sentence of section 1 that organisational issues,
such as whether category theory is a good language for mathematics, are irrelevant to them.
Now, it could be that the town is big enough for the both of us. Just as you may choose to work on trying to discover the real story behind elliptic cohomology, and have very little interest in
random graph theory, so there might plausibly be Two Cultures of philosophy of mathematics. Still, I would like to see whether it’s possible to discover why we are led to ask such very different
questions about mathematics.
One issue that will inevitably arise is the extent to which it matters whether you mathematicians prefer one of the two approaches. Minhyong Kim mentioned the misconceptions about each other’s fields
that can afflict scientists and philosophers. If Brendan and I receive an appreciative nod from a mathematician, e.g., here and here, neologicists could well answer that this shows nothing. We’re not
here to please mathematicians. Why should they have any good notion of how philosophy ought to conduct itself?
Nor, I take it, does it matter to them that whereas the original logicism of Frege and Russell took place within spitting distance of the mathematical work of central figures such as Dedekind and
Hilbert, the current neo-logicism is not even on today’s mathematical radar.
Still, I think we ought to meet up. An external view on one’s work is never a bad thing, and I certainly learned from Alexander Paseau’s review of my book in Studies in History and Philosophy of Science.
Posted at January 2, 2008 1:40 PM UTC
Re: Two Cultures in the Philosophy of Mathematics?
It may also be beneficial to have a conference on pure and applied mathematics.
The latter seems to have done more for civilization.
Perhaps this oversimplifies the relation of pure and applied mathematics:
Archimedes was a mathematician and mechanical engineer.
Newton was a mathematician and mechanical engineer.
Maxwell was a mathematician and electrical engineer.
Steinmetz was a mathematician and electrical engineer.
Posted by: Doug on January 2, 2008 6:14 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Doug wrote:
The latter seems to have done more for civilization.
There’s a simple explanation: when pure mathematics does something for civilization that nonmathematicians understand, it gets renamed ‘applied mathematics’.
Mathematics is like a tree: you can’t have fruits without roots.
Posted by: John Baez on January 2, 2008 9:53 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
This comment reminds me of a neat description of the difference between the disciplines of Artificial Intelligence and Computer Science: Researchers in AI study very challenging problems; whenever
one of these difficult problems is solved, it ceases to be AI and becomes part of CS.
Posted by: Peter on January 6, 2008 4:16 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Hi John Baez,
I meant only to equate applied with pure mathematics rather than to suggest, as I did, that one branch is superior to the other.
For example, the Princeton, Stanford and RAND applied mathematician Richard Bellman is credited with inventing dynamic programming associated with optimal control theory.
Another Bellman biography states “In those days applied practitioners were regarded as distinctly second-class citizens of the mathematical fraternity.”
John von Neumann was a balanced mathematician, with 60 papers each in pure and applied mathematics.
Posted by: Doug on January 10, 2008 2:12 AM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Newton was a mathematician and mechanical engineer.
Hi Doug,
I am interested in Newton in the historical context. I would appreciate it if you could elaborate on your justification for calling Newton a mechanical engineer.
Thank you.
Posted by: Pioneer1 on January 3, 2008 1:13 AM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Hi Pioneer1,
Recall that I am oversimplifying, or using the term engineer in the most liberal sense.
See the first-paragraph definition in the Wikipedia article on mechanical engineering.
Newton built a machine, the Newtonian (reflecting) telescope; see the Wikipedia listing of its advantages and disadvantages.
Newton also experimented in alchemy; one might consider him a chemical engineer, in the most liberal sense.
Posted by: Doug on January 10, 2008 1:41 AM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
I think I misunderstood you. I thought you meant engineer in the sense we use today. But still I think that Galileo fits that list better than Newton. Galileo wrote a book used by engineers for a long time. He was an engineer in the tradition of Archimedes. Compared to Galileo, Newton was not an engineer. Newton was a world builder, yes, but that’s not really engineering; it is theory.
Posted by: Pioneer1 on January 15, 2008 2:32 AM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
This would be a wonderful conference!
I’ve published lots of math, and have taught Philosophy (Epistemology, History of Scientific Revolution, The Frontiers of Ignorance, …). I find that this blog is near the center of, and in contact
with, the best current work in the intersection.
David Corfield, the studies of and about Lakatos, and the oddities of neologicism are profound.
John Baez et al. add a great reality-testing set of insights as to how this all connects with Physics.
Great stuff!
I presume this idea will be floated next week at the big Math conference in San Diego?
Posted by: Jonathan Vos Post on January 2, 2008 7:29 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Jonathan vos Post wrote:
I presume this idea will be floated next week at the big Math conference in San Diego?
I doubt it. I never go to those big AMS conferences anymore — I’m not even a member — and David and Urs are too far away.
But, it makes sense to ask, just so café regulars and lurkers can meet up if they want to:
Does anybody reading this plan to go to the 2008 Joint Mathematics Meetings in San Diego from January 6th to 9th? If so, post a comment!
Posted by: John Baez on January 2, 2008 9:59 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
[…] too far away.
This reminds me: hadn’t you also been invited to QGT08 in Kolkata next week?
The poster carries your name. And mine for that matter. But I had to cancel, due to teaching duties, unfortunately. I am very much regretting that.
Is any $n$-Café reader attending this conference?
Posted by: Urs Schreiber on January 2, 2008 10:06 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Yes, I’d been planning to attend that conference, but I’ve been feeling sort of overwhelmed by overwork, and my wife had been travelling a lot during the fall, and flying to India during the first
week of class would be pretty bad for me and the students, so I decided to cancel.
I’ve been having a very productive winter break, staying home and breaking through the logjam of half-written papers that had been making me miserable lately. So, my mood is different than when I
cancelled that trip. But, I’m still very glad I don’t need to fly across the world in a few days from now. Instead, I can focus on finishing up that paper with Danny on the classifying space for
Posted by: John Baez on January 2, 2008 10:52 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
I’ll be there. Furthermore - I’ll be talking. Twice! And both times, it’ll be on A-infinity stuff…
I would very much like to meet up with people around this blog, and bloggers in general, and interesting people in general. I’ll be using my Swedish cell phone: +46706450283 for coordination; please
drop me text messages there!
Posted by: Mikael Vejdemo Johansson on January 3, 2008 4:29 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
“I’ve … taught Philosophy (Epistemology, History of Scientific Revolution, The Frontiers of Ignorance, …).”
That makes me curious. If you have some texts online, would you post the links?
Posted by: Thomas Riepe on January 5, 2008 12:16 PM | Permalink | Reply to this
If we had worlds enough and time; Re: Two Cultures in the Philosophy of Mathematics?
Dear Thomas, my degrees (Caltech and UMass/Amherst) are in Math (specialty was advanced mathematical logic), English Literature, and Computer Science. Plus PhD work (Thesis, yet All But Degree) in “Molecular Cybernetics”. Hence I have not taught Philosophy for college credit. This is because amateur passion does not equate with credentials in most USA universities (or even secondary schools).
I am keenly interested in the Philosophy of Science, and Philosophy of Mathematics. But, rather, as follows:
CENTER FOR THE STUDY OF THE FUTURE, Ventura, CA 1995-Present
* The Center for the Study of the Future is a “supersite” of the Elderhostel organization, which has had over 2,000,000 adult students in the past 25 years
* I report directly to the co-founders and co-chairmen
* I have taught roughly 2,000 students of average age 65
* I have taught classes in Pasadena, Monrovia, San Diego, Ventura, Costa Mesa, Westwood, and Beverly Hills
* Courses which I developed and taught include:
* “The Search for Other Earths” – astronomy and planetary science
* “Time Machines” – Physics, Philosophy, and Fiction of Time Travel
* “New Paradigms” – the Structure of Scientific Revolution
* “How Do We Know What We Know” – Epistemology and Psychophysics
* “The Frontiers of Ignorance” – unsolved problems of science
* “Undersea Living” – the biology and history of undersea habitats
* “Human Evolution” – anthropology and archaeology
My formal teaching in colleges and universities and high schools is limited to Mathematics, and Astronomy.
I’d love to develop my very extensive notes (many digitized in WordPerfect on an antique computer no longer functioning, but I think backed up to secondary storage, diskettes, even CD-ROM) of the
philosophy classes that I taught to these motivated senior citizens, and their reactions to texts I worked from, with them, by Kuhn, Lakatos, and others.
But, without institutional support, grant, or book contract, I’m sad to say that it doesn’t rise high enough in my hierarchy of priorities. This comes from having an 11-room home, 3 cars, a son in
law school, and other financial pressures.
But I’d love to discuss any of this offline with you.
Posted by: Jonathan Vos Post on January 5, 2008 7:56 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Having no feeling for the landscape of academic philosophy, I can say little about the professional aspect of the conference. On the other hand, the general idea of such communication seems
very nice. Therefore, if you think it appropriate, I can try to be helpful in various small ways:
-If the meeting is somewhere near the London area, I would gladly be a passive participant. Especially so if having some practicing mathematicians involved helps with obtaining funding, for example.
(Then again, maybe it would count against you!)
-If you need it, I can provide help with recruiting other mathematicians, for example, Michael Harris. He’s normally quite occupied with family obligations, but if it’s just for a day or two, he
should be able to come over from Paris. Of course I’m assuming again that having some mathematical participation with no specifically philosophical sophistication might at least be amusing for the philosophers.
-One active participant I can recommend among mathematicians is Angus Macintyre. He’s of course among the most senior of mathematical logicians in the UK. But he’s also very communicative and
well-cultured on a broad spectrum of mathematical issues. He might be quite willing to provide an overview lecture on the evolution of foundational mathematics, and its relevance today. I find this a
fascinating development in foundations, that mainstream mathematical logicians think of logic as just being a proper branch of mathematics, and have seen some spectacular applications of logic to
number theory and algebra. I myself would definitely like to know what philosophers think about developments of this sort. On the other hand, this is perhaps just a third direction not compatible
with what you have in mind. In any case, I’ll be seeing him next week, so if you’d like me to sound him out, I’d be more than happy to.
-Finally, if there might be some symbolic meaning in having the meeting actually be hosted by a math department, I could probably arrange something at UCL.
Let me know.
Posted by: Minhyong Kim on January 3, 2008 12:20 AM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Thanks for your suggestions.
Brendan and I were considering inviting Michael Harris. John Baez and I met him in Delphi this July at the ‘Mathematics and Narrative’ conference.
Interesting that you mention Angus Macintyre. Alexandre Borovik and I have some funding for a small workshop ‘New Directions in Philosophy of Mathematics’ and were considering inviting him. Alexandre
works on the interface of model theory and group theory.
Macintyre has written some interesting papers, including the one here (see also the Lawvere paper there). I’d like to have heard this talk.
I find this a fascinating development in foundations, that mainstream mathematical logicians think of logic as just being a proper branch of mathematics, and have seen some spectacular
applications of logic to number theory and algebra.
Right. One of my pathways into philosophy was via the category theoretic idea that logic was a facet of mathematics. I remember trying to work my way through Lambek and Scott’s Introduction to Higher Order Categorical Logic before starting my Masters.
Once started, however, I was told that there’s a difference between mathematical logic and philosophical logic, and even that it’s almost a pun they share the term ‘logic’. Further, I was told that
it’s philosophical logic which does the philosophical work of telling us what our ontological commitments are (what we are committed to saying exists).
The paper on neologicism I linked to is engaged in this sort of quest, coming to the conclusion that mathematics is all expressible as some portion of third-order logic. This tells us then,
supposedly, what sort of entities mathematics is about, and how we can come to know about them.
I can’t say I was ever so convinced by what I was told, so stuck to the task of elaborating Lakatos’s ideas on concept-stretching. You can read Russell and feel the excitement he conveys that at last
philosophy has a wonderful new instrument for resolving age old problems. Somehow or other I just never got the reason why this ‘philosophical logic’ is so special.
At the IMA workshop on n-categories I met up with Steve Awodey, who although a professor in a philosophy department sees himself as a (category theoretic) mathematical logician, and has little time
for, or dialogue with, philosophical logic.
Posted by: David Corfield on January 3, 2008 2:52 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Since you two (David Corfield and Minhyong Kim) are among my favorite people to talk to, you might have fun talking to each other. Maybe you could meet up at the January 9th conference on Categories,
Logic and Physics at Imperial College? I think David said he was going there…
Posted by: John Baez on January 3, 2008 8:24 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Yes, it would be nice to meet up and I’ll try to create an occasion. But unfortunately, the meeting at Imperial is exactly the day Macintyre will be visiting me at UCL. Bad planning on my part.
When I found out, I scheduled Andreas Doering for a talk at the London Number Theory Seminar, in order to compensate.
Posted by: Minhyong Kim on January 4, 2008 11:24 AM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Yes, I am going to the Imperial event. We’ll have to find another occasion to meet up.
Posted by: David Corfield on January 4, 2008 1:54 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Meanwhile, perhaps I can insert a few naive remarks/questions about logicism.
–To me, it has always seemed clear that the question
`Can mathematics be reduced to logic?’
is entirely analogous to
`Can properties of complex physical systems be reduced to classical or quantum mechanics?’
My impression is that this analogy is in fact implicit in most mathematicians’ attitude towards logicism. That is, for the second question, we all know that there is an obvious sense in which the
answer is in the affirmative. Meanwhile, this fact is not terribly interesting or practical. And then, an increasing number of people seem to feel that the impracticality is even of conceptual significance.
Therefore, I had assumed that various subtle theorems notwithstanding, the claims of logicism should be essentially valid, but in a somewhat trivial sense. So to the extent that it’s worth anyone’s
while to give an account of mathematical process, the focus should be on global principles whereby aggregate mathematical reasoning emerges (to use that awful word) out of the small steps formalized
in logic. But it seems many clever people get bogged down instead in innumerable examples and counterexamples.
Is this a silly viewpoint that’s already been dispensed with?
–I stress that it’s not logic itself that’s asserted to be trivial, any more than classical mechanics is.
–It occurs to me as I write that even in the strong form, the claims of logicism are not as strong as those of mechanics. As far as I know, a logicist does not presume to have *predictive power*.
–I fully understand that many philosophers of mathematics might not care much about the opinions of mathematicians themselves. A famous quip says something to the effect that `Art historians are to
artists as ornithologists are to birds.’ There is a standard reading of this sentiment that’s popular among artists. But I’ve understood it to express in part the irrelevance of a bird’s *opinions*
on ornithology.
Posted by: Minhyong Kim on January 3, 2008 1:30 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
The curious thing is how little agreement there is amongst what I called above the ‘orthodox’ in English-language philosophy of mathematics.
This paper by two proponents of the Scottish form of neo-logicism marks the difference from the Zalta-Linsky form.
Frege is quoted there as showing that their programme is closer to his:
The problem becomes, in fact, that of finding the proof of the sentence, and of following it up right back to the primitive truths. If in carrying out this process, one comes only to general
logical laws and definitions, then the truth is an analytic one. [… ] [If the] proof can be derived exclusively from general laws, which themselves neither need nor admit proof, then the truth is
a priori.
In virtue of the gaplessness of the chain of inferences it is achieved that each axiom, each presupposition, hypothesis, or however else one might want to call that which a proof rests upon, is
brought to light; and thus one gains a foundation for the assessment of the epistemological nature of the proven law.
Personally, I find the following quotation from Frege much more interesting:
[Kant] seems to think of concepts as defined by giving a simple list of characteristics in no special order; but of all ways of forming concepts, that is one of the least fruitful. If we look
through the definitions given in the course of this book, we shall scarcely find one that is of this description. The same is true of the really fruitful definitions in mathematics, such as that
of the continuity of a function. What we find in these is not a simple list of characteristics; every element is intimately, I might almost say organically, connected with others.
Posted by: David Corfield on January 3, 2008 4:14 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
What confuses me about Scottish neologicism is whether it matters to them whether their logical reconstructions of portions of mathematics get to the conceptual heart of those portions.
It might be the case that this is a purely ‘in principle’ exercise, where once it has been shown that portion X is expressible with an abstraction principle and second order logic, then the job is
done, and knowledge about X is revealed to be a priori.
If this is all they’re doing I don’t think they’re being very Fregean. Jamie Tappenden has papers showing how Frege, the Riemannian, cared about carving out concepts correctly. Fruitfulness is key.
If a neologicist doesn’t care about the fruitfulness of their abstraction principles then they’ve rejected an enormously important part of Fregean thinking.
Posted by: David Corfield on January 4, 2008 1:52 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
I’m probably not phrasing any of my questions properly because it’s been so long since I tried even vaguely to understand these issues. Nevertheless, if some concept of fruitfulness came up in Frege,
maybe I can make another attempt.
It seems to be generally agreed upon that the intuitive statement of the (neo-)logicism is
Mathematics can be reduced to logic
and I would like to better understand the meaning of this. Approaching the matter more or less as a scientist, one needs to see what non-trivial questions become associated to this statement, and
what would be regarded as significant progress on these questions. So let me propose another analogy by superficially comparing logicism with the theory of universal grammar. As I understand the
latter, language is regarded as a natural phenomenon, and it would be considered a major achievement just to give a complete account of the principles for checking the correctness of all sentences.
Would it similarly be agreed upon among logicists that showing logic to provide a good account of mathematical correctness is a major goal? I presume however, that this is not the only goal. How much
further the scope of logicism extends seems to me a significant point in coming to grips with the difference between the views of philosophical logic and practicing mathematics.
To pursue the comparison a bit further, universal grammar would not claim to understand why some sentences are `better’ than others, say Shakespeare (or John Baez) over Minhyong Kim. But perhaps
logicism does make this kind of a claim about the reduction of mathematics? To relate this back to the Frege quote, is your impression that Frege was using `fruitfulness’ in the sense that a naive
practicing mathematician would use the word in referring to a mathematical concept? (To start, I am avoiding an explanation of what that sense is.)
By the way, I apologize if I’m imposing the repetition of views that were thoroughly thrashed out in earlier discussions. I started following this blog in a systematic way rather recently.
Posted by: Minhyong Kim on January 5, 2008 4:37 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
You wrote,
…is it your impression that Frege was using ‘fruitfulness’ in the sense that a naive practicing mathematician would use the word in referring to a mathematical concept?
I think Tappenden has done enough to establish this, bearing in mind that the term will have changed its sense a little over intervening decades, and there is variation between mathematicians. This
is not so surprising in view of the fact that Frege was a mathematician, working in a mathematics department.
While one is told a story of the original logicists taking the next step after Weierstrass’ arithmetization of analysis, in fact Frege disliked his work. More had to be done to carve concepts correctly.
On p. 17 (443 in text) of that fruitfulness paper by Tappenden, he gives criteria for an example of the analysis of a piece of mathematics into principles which would be satisfying to Frege.
Condition C says
The principle must have incorporated mathematically pregnant notions which made possible inferences of the sort which “increase order and regularity”, “reveal connections between matters
apparently remote” etc.
Another lesser-known fact about Frege was that he didn’t believe analysis into purely logical principles would be possible for geometry. He later gave up the thesis that it could be done even for arithmetic.
We can, of course, ask whether it matters what Frege thought. But we should note the weight his name carries for analytic philosophers. It’s easy to forget that late nineteenth-century German
mathematicians were educated in a very philosophically sophisticated environment, dominated to a large extent by Kant. This makes it more plausible that Frege’s concerns are not the same as
contemporary neo-logicists.
You also wrote,
Would it similarly be agreed upon among logicists that showing logic to provide a good account of mathematical correctness is a major goal?
I think the best way to understand the contemporary ‘orthodox’ scene is through a problem posed by Benacerraf. This is well enough described in Wikipedia:
In Mathematical Truth, he argues that no interpretation of mathematics (available at that time) offers a satisfactory package of epistemology and semantics; it is possible to explain mathematical
truth in a way that is consistent with our syntactico-semantical treatment of truth in non-mathematical language, and it is possible to explain our knowledge of mathematics in terms consistent
with a causal account of epistemology, but it is in general not possible to accomplish both of these objectives simultaneously. He argues for this on the grounds that an adequate account of truth
in mathematics implies the existence of abstract mathematical objects, but that such objects are epistemologically inaccessible because they are causally inert and beyond the reach of sense
perception. On the other hand, an adequate epistemology of mathematics, say one that ties truth-conditions to proof in some way, precludes understanding how and why the truth-conditions have any
bearing on truth.
Mathematics is seen as a real thorn in the side. What are the concerns of fruitfulness, when we’re caught in the dilemma produced by our knowledge of 2 + 2 = 4? Either numbers exist (where? how do we
come to know about them?), or we’d better be able to rewrite the sentence as a logical derivation from definitions and logical principles.
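To make the second horn concrete, here is the sort of derivation meant, in the standard textbook form (not a quotation from anyone in this debate): with addition defined recursively by m + 0 = m and m + S(n) = S(m + n), and the numerals 1 = S(0), 2 = S(1), 3 = S(2), 4 = S(3), one computes

$$2 + 2 = 2 + S(1) = S(2 + 1) = S(2 + S(0)) = S(S(2 + 0)) = S(S(2)) = S(3) = 4.$$

The dispute is then over the status of the definitions and logical principles such a calculation relies on.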
Posted by: David Corfield on January 7, 2008 10:52 AM | Permalink | Reply to this
Tao, Fruitful or Truthful; Re: Two Cultures in the Philosophy of Mathematics?
“Fruitfulness is key. If a neologicist doesn’t care about the fruitfulness of their abstraction principles then they’ve rejected an enormously important part of Fregean thinking.”
An extremely interesting statement!
Did not Terry Tao list this as one of the (nonexclusive) markers of “good mathematics”?
Mathematics under the Microscope
Monday, April 23, 2007
Fruitful or Truthful
Reuben Hersh kindly allowed me to include in the discussion fragments of our e-mail dialogue on philosophy of mathematics. Here it goes:
Reuben: I understand your comment that the “social constructivist” philosophy may not be very “fruitful”. When I wrote a chapter in What is Math, Really? about criteria for a philosophy of math, I
did not think of including fertility. Maybe I should have. I did stress truthfulness. The two do not seem to be identical. By no means do I mean to suggest that you are a Platonist, but I see an
analogy. There is a general belief that Platonism can be helpful for problem solving, but that is not a very strong reason to believe it is true.
Alexandre: You have raised a very interesting point: I never thought about applying the concept of “truthfulness” to philosophy. Philosophy is not a natural science. Was existentialism truthful?
Philosophy can be fruitful, however. Existentialism, for example, was fruitful because it generated a great literature; hence it touched something in human soul, which is a social practice proof of
its fruitfulness.
I myself had strictly Vygotskian upbringing; however, Vygotskianism in mathematics appears to generate more paradoxes than give answers. You have mentioned one of these paradoxes: indeed, it is an
established fact of centuries of social practice of mathematicians that Platonism is useful for problem solving. Moreover, Platonism was more fruitful than formalism (although perhaps not
considerably more fruitful since formalism led to computers) and considerably more interesting than intuitionism and finitism.
See also:
Ten Mathematical Essays on Approximation in Analysis and Topology
Edited By
J. Ferrera, Universidad Complutense de Madrid, Spain
J. Lopez-Gomez, Universidad Complutense de Madrid, Spain
F.R. Ruiz del Portal, Universidad Complutense de Madrid, Spain
This book collects 10 mathematical essays on approximation in Analysis and Topology by some of the most influential mathematicians of the last third of the 20th century. Besides containing the ultimate results in each of their respective fields, many of the papers also include a series of historical remarks about the state of mathematics at the time their authors found their most celebrated results, as well as some of the personal circumstances surrounding them, which makes the book particularly attractive for all scientists interested in these fields, from beginners to experts. These gems of mathematical intra-history should delight many forthcoming generations of mathematicians, who will enjoy some of the most fruitful mathematics of the last third of the 20th century presented by its own authors.
The Philosophy of Mathematics: An Introductory Essay
by Stephan Körner - 1986 - Mathematics - 198 pages
Its failure suggests a modification of the original programme and is a source of much fruitful mathematics. But the logical status of the notion of an …
Not, of course, to be confused with:
ERIC #: EJ090184
Title: Fruitful Mathematics
Authors: Ranucci, Ernest R.
Descriptors: Algebra; Diagrams; Discovery Learning; Geometric Concepts; Instruction; Mathematics Education; Problem Solving; Secondary School Mathematics; Teaching Methods
Source: Mathematics Teacher, 67, 1, 5-14, Jan 74
Abstract: To discover a generalization from a pattern of data, students need to know how to analyze the data. This is a description of how high school students can find a formula to predict the
number of spherical fruits in a piling by using differences. (JP)
Posted by: Jonathan Vos Post on January 5, 2008 7:39 PM | Permalink | Reply to this
Fruitful Fermat and Torricelli; Re: Tao, Fruitful or Truthful; Re: Two Cultures in the Philosophy of Mathematics?
Sorry. I left off the two specific Tao citations. The first, on what is good mathematics, is probably familiar to many here already.
The second is:
PCM article: Generalised solutions, where PCM = Princeton Companion to Mathematics.
The Companion also has a section on history of mathematics; for instance, here is Leo Corry’s PCM article “The development of the idea of proof“, covering the period from Euclid to Frege. We take for
granted nowadays that we have precise, rigorous, and standard frameworks for proving things in set theory, number theory, geometry, analysis, probability, etc., but it is worth remembering that for
the majority of the history of mathematics, this was not completely the case; even Euclid’s axiomatic approach to geometry contained some implicit assumptions about topology, order, and sets which
were not fully formalised until the work of Hilbert in the modern era. (Even nowadays, there are still a few parts of mathematics, such as mathematical quantum field theory, which still do not have a
completely satisfactory formalisation, though hopefully the situation will improve in the future.)
Following Tao’s link gets a wonderful PDF, which has passages on-topic here such as:
[p.5] “Examples of how the classical Greek conception of geometric proof was essentially followed but at the same time fruitfully modified and expanded are found in the works of Fermat, for example
in his calculation of the area enclosed by a generalized hyperbola…”
[p.6] “The rules of Euclid-like geometric proof were completely contravened in proofs of this kind and this made them unacceptable in the eyes of many. On the other hand, their fruitfulness was
highly appealing, especially in cases like this one in which an infinite body was shown to have a finite volume, a result which Torricelli himself found extremely surprising…”
What, I wonder, will 22nd century editions of Princeton Companion to Mathematics say about n-Category Theory? String Theory? QFT? Loop Quantum Gravity? Ed Witten? Greg Chaitin? Steve Wolfram? Richard
Feynman? Experimental Mathematics? Kurt Gödel?
Posted by: Jonathan Vos Post on January 5, 2008 9:27 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
A famous quip says something to the effect that ‘Art historians are to artists as ornithologists are to birds.’
The version I know is supposedly due to Feynman,
Philosophy of science is about as useful to scientists as ornithology is to birds.
Presumably this was meant to say not useful at all. If so, then for me philosophy or science (or both) has gone astray.
On the other hand, in view of habitat destruction, perhaps birds do need ornithologists.
Posted by: David Corfield on January 7, 2008 4:53 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
What does philosophy say about QFT?
If I remember correctly, a consistency-proof of QFT exists only for uninteresting cases, e.g. when fields don’t interact and nothing happens?
Posted by: Thomas Riepe on January 5, 2008 12:30 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
You could try the papers on this archive.
Posted by: David Corfield on January 5, 2008 2:35 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Thanks! Is there any investigation in philosophy of the way mathematicians really think? E.g., if mathematical thinking makes use of something like a priori ideas, how is that use distributed, and how does such a distribution change? (I found only accidentally that N. Hartmann suggested investigating “categorial dynamics”, roughly like the changes of opinion expressed here, but there may be others.) Another question I wonder about is whether Chaitin’s number is considered relevant by philosophers.
Posted by: Thomas Riepe on January 5, 2008 8:11 PM | Permalink | Reply to this
Wolfram, Chaitin, Leibniz; Re: Two Cultures in the Philosophy of Mathematics?
My recollection is that at last year’s 7th International Conference on Complex Systems, Steve Wolfram introduced Greg Chaitin, who traced his ideas back to both Gödel and, he emphasized, Leibniz, his favorite philosopher.
Chaitin’s web site gives Leibniz quotations to support Chaitin’s claim that Leibniz was the father of Complexity Theory in a very modern sense, albeit for theological reasons intertwined with notions of “fecundity” and fruitfulness in Mathematical Physics.
I spoke at length with Wolfram, Chaitin, and James Gleick about this, and how it related to Feynman’s never-published critique of that specific Leibniz notion (about a finite number of Physical Laws or an infinite number).
But I have no idea what “mainstream” Philosophy says about Chaitin. Anyone here know?
Posted by: Jonathan Vos Post on January 5, 2008 9:40 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
- “the way mathematicians really think”
I just found David Corfield’s text on his book on that. But why the restriction to what “leading mathematicians of their day” did? I would guess that it works better to investigate mathematicians in general and only later to define subgroups, e.g. in such an approach to the development of musicians. Asking how these “notions, conceptions, intuitions, and so on” are developed would fit the mentioned idea of Hartmann, and a funny idea for accelerating the creation of scientific notions has been pursued by Gunkel. Apparently Gunkel feeds his creations into normal science, so one could perhaps see there, as in a laboratory, how new notions and concepts are processed in the scientific community. “Why do we persist in teaching certain ways of thinking” - when reading books about life in the Middle Ages, nearly everything appears extremely strange, with the exception of texts on medieval universities. Did science develop back to that? E.g., when John collects online books, I wonder whether that makes sense, because now, as in the Middle Ages, reading skills seem to develop only at the end of university studies, and therefore such collections are useless for the intended readership. “Connectivity of mathematics” - could this be just an illusion, because we know only very little? Chaitin’s number seems to imply that.
Posted by: Thomas Riepe on January 6, 2008 7:52 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
David Corfield’s text links to an article about Cartier, who is quoted as saying that Bourbaki texts are “a disaster” as textbooks. But Deligne seems to have studied mathematics with them while still in school. What do you think of EGA as a textbook?
Because you mention Greek philosophy and Plato, here is the link to Gyburg Radke, who wrote some very interesting texts about Platonism and its concept of number.
Posted by: Thomas Riepe on January 6, 2008 8:11 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Concerning how mathematicians (and philosophers?) really think, here is a fascinating report about new ways to observe the semantic localisation of concepts in the living human brain. Apparently one could look at how mathematicians’ brains process mathematical terminology, relate this to the existing (according to the report, very extensive) data, and look for interesting individual differences and how they develop.
Posted by: Thomas Riepe on February 17, 2008 11:10 AM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Another fascinating insight into mathematical brains is in Ioan James’ article in the Bulletin of the Royal Society of Medicine.
Ioan writes further:
Simon Baron-Cohen played a role, created quite a stir and led to my book. MF had observed that his Aspie patients were interested in mathematics and was writing a book on the subject. He came to see me and proposed a collaboration. Meanwhile I had been lecturing on the subject to psychologists and mathematicians at various places, including Philadelphia, and have been in correspondence with various Aspie mathematicians. Also Simon B-C’s research group have investigated the connection between autism and mathematics. The Royal Society are running a conference in the autumn which should
throw further light on the matter.
I’m looking forward to the whole book.
Posted by: jim stasheff on February 17, 2008 1:34 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Thanks Jim, would you please post the link?
Here is an article on musicians’ brains.
Posted by: Thomas Riepe on February 18, 2008 5:57 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Thomas Riepe wrote:
If I remember correctly, a consistency-proof of QFT exists only for uninteresting cases, e.g. when fields don’t interact and nothing happens?
No, the situation is not that bad! On my website you can find a free book describing a mathematically rigorous construction of interacting quantum fields in 2d spacetime. This is old 1970’s work of
Irving Segal and his student Edward Nelson. A different approach to solving the same problem can be found in the famous book by Glimm and Jaffe: Quantum Physics: A Functional Integral Point of View.
Later, people constructed interacting quantum field theories in 3d spacetime — for a good review, try the book by Vincent Rivasseau, From Perturbative to Constructive Renormalization.
Things get really hard in 4 dimensions. If you can first show that $SU(2)$ Yang–Mills quantum field theory makes sense in 4d, and then show that the lightest particle has mass $\gt 0$, you will win a
million dollars.
However, I’ve heard that people have recently constructed other interacting quantum fields in 4 dimensions. I’m waiting for more details before writing about this in This Week’s Finds — it would be a
big deal.
Posted by: John Baez on January 10, 2008 2:50 AM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
If I remember correctly, a consistency-proof of QFT exists only for uninteresting cases, e.g. when fields don’t interact and nothing happens?
No, the situation is not that bad! On my website you can find a free book describing a mathematically rigorous construction of interacting quantum fields in 2d spacetime.
The question also depends on what exactly one counts as a QFT. There are different definitions floating around and surprisingly little work has been done on trying to relate them.
The approach called local- or algebraic quantum field theory, which adopts the definition:
A QFT is a certain cosheaf of algebras.
has been strongly motivated by the desire to understand the kind of 4-dimensional quantum field theory relevant for the real world.
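For orientation, a minimal sketch of the Haag-Kastler-style axioms usually meant by such a net of algebras (a standard paraphrase, not the precise formulation of any particular paper): to each suitable open region $O$ of spacetime one assigns an algebra of observables $A(O)$, subject to isotony and locality,

$$O_1 \subseteq O_2 \;\Rightarrow\; A(O_1) \subseteq A(O_2), \qquad [A(O_1), A(O_2)] = 0 \ \text{ whenever } O_1, O_2 \text{ are spacelike separated.}$$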
A certain disdain among some of its leading practitioners can be felt (and heard in their talks) towards efforts to study QFTs in dimensions other than 4 and for fields not seen in nature.
Therefore it is both remarkable and a little ironic that AQFT has to date – and that’s probably the statement Thomas Riepe had in mind – been able to handle, of all 4-dimensional QFTs, only the free field (at least I have never seen anything else discussed in 4 dimensions), while the axiomatics of AQFT has turned out to be a strikingly powerful tool for the analysis of two-dimensional QFT, in particular of 2-dimensional conformal quantum field theory.
In two dimensions, it turns out that local nets of von Neumann algebras are essentially an alternative to vertex operator algebras (even though here, too, there has been surprisingly little work (I
know one single paper) concerned with understanding what exactly the relation is).
There are cool powerful classification theorems for 2-dimensional CFTs using AQFT, the kind of stuff you need to do things like deciding if Witten’s recent conjecture about CFTs of central charge 24
is correct.
While this is true, one must be careful: according to the definition of QFT which I think is the right one, the data provided by a solution to the AQFT axioms is not what is called a “full” QFT.
Rather, in the CFT context at least, it is what is called a “chiral” QFT.
This says, and that’s not surprising if you look at the axioms, that if you regard a quantum field theory as a global thing which allows you to assign amplitudes (morphisms in some category, really),
not just to local patches of your parameter space, but to arbitrary topologically nontrivial parts of parameter space (i.e. if you really regard QFT as a functor on cobordisms), then a solution to
the AQFT axioms gives you only necessary, but not sufficient, information, in general, to construct that assignment.
This is a problem that people haven’t even tried to address in more than two dimensions, as far as I can tell. But for 2-dimensional CFT, there is a powerful theory by Fjelstad, Fuchs, Runkel and
Schweigert which entirely solves the problem of constructing a full 2D CFT from a solution to the AQFT axioms – but only in the special case that this solution to the AQFT axioms happens to be what
is called “rational”.
(This solution, by the way, is very beautiful: the statement is that the full CFT is uniquely fixed by the Morita class of a Frobenius algebra object internal to the representation category of the
local net of algebras).
And even then, one has to be more careful: to really be able to turn the crank on the general FFRS solution, one needs to have first computed something called the “conformal blocks” of the local net
of observables. Which has been done in some cases. But only in comparatively few.
Nothing is ever easy.
Posted by: Urs Schreiber on January 10, 2008 7:53 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Many thanks for the answers! Another question I’m curious about is what philosophy of physics says about the compatibility of general relativity and quantum theory. E.g. if I remember correctly,
there exist situations where GR predicts “timelike loops”. Would that not conflict with the unpredictability of decay-events etc. for any observer?
Posted by: Thomas Riepe on January 18, 2008 4:14 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Here and here are Fesenko’s interesting thoughts on a possible use of model theory in physics and mathematics.
Posted by: Thomas Riepe on January 8, 2008 8:45 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
I realize this topic is a bit old, but I just discovered it, and thought that others might like to hear directly from one of the neo-logicists who are being discussed.
Anyway, a few points worth making:
In the original post David Corfield writes:
“By contrast, a much more orthodox philosophical approach to mathematics in the English-speaking world, well represented in the UK, is to address the question of whether mathematics is reducible to
logic. To gain an idea of the current state of play here, you can take a look at What is Neologicism? by Linsky and Zalta.”
This is misleading at best. Since Russell’s paradox shattered Frege’s own hopes of reducing arithmetic (and real and complex analysis) to logic, very few philosophers have taken seriously the idea
that mathematics is reducible to logic. The reason is simple: Logic (as understood today) has no ontological commitments, while mathematics has many.
This brings up a crucial issue: Neo-logicism (at least, the standard variant of it, as espoused by myself, Crispin Wright, Bob Hale, etc,) is not, strictly speaking, a version of logicism at all.
There is no claim that mathematics is reducible to logic, since the abstraction principles used (e.g. Hume’s Principle) are not logical truths. Abstraction principles are viewed as being definitions
of a certain kind. As a result, their consequences can be known a priori, etc. But the neo-logicist claim is not, and never has been, that mathematics can be reduced to logic alone (misleading
nomenclature notwithstanding).
[It should be pointed out that there are alternative views which have also adopted the “logicist” or “neo-logicist” label, such as Neil Tennant’s project and the object-theory version proposed by Zalta and Linsky. In the philosophical literature, however, “neo-logicism” (without any qualification) is generally understood to refer to the views of Wright and Hale (and myself). Linsky and Zalta’s characterization of this version of neo-logicism is, unfortunately, misleading in this respect. This confusion has persisted recently, making it necessary to distinguish the ‘Scottish School’ from other variants of neo-logicism, as in the Ebert/Rossberg paper.]
In a later post David also notes that he is unclear regarding “whether it matters to them whether their logical reconstructions of portions of mathematics get to the conceptual heart of those
portions.” The reason for the unclarity is that there is substantial disagreement between neo-logicists (of the Scottish school) on exactly this issue. The principle in question is this (quoted from
C. Wright’s “Neo-Fregean Foundations for Real Analysis”):
“Frege’s Constraint: That a satisfactory foundation for a mathematical theory must somehow build its applications, actual and potential, into its core – into the content it ascribes to the statements
of the theory – rather than merely ‘patch them on from outside.’”
Wright rejects Frege’s Constraint as a general requirement on neo-logicist reconstructions of mathematical theories, arguing that whether it applies to a given theory depends on the nature of that
theory (and, in particular, on the nature of its applications). In particular, he requires that our reconstruction of arithmetic satisfy it, but not our reconstruction of analysis. Hale, on the other
hand, seems to require that Frege’s constraint be met across the board, and he demonstrates in some of his work how an account of analysis can be formulated within the neo-fregean framework that
arguably meets the constraint. (I am also sympathetic to Frege’s Constraint, for what it is worth).
Finally, there is a recurring idea (both in David’s book and in discussion) that typical philosophers of mathematics are ignorant of, or are too lazy to learn, ‘real’ mathematics and instead
concentrate on three rather artificial areas: arithmetic, analysis, and set theory. The formulation of this idea is particularly offensive in the following quote from John Baez’s discussion of
David’s book:
“Alas, too many philosophers seem to regard everything since Goedel’s theorem as a kind of footnote to mathematics, irrelevant to their loftier concerns (read: too difficult to learn).”
I think there are a number of misunderstandings regarding the way philosophy of mathematics has progressed that are at work here.
First off, it is clear that most philosophers of mathematics of any significance know a lot more mathematics than just these three areas. And there is definitely widespread agreement that the more
math one knows (including recent mathematics) the better (and not just areas one might classify as ‘mathematical logic’). So why does discussion of these areas not pop up more within philosophy of
mathematics? The underlying cause, as Baez noted, has to do with Gödel’s theorem, I think, but not for the reasons Baez suggests.
It is not that mathematics later than this is irrelevant, uninteresting, or too hard. Instead, the problem is that Gödel’s theorem (and various other limitative results) has shown the problem of
accounting for the metaphysics and epistemology of mathematics to be much, much more difficult than it had previously appeared to be (and it appeared hard enough as it was).
As a result, philosophers of mathematics have restricted their attention to a small number of conceptually simple ‘test cases’ (arithmetic, analysis, and set theory). The underlying idea would seem to
be that if we cannot adequately handle these cases, then there is little point in trying to tackle more complicated mathematical structures and theories. [Note that although ZFC is complicated, the
intuitive notion of ‘set’ is rather simple.] In addition, some philosophers (but not all) think there is something particularly special and central about these specific theories (see the comments
regarding Frege’s constraint above), and this might also be contributing to this emphasis on a few mathematical domains.
Additionally, I think that many philosophers of mathematics believe that if we can handle these three cases, then we should be able to deal with the rest of mathematics in a roughly similar manner.
This might be optimistic (I suspect it is), but I certainly don’t think there is any knock-down argument to the effect that concentrating on these three domains is causing the thinkers in question to
miss out on important aspects of the nature of mathematics (after all, pretty much all mainstream mathematics can be modeled in the powerset of the reals).
Posted by: Roy T Cook on February 16, 2008 12:29 AM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Roy, could you clarify a couple of points?
You quote David Corfield and then write:
This is misleading at best.
Some people might find the “at best” provocative. Many things are worse than “misleading”, and some such things are accusations that aren’t conducive to productive discussion. It might help to keep
the temperature down if you could clarify what you meant. (Or, of course, it might raise the temperature — but at least we’d know what you had in mind.)
At the end of the comment, you write:
pretty much all mainstream mathematics can be modeled in the powerset of the reals.
Can you explain what this means? I can’t see a way of understanding it that both makes it true and interprets the word “modeled” to mean anything similar to what I’d expect it to mean.
For example, suppose it means that pretty much all of the sets encountered in mainstream mathematics are in bijection with some subset of the reals. Suppose I accept this as true. Then it’s very far
from what I would take “the modelling of mathematics” to mean. A model is presumably meant to be an accurate portrayal of reality — something that captures all of its essential aspects. Simply
keeping track of which sets get used seems to me to be a million miles from the goal of capturing the essence of mathematics.
I’m not saying that this is what you mean. Probably it’s not. I’m just trying to explain what it is that I don’t understand about your statement. I guess you mean “modeled” in some technical or
semi-technical sense; but what is it?
Posted by: Tom Leinster on February 17, 2008 1:36 AM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Yeah, I should have been more careful here (in two ways - this is of course the danger of typing out a quick rant on the internet, which causes me, like others, to perhaps be less careful than I
should be).
Anyway, the “misleading at best” comment was meant in exactly the ‘provocative’ manner that you suggest. The problem, however, is not David’s in particular. Instead, I think that there is a
widespread misunderstanding regarding what many philosophers of mathematics are up to - especially us neo-logicists. The terminology doesn’t help, of course, since neo-logicism is NOT a new variant
of old-fashioned logicism. But the Zalta and Linsky paper doesn’t help either, since they characterize the debate between their own view and Wright/Hale style neo-logicism as a debate about what the
most promising form of logicism is (i.e. which project has a better claim to having reduced mathematics to logic). Neo-logicists, however, never claimed that they were reducing mathematics to logic.
It is particularly telling that the titles of two of the best and most influential articles in the literature on this topic are both “Is Hume’s Principle Analytic?” (one by Crispin Wright and the other by George Boolos). Notice that the titles of both papers make it clear that the issue is the analyticity of Hume’s Principle, not its status as a truth of logic!
The “misleading at best” comment was reflecting, not any anger or anything at particular authors posting here, or discussed here, but the more general fact that outside of the few specialists who
work on this topic very few philosophers of mathematics seem to ‘get’ this point. All too often I tell people (philosophers of math) I work on neo-logicism and they say “well, that’s sort of
pointless - after all, logicism is dead.”
Regarding the comment about the powerset of the reals - I did, in fact, mean the rather trivial embedding notion that you suggest (i.e. that the vast majority of mathematical structures studied in
mainstream mathematics are isomorphic to some substructure of the powerset of the reals). While such embeddings are typically not all that illuminating, the point I was (perhaps none too clearly)
trying to make is that, given such embeddings and playing Devil’s advocate for the moment, I don’t see any serious reason to suspect that limiting attention to arithmetic, analysis, and set theory
will cause philosophers of mathematics to ‘miss’ crucial aspects of mathematics that ought to be incorporated into their accounts. The argument, in more detail, might go something like this: Assume
that philosophers of mathematics successfully produce accounts that capture the ‘essence’ (your word!) of arithmetic, analysis, and set theory. Then, since the rest of mathematics can be embedded
into the powerset of the reals, these theories are not, in some sense, all that different from the theories we have already accounted for, and thus it shouldn’t be too hard to generalize our account
to other areas. Keep in mind that I also suggested I was a bit skeptical of moves like this. But I can understand the underlying persuasiveness of arguments along these lines.
Posted by: Roy T Cook on February 18, 2008 6:47 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
All too often I tell people (philosophers of math) I work on neo-logicism and they say “well, that’s sort of pointless - after all, logicism is dead.”
Well that is ignorance on their part if they say that.
But you might also be hearing some part of another complaint that bears on what Tom Leinster and you are beginning to discuss about modelling mathematics within the powerset of the reals.
What just about any mathematician (excluding some proof theorists) feels about projects like yours which aim to describe their discipline as the deduction of statements from a set of axioms/
definitions/stipulations is that some part of the essence of mathematics has simply been overlooked.
This raises the question of what is not being captured, and the further question of whether that which is overlooked is the proper subject matter of philosophy.
To give an example, deciding which way to reformulate or extend a mathematical concept is taken by mathematicians to be a process which may be conducted rationally. Where then does the rationality
reside? Following Lakatos, I take this to be a question philosophy can and should address.
Posted by: David Corfield on February 19, 2008 10:28 AM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Roy writes:
…there is a recurring idea (both in David’s book, but also in discussion) that typical philosophers of mathematics are ignorant of, or are too lazy to learn, ‘real’ mathematics and instead concentrate on three rather artificial areas: arithmetic, analysis, and set theory.
I certainly never meant to give any impression of laziness. Or am I to parse that ‘or’ in your sentence to allow the latter’s truth if in my book I only charged philosophers with ignorance?
But even then, ignorance was never the charge I was making. What I was attempting to say there was that I think philosophy has gone astray with regard to mathematics in that it largely fails to ask
the right kinds of question of it. Given the questions you do choose to ask, I can see perfectly that arithmetic, analysis and set theory would be enough.
Meta-philosophical discussion is never an easy business. But Brendan and I genuinely hope that in the conference we plan we can learn to see what it is in the questions neo-logicists ask that forces
them upon you. We also hope that anyone attending would come open to the proposal that philosophy might pose new questions.
Perhaps we might find a point of contact in
Frege’s Constraint: That a satisfactory foundation for a mathematical theory must somehow build its applications, actual and potential, into its core – into the content it ascribes to the
statements of the theory – rather than merely ‘patch them on from outside’.
I’m not really sure what this means. I wonder what would constitute success for you in this project.
I’ve mentioned that I’m fond of Frege’s comment:
[Kant] seems to think of concepts as defined by giving a simple list of characteristics in no special order; but of all ways of forming concepts, that is one of the least fruitful. If we look
through the definitions given in the course of this book, we shall scarcely find one that is of this description. The same is true of the really fruitful definitions in mathematics, such as that
of the continuity of a function. What we find in these is not a simple list of characteristics; every element is intimately, I might almost say organically, connected with others.
Can you understand why a philosopher might be interested in this theme?
Posted by: David Corfield on February 18, 2008 10:30 AM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
First off, re: the first passage you quoted. I do think that the comment is accurate when parsed in the charitable manner you suggest, but it is misleading at best (Ha! I used the loaded phrase aimed
at myself!). Your own claim (if the reviews, etc. I have read are accurate) is merely that many philosophers of mathematics are ignoring ‘real’ mathematics to their detriment. You do, however, cite
Baez’s discussion approvingly. And he does, quite clearly, make accusations of laziness or lack of ability.
At any rate, as you note, the real issue is the sort of question that should be asked by philosophers of mathematics. I don’t want to speak for any other neo-logicists (all 2 of them, at least of the
Scottish School variety), but I personally would never claim that the questions asked by thinkers such as yourself were the wrong questions, or misguided. At the most, all I would claim is that we
should answer simple questions about the ontological and epistemological status of mathematical objects BEFORE moving on to more subtle questions about particular mathematical disciplines. In other
words, we need to know what it is we are talking about, and how we manage to talk about it, before we can start examining in detail how such talk plays a role in explanation, applications, etc. In
addition, so long as we think that the epistemology and ontology of mathematics generally is relatively uniform, and that arithmetic, set theory, and analysis are relatively typical cases, then, with
respect to these basic questions at least, restricting attention to these three areas is not harmful.
Now, I take it that the Frege quotation which you are fond of is, in fact, an expression of the general idea behind what we now call Frege’s constraint. The idea, applied to numbers, is that our
account of number should, all in one go, explain ALL of the important aspects of number talk, that is, that “every element [of the concept of number] is intimately, I might almost say organically,
connected with others”
Included amongst such aspects of (cardinal) number talk are:
(1) The idea that numbers are objects. (“Five is a number”)
(2) The idea that numbers can be ascribed adjectivally (“There are five apples.”)
(3) The natural applications of arithmetic.
Frege’s insight - the one retained by (Scottish School) neo-logicism - is that the idea underlying Hume’s Principle (that number is really a function from concepts to objects) can provide a natural,
straightforward account of all of these uses.
Now, there are certainly other aspects of the notion of cardinal number that one might want to consider. And, for other mathematical concepts, the list of important features to be explained might be
different. Nevertheless, if something like the Fregean project can be carried out for numbers (and it seems like it can), then this would seem to be a pretty strong argument in favor of exploring
whether, and how, the account can be carried out for other domains.
Posted by: Roy T Cook on February 18, 2008 7:23 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
You say
…our account of number should, all in one go, explain ALL of the important aspects of number talk, that is, that “every element [of the concept of number] is intimately, I might almost say
organically, connected with others”
You then mention three examples of number talk:
(1) The idea that numbers are objects. (“Five is a number”)
(2) The idea that numbers can be ascribed adjectivally (“There are five apples.”)
(3) The natural applications of arithmetic.
And then claim that Hume’s Principle can “provide a natural, straightforward account of all of these uses.” This principle says “for any concepts F and G, the number of F-things is equal to the number of G-things if and only if there is a one-to-one correspondence between the F-things and the G-things.”
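In the usual second-order symbolism (just restating the sentence above, writing $\# F$ for “the number of $F$-things” and $F \approx G$ for “there is a one-to-one correspondence between the $F$-things and the $G$-things”), Hume’s Principle reads
$$\forall F \, \forall G \, \big( \# F = \# G \;\leftrightarrow\; F \approx G \big).$$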
First then we need to see if there were other aspects of number talk which could prove more challenging to account for. For one thing it is noticeable just how many of mathematics’ most intricate
theories are used in number theory. There’s a question then of whether we can extricate simple arithmetic from the rest of mathematics.
To take a simple example, it is the case that any prime number of the form $4 n + 1$ is expressible as the sum of the squares of two natural numbers. Do we need to check in what sense Hume’s
principle explains this talk?
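(Concrete instances of that theorem, for readers who want one in front of them: $5 = 1^2 + 2^2$, $13 = 2^2 + 3^2$, and $29 = 2^2 + 5^2$.)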
Second, while considering the application of numbers, we might wonder if to thoroughly explain their use to count fruit we should say something about the world and our being in it as cognitive
agents. But perhaps all this work has been done by prior analysis of ‘concept’ and ‘object’ or ‘thing’.
Lastly, let’s note that Hume’s Principle sounds quite familiar to us here. It’s related to the process of decategorification from the category of (finite) sets to the set of cardinals. Perhaps then
we can shed some more light on related uses of number theory.
Let’s consider the statements 3 + 2 = 4 + 1 and 2 + 3 = 3 + 2. John Baez gives us an account of what higher category theory has to say about associativity and commutativity, and it leads to deep
waters. So the question arises of whether there is here some aspect of number talk which needs more than Hume’s Principle.
Posted by: David Corfield on February 20, 2008 1:58 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
I think that you are absolutely correct that the issue here (for the success of Scottish-school neologicists, at least) is whether there are “other aspects of number talk which could prove more
challenging to account for.”
Now, the fact that we use a lot of intricate mathematics when doing number theory (Wiles’ proof is the example that is always trotted out here) is relatively simple to understand. Second-order
arithmetic is categorical, so any truth stateable in the language of second-order arithmetic follows (semantically) from Hume’s Principle. Second-order logic is incomplete, however, so some of these
consequences are not provable. Moreover, many of the theorems that are provable from these resources will have horribly complicated proofs. Thus, the utility of bringing in additional resources is
obvious, since doing so may allow us to prove things we couldn’t prove before, or couldn’t prove at all. Nevertheless, there is no truth of arithmetic which is not ‘guaranteed’, in the appropriate
sense, by Hume’s Principle.
Also, regarding the worries about how counting applies to the real world, it is striking that we can count apples but not ‘waters’. The reason for this, however, on the neo-logicist picture, has
little to do with our role in the world, as cognitive agents (after all, the number of moons of Mars would still be two, even if no one had ever been around to count them). Instead, the difference
stems from the fact that APPLE, but not WATER, is a sortal concept. Scottish neo-logicists have worked hard on sorting out (pun intended) which concepts are sortal, and what criteria distinguish
sortal from non-sortal concepts, since abstraction operators such as “number of” arguably only apply to sortal concepts (and it is this fact that explains, among other things, their applicability). I
recommend the relevant chapters of Wright and Hale’s The Reason’s Proper Study (Oxford, 2001?) for details.
Regarding the idea that Hume’s Principle is just the “process of decategorification of the category of (finite) sets to the set of cardinals”, you note that Hume’s Principle sounds familiar. This
brings up an interesting question. Mac Lane studied in Göttingen with, among others, Bernays. So there is a good chance that he would have had at least a passing familiarity with Frege’s work, I would think. It would be interesting to know how much Fregean ideas influenced the early (and perhaps later) development of category theory.
Posted by: Roy T Cook on February 25, 2008 6:59 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
One bone of contention I see concerns what it is to explain or account for an aspect of number talk.
In the case of number theoretic results with what are taken to be good ‘explanatory’ proofs which go via extraneous constructions (the vast majority of contemporary number theory), it is debatable
whether merely being ‘guaranteed’ by Hume’s Principle suffices. As to how to settle this question, I’m not sure how to proceed. I believe it is important for philosophy to try to make sense of
‘explanatory’ in mathematics. And I don’t believe it is merely a psychological phenomenon. It’s something which points to the idea of carving concepts well, as revealed by what happens next in their
usage and extension.
Seeing a number as a function on the set of primes is a good idea.
I can see a rationale for Wright rejecting the Frege Constraint for analysis. Where you can with some plausibility believe that applications of numbers to the world have largely exhausted their range
with counting five apples, it seems unlikely that the story of the application of analysis is in any way near completion. Just look at discussions on this blog about smoothness.
On the other hand, as I suggested, it is possible to argue that Hume’s Principle doesn’t get to the heart of moving enumerated objects about in the world.
As for Fregean influence on Mac Lane, certainly the latter was very interested in philosophy. I don’t know how much the promotion of the idea of cardinal numbers as equivalence classes of sets owes to
Frege. Frege himself seems to have learned from Riemann about this kind of technique, according to Jamie Tappenden.
Posted by: David Corfield on February 27, 2008 12:31 PM | Permalink | Reply to this
Math, Chess, Music; Re: Two Cultures in the Philosophy of Mathematics?
I agree with David Corfield. What is it “to explain or account for an aspect of number talk”?
I go to colloquia and seminars at Caltech, in the Math, Applied Math, Physics, and CS departments, where “number talk” proliferates.
I even go for subdisciplines that are outside of my areas of understanding, in hopes of learning something.
I do usually learn something, and I seem to have acquired a reputation for asking superficially stupid questions which the speaker finds interesting. Sometimes I ask by explicitly channeling my
mentor Richard Feynman, complete with thick Brooklyn accent. He had a metaphysics of Mathematics versus Physics which is well-explained in his writings and recorded lectures.
It is most interesting to observe talks which give impressionistic “sketches” of proofs, complete with weird whiteboard scribblings and/or digital projections of beautiful elaborate computer graphics
and/or xeroxes of xeroxes of hand-written notes and/or literal hand-waving.
There is indeed a strong flavor of “extraneous constructions” and more. There’s sometimes a kind of pointing to the Platonic Ideal, a nod towards a reality external to the lecture hall, as if to say:
“Look into the infinite directly, and see that I am right, or at least on the right track.” Formally, this does not bear on proof, and yet it has a social function parallel to formal proof.
It reminds me of informal gatherings of Chess masters, international masters, and grandmasters I have attended. I can’t play adequately myself, but I have an old friend, Ben Nethercott, who is a
Tournament Director, even of the US Chess Open. I also socially am friends with the former U.S. Women’s Chess Champion.
They have conversations over a chessboard which are like bilingual discussions switching back and forth between languages. Here, one language is English (maybe with Russian or Chinese accent), and
the other language is on the board. Example:
“Yes, but she was bluffing when she [speaker moves a rook on the board and grins]. Because of course he could simply [takes a pawn en passant, shrugs]. But, on the other hand, the fianchetto bishop
[taps a bishop, makes a hand flick in a diagonal motion].”
Second Speaker: “No, no, you fall into the same trap. Because you see [wiggles the queen] and so [knocks over a knight], except of course [forefinger waves up and down, left and right, over a
particular rank and file].”
I am not being glib in comparing Number Talk and Chess Talk. And I further make a commutative diagram by referencing Music Talk. As a child I had the wonderful experience of attending many Young
People’s Concerts by Leonard Bernstein. This was music talk of genius. A small orchestra would play for a minute, interrupted by Lenny waving his hands, talking in apparent simple English,
illustrated by his making little burst of sound on a piano. Children could follow the Music Talk, to some extent. That was genius-level lecture in the same class as Feynman, I believe.
Mathematics, Music, Chess. The three domains of human life where a child prodigy can be world-class. Why? I wave my hands towards a large literature on the subject.
But how can Philosophers of Mathematics capture what we know about this, but cannot prove?
Posted by: Jonathan Vos Post on February 28, 2008 3:04 PM | Permalink | Reply to this
Re: Two Cultures in the Philosophy of Mathematics?
Regarding Mac Lane and Frege, I had the following from Colin McLarty:
Saunders’ fullest discussion of Frege is in “Mathematical Models: A Sketch for the Philosophy of Mathematics”, The American Mathematical Monthly, Vol. 88, No. 7 (Aug. - Sep., 1981), 462-472. This
consists of the same not very detailed things Weyl says about Frege in Philosophy of Mathematics and Natural Science (1949). Saunders probably got them from Weyl in Göttingen though by 1981 they were
also common knowledge. Two things specially:
Frege, Dedekind and later Russell founded arithmetic on set theory (Weyl pp. 11, 230).
Hilbert eventually axiomatized a formalist arithmetic (which merely concerns combinations of symbols) “unassailable even by the criticism directed against it by Frege” (Weyl p. 35).
These are the things I expect one heard in Göttingen in those days. But I am no Bernays scholar. And this is the view which survives in outline in Mac Lane’s Mathematics: Form and Function. There is
another thing he likely heard but paid less attention to:
Pasch and Frege gave the first clear accounts of definition by abstraction (like Hume’s principle). (Weyl p. 12)
Actually Weyl finds the principle of abstraction early in Leibniz, and also credits Helmholtz, but I doubt that Saunders had any interest in either of them. And Saunders early and late was very
interested in the practice of definitions by equivalence relations and I suspect that very interest drove all worry about the history of the idea out of his thinking.
I doubt Saunders looked at anything actually written by Frege, unless possibly Grundgesetze just enough to use it as a reference in his dissertation.
Posted by: David Corfield on February 28, 2008 8:10 PM | Permalink | Reply to this
Post-modern web-centric doing Math; Re: Two Cultures in the Philosophy of Mathematics?
Over at John Armstrong’s fine Math blog, I gave an abstract (accidentally omitting blogs) of how I actually do Mathematics, as opposed to how Philosophers contend that I should do Mathematics:
My default Math writing strategy is similar but slightly more post-modern:
On the average of more than once per day, over the past 5 years, find:
(a) simple but nontrivial solutions;
(b) to overlooked or shallowly-probed elementary problems;
(c) regardless of whether they need to be solved;
(d) deliver them as precisely and usefully formatted as possible;
(e) in an edited legitimate online venue such as the Online Encyclopedia of Integer Sequences, Prime Curios, or MathWorld;
(f) which links to dead-tree Math and science literature and online resources;
(g) which is date-stamped and has my email address so that people may contact me if interested, because the Killer App of the World Wide Web is Collaborationware;
(h) and consider that I have been starting with a very crude version 1.0; then
(i) iterating and deepening and collaborating optimally for the golden mean of my 2,500+ cardinality portfolio of Journal articles, Books, refereed International Conference papers, arXiv reprints,
letters to the editor, newspaper columns, science fiction stories with math content, poems about Math or Science, screenplays or teleplays about Math/Science, and other stuff that someone will
actually pay me to do (via salary, consulting fees, or grants).
Posted by: Jonathan Vos Post on February 19, 2008 7:27 PM | Permalink | Reply to this
Re: Post-modern web-centric doing Math; Re: Two Cultures in the Philosophy of Mathematics?
Thanks for the hat-tip, JVP. I was actually hoping that a philosopher of mathematics would look at my adaptation of Paul Graham’s design philosophy to mathematics research.
Now, where would I find a philosopher of mathematics? Preferably one who’s just gotten tenure so he doesn’t have to worry about job security from looking at my silliness, while being young enough to
find my silliness entertaining.
Posted by: John Armstrong on February 19, 2008 11:05 PM | Permalink | Reply to this
Re: Post-modern web-centric doing Math; Re: Two Cultures in the Philosophy of Mathematics?
One interesting phenomenon that apparently occurs to some extent in most manufacturing, but particularly in computing, where getting interfaces between “components” (of whatever sort) right is so crucial, is “first mover advantage”. This is basically the observation that reluctance to significantly change the “brand” of an installed system, partly just due to general “organisational inertia” but particularly because of the aforementioned complexity of interfaces, means that if you get something “workable” into the marketplace first, you’ll generally force any competition into being minor competitors. This applies even if your competitors are thinking slowly and trying to get things “right”.
This is partly why iterative refinement is emphasised in writings such as Paul Graham’s. The interesting question if you’re trying to apply this to mathematics: is it possible that in mathematical development a set of definitions/theory/notation/viewpoint/whatever, whilst being completely “correct”, might be suboptimal and yet, by virtue of being first, crowd out more carefully thought out, more optimal definitions/theory/notation/viewpoint/whatever?
Clearly publishing quickly is advantageous to an individual researcher, but does the field as a whole have things that are significant “first mover artifacts”?
Note that this is different from the “paradigm shift” idea, in that those are supposed to happen when the weight of newly observed things unexplained by the old theory becomes too great. Here
everything can be dealt with by both setups, but just much more “conveniently” in the one which came historically second (and thus lost out). You often see books referring to “using standard notation
to avoid being confusing”, but anything deeper?
Posted by: bane on February 20, 2008 8:23 AM | Permalink | Reply to this
Re: Post-modern web-centric doing Math; Re: Two Cultures in the Philosophy of Mathematics?
Firstly, I think that good mathematics will eventually push out “suboptimal”, or at least come to coexist. If it’s really a superior viewpoint, it will eventually prove itself.
On the other hand, I’m not in a position to deny the real-world consequences of research styles. If I publish fewer, but more polished or “complete” papers, my publication list looks thin. I’m less
likely to make it past the first cut of any given hiring process.
What I’m thinking would be a good adjustment in strategy is not to rush in with a different idea, but for me to publish papers I’m currently viewing as “incomplete”. Get out a paper that works over
an algebraically closed field, and later come back to do it over more general fields, and then commutative rings. Three papers instead of one, and all incremental refinements of the basic idea that’s
there in the first.
Posted by: John Armstrong on February 20, 2008 3:00 PM | Permalink | Reply to this
Re: Post-modern web-centric doing Math; Re: Two Cultures in the Philosophy of Mathematics?
Get out a paper that works over an algebraically closed field, and later come back to do it over more general fields, and then commutative rings. Three papers instead of one, and all incremental
refinements of the basic idea that’s there in the first.
But preferably release all three papers at the same time, to prevent somebody reading the first one and beating you to the others!
Posted by: Jamie Vicary on February 20, 2008 4:26 PM | Permalink | Reply to this
Re: Post-modern web-centric doing Math; Re: Two Cultures in the Philosophy of Mathematics?
Well, that’s exactly why I delay release in a lot of situations. But the upshot is that I can’t get hired because I haven’t published enough. But if I published partial results faster, I may get hired.
Posted by: John Armstrong on February 20, 2008 5:15 PM | Permalink | Reply to this
Re: Post-modern web-centric doing Math; Re: Two Cultures in the Philosophy of Mathematics?
If $p$ is someone’s ‘natural’ publication rate — the rate at which they’d publish if they had a permanent job — then by combining these two strategies, after a career of time $T$, they will have
published $a p T - b$ papers, for their choices of positive constants $a$ and $b$. From what you’ve said, I guess you aim for $a \simeq b \simeq 3$. Time for a fun survey! Everybody else, what values
do you use?
Posted by: Jamie Vicary on February 20, 2008 6:16 PM | Permalink | Reply to this
Re: Post-modern web-centric doing Math; Re: Two Cultures in the Philosophy of Mathematics?
Jamie, you’re taking me way too literally here.
What I mean is that the first few points of Graham’s philosophy I’ve got. I work in an area not many people have taken explicit notice of, but which has things that need doing.
But left to my own devices I tend to build up papers that get as much generality as possible, even though I could get a useful special case down on paper more easily. What Graham’s method would
entail is getting those partial results down and published rather than holding out until I have more thorough results.
Posted by: John Armstrong on February 20, 2008 9:42 PM | Permalink | Reply to this
Re: Post-modern web-centric doing Math; Re: Two Cultures in the Philosophy of Mathematics?
“On the other hand, I’m not in a position to deny the real-world consequences of research styles.”
Note that I wasn’t putting any judgement on this. There’s a strong argument that there’d be much less advanced software around if we tried to “develop software properly” because the general pace of
innovation would be less. (First movers tend to get displaced by applications that redefine what the target task is, e.g., www essentially replacing gopher, etc.) I was more thinking that we’d like to
believe that the better viewpoint will win in mathematics, but if we’re taking the computer analogy seriously it’d be interesting to look at the question empirically.
As a trivial example, I know that writing functions to the left of their arguments is said to yield awkwardness because it’s natural to get “apply $h$ to $x$, then $g$ to the result, then $f$ to that result” for left-right language readers as $(((x)h)g)f$ rather than $f(g(h(x)))$, particularly with some of the kind of category theory stuff. Do we stick with the current convention just because reprinting all the textbooks, and mentally converting “pre-change” papers, is too much work for too little gain? Maybe, but there’s an argument that in the most common case of just $f(x)$ having the (more important) function before its argument makes psychological sense. (Incidentally, I sat through one undergrad lecturer who periodically unconsciously changed that convention because that’s how
he did research calculations; confused the heck out of me.)
But are there more significant examples?
Posted by: bane on February 21, 2008 4:57 AM | Permalink | Reply to this
“First Mover advantage” a myth; Re: Post-modern web-centric doing Math; Re: Two Cultures in the Philosophy of Mathematics?
Yes, I know what’s in the Economics/Game Theory textbooks. But my coauthor and I respectfully challenge the notion. The paper will explain all (cut and pasted from an email this weekend):
“Congratulations [Prof.] Philip [V.] Fellman [Southern New Hampshire University], Jonathan [Vos] Post:
We are pleased to announce that your submission to the 2006 International Conference on Complex Systems, Paper #089, ‘Complexity, competitive
intelligence and the “first mover” advantage’, has been accepted for the print proceedings of the conference.”
As to the protocol of paper-writing, Phil and I are still recovering from the 3 sessions we chaired at the 2007 International Conference on Complex Systems [I ran 2 tracks of Physics, he ran one on
Consciousness] and we are knee-deep in preparing our papers for the 2009 International Conference on Complex Systems.
And I’m trying to boil down 1,000 pages of notes to a 4-page paper for “Nature” with my coauthor Thomas L. Vander Laan, M.D., F.A.C.S., on the mathematical Physics of the small intestine modeled with
Fitzhugh-Nagumo equations and more, where I believe I have classified 8 distinct dynamics, including solitons. Solitary waves in peristalsis? That itself should be publishable. Am reading more deeply
into soliton theory, thank you Terry Tao!
And teaching several days a week as substitute teacher (Math, Physics, Computers, English) in Pasadena Unified School District high schools and middle schools.
And trying to keep my average of one submission per day to OEIS and Prime Curios and the like.
And gave a short funeral oration for the great Applied Mathematician, Parallel Computing Pioneer, and Bifurcation expert Herb Keller at Caltech yesterday. He was the equal to Feynman at Caltech for
exuberance, eccentricity, chutzpah, and brilliance. Both taught me that ultimately there are no experts – if a problem is interesting, you should fling yourself into it regardless of your label, and
become the expert as needed.
Posted by: Jonathan Vos Post on February 26, 2008 2:12 AM | Permalink | Reply to this
Re: “First Mover advantage” a myth; Re: Post-modern web-centric doing Math; Re: Two Cultures in the Philosophy of Mathematics?
I’m assuming since google found it there’s no problem with a direct link to the paper. It looks interesting and I haven’t had time to do more than skim it. I do however notice that it’s primarily a
theoretical economics analysis rather than looking at data gathered from real products/whatever. Certainly in anecdotal experience I’ve seen many examples of “psychological inertia” in choice of
software: once you’ve made a choice you simply can’t summon the energy to change unless a staggeringly better product becomes available. So I view the issue of first mover advantage, particularly
amongst real people, as still an open question.
But the paper looks very interesting and I’ll try to get around to it.
Posted by: bane on February 26, 2008 7:58 AM | Permalink | Reply to this
|
{"url":"http://golem.ph.utexas.edu/category/2008/01/two_cultures_in_the_philosophy.html","timestamp":"2014-04-18T06:06:56Z","content_type":null,"content_length":"156112","record_id":"<urn:uuid:f5cea2f9-9ed3-4449-afdf-6dde349687ce>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Issue with boxplot factors
I'm having an issue when trying to make side by side boxplots by factors. I've read several examples, but for some reason my plots are not displaying correctly. I think it's trying to plot a boxplot
for each value, even though I specified it as a factor.
I'm using the following code:
library(ggplot2)
samp.norm = rnorm(1000,0,1)
samp.exp = rexp(1000,1)
samp.unif = runif(1000)
samp = c(samp.norm,samp.exp,samp.unif)
dist = c( rep("norm",1000), rep("exp",1000), rep("unif",1000) )
DATA = as.data.frame(cbind(samp,dist))
DATA$dist= as.factor(DATA$dist)
p = ggplot(DATA, aes(x=factor(DATA$dist), y = DATA$samp)) + geom_boxplot()
r ggplot2 boxplot
2 Answers
The problem is your use of cbind() coerces the resulting object so that DATA$samp is a factor rather than numeric. The columns resulting from cbind need to have the same class, which means they go for the lowest common denominator class, in this case "character". This is exactly what data frames were invented to solve. Use
DATA = data.frame(samp, dist)
instead of the more complicated line you've got and it all should work.
As an aside, you can also use the much simpler
p=ggplot(DATA, aes(x=dist, y = samp)) + geom_boxplot()
rather than your second-last line. Once you have specified to ggplot() you are using DATA, you don't need to tell it where to find dist and samp ie no need for DATA$dist, just dist.
Also, as dist is already a factor, you don't need to specify factor(dist).
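To see the coercion concretely, here is a minimal illustrative session (made-up values, not from the question above):
x <- cbind(c(1.5, 2.5), c("a", "b"))    # cbind() builds a matrix, so every cell is coerced to a single class
class(x[, 1])                           # "character" - the numbers have become strings
d <- data.frame(samp = c(1.5, 2.5), dist = c("a", "b"))
class(d$samp)                           # "numeric" - a data frame keeps each column's own type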
Thank you! Makes sense now. I must have picked up some bad R programming habits somewhere along the way. – Jonathan Nov 20 '12 at 17:25
+1 to @PeterEllis. Note that you can also get even simpler than his suggestion with:
boxplot(samp~dist)
|
{"url":"http://stackoverflow.com/questions/13473409/issue-with-boxplot-factors","timestamp":"2014-04-23T11:22:34Z","content_type":null,"content_length":"67705","record_id":"<urn:uuid:fdc83ede-8526-4646-80a1-18cf20ddfe2b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Less Than Dot - Blog - Calculating Mean, Median and Mode with SQL Server
Calculating Median and Mode with SQL Server can be frustrating for some developers, but it doesn’t have to be. Oftentimes, inexperienced developers will attempt to write this with procedural programming practices, but set-based methods do exist.
Before showing you methods to calculate these values, it’s probably best to explain what they are.
Mean is another name for average. SQL Server has a built-in function to calculate this value.
To calculate the average, sum the data and divide by the number of rows. In this case, 1 + 2 + 5 + 5 + 5 + 6 + 6 + 6 + 7 + 9 + 10 = 62. 62/11 = 5.636363
Median represents the ‘middle’ value. To calculate the median by hand, you sort the values and return the value that appears in the middle of the list. If there is an odd number of items, there will
be exactly one value in the middle of the list. If there is an even number of items, you average the 2 values that are closest to the middle.
1  2  5  5  5  [6]  6  6  7  9  10
Since there is an odd number of rows, the row appearing in the middle of the list (bracketed above) contains your median value.
1  2  5  5  [5]  [6]  6  6  7  9
Now, there is an even number of rows. The median for this data set is (5 + 6)/2 = 5.5. Simply take the average of the 2 values appearing in the middle of the data set.
The mode for a data set is the item(s) that appear most frequently. To calculate this by hand, you write a distinct list of values and count the number of times a value appears. The value that appears the most is your mode.
Data Frequency
---- ---------
1    1
2    1
5    3
6    3
7    1
9    1
10   1
This data set is considered to be Bi-Modal because there are 2 values tied for the highest frequency. With this data set, the modes are 5 and 6.
For demonstration purposes, I will create a table variable and populate it with data. All code will be based on this table variable. The data type for the data column will be Decimal(10,5). If we
used an integer column, then we would need to concern ourselves with integer math issues, which is not the focus of this blog.
As I stated earlier, SQL Server has a built-in function for calculating the average. The Avg function will ignore rows with NULL. So the average of 1, 2, NULL is 1.5 because the sum of the data is 3
and there are 2 rows that are not NULL. 3/2 = 1.5.
Declare @Temp Table(Id Int Identity(1,1), Data Decimal(10,5))
Insert into @Temp Values(1)
Insert into @Temp Values(2)
Insert into @Temp Values(5)
Insert into @Temp Values(5)
Insert into @Temp Values(5)
Insert into @Temp Values(6)
Insert into @Temp Values(6)
Insert into @Temp Values(6)
Insert into @Temp Values(7)
Insert into @Temp Values(9)
Insert into @Temp Values(10)
Insert into @Temp Values(NULL)
Select Avg(Data)
From @Temp
-- 5.636363
To calculate the median, we will select the last value in the top 50 percent of rows, and the first value in the bottom 50 percent (all while ignoring NULL values).
To get the last value in the top 50 percent of rows….
Select Top 1 Data
From (
Select Top 50 Percent Data
From @Temp
Where Data Is NOT NULL
Order By Data
) As A
Order By Data DESC
To get the first value in the last 50 percent of rows…
Select Top 1 Data
From (
Select Top 50 Percent Data
From @Temp
Where Data Is NOT NULL
Order By Data DESC
) As A
Order By Data Asc
Putting it all together:
Declare @Temp Table(Id Int Identity(1,1), Data Decimal(10,5))
Insert into @Temp Values(1)
Insert into @Temp Values(2)
Insert into @Temp Values(5)
Insert into @Temp Values(5)
Insert into @Temp Values(5)
Insert into @Temp Values(6)
Insert into @Temp Values(6)
Insert into @Temp Values(6)
Insert into @Temp Values(7)
Insert into @Temp Values(9)
Insert into @Temp Values(10)
Insert into @Temp Values(NULL)
Select ((
Select Top 1 Data
From (
Select Top 50 Percent Data
From @Temp
Where Data Is NOT NULL
Order By Data
) As A
Order By Data DESC) + (
Select Top 1 Data
From (
Select Top 50 Percent Data
From @Temp
Where Data Is NOT NULL
Order By Data DESC
) As A
Order By Data Asc)) / 2
-- 6
To calculate the mode with SQL Server, we first need to get the counts for each value in the set. Then, we need to filter the data so that only the value(s) with the highest count are returned.
Declare @Temp Table(Id Int Identity(1,1), Data Decimal(10,5))
Insert into @Temp Values(1)
Insert into @Temp Values(2)
Insert into @Temp Values(5)
Insert into @Temp Values(5)
Insert into @Temp Values(5)
Insert into @Temp Values(6)
Insert into @Temp Values(6)
Insert into @Temp Values(6)
Insert into @Temp Values(7)
Insert into @Temp Values(9)
Insert into @Temp Values(10)
Insert into @Temp Values(NULL)
SELECT TOP 1 with ties DATA
FROM @Temp
WHERE DATA IS Not NULL
GROUP BY DATA
ORDER BY COUNT(*) DESC
As you can see, there are set based methods for calculating all of these values, which can be many times faster than calculating these values in a cursor.
25 Comments
1. Thank you! I run into this all the time and I never remember to take advantage of the top X percent phrase.
2. Very helpful, thanks for posting this!
3. this is helpful but I made changes to Mode calculation as following..is it any better?
SELECT TOP 1 DATA
FROM @Temp
WHERE DATA IS Not NULL
GROUP BY DATA
ORDER BY COUNT(*) DESC
4. @manish,
Strictly speaking, your query is not correct because data sets can be multi-modal, which your query does not accommodate.
For example, if you have the following values: 1,2,5,5,5,6,6,6,7,9,10
5 appears 3 times and 6 appears 3 times. This data set is considered multi-modal because there are multiple values that appear in the data set the same number of times.
That being said, a slight modification to your query will return the correct results and execute faster than the query I show.
SELECT TOP 1 WITH TIES DATA
FROM @Temp
WHERE DATA IS Not NULL
GROUP BY DATA
ORDER BY COUNT(*) DESC
Notice the “WITH TIES” addition to your query. This allows multiple rows to be returned where the top 1 value appears in multiple rows.
Thank you for your comment. I will change the query in the blog so that others may benefit from this (without needing to read these comments).
5. very useful, thx
6. Very helpful. One suggestion on the mode, however. You probably want a HAVING COUNT(*)>1 in there to account for the fact that some datasets may not have a mode.
7. Ben,
Could you give an example of a data set that doesn’t have a mode? Consider the data set:
Isn’t the mode [1, 2]?
8. Thank you for the great tip, helped a lot!
9. How would you get the mean, mode, median By a Group of some sort?
DECLARE @Temp TABLE(Id INT IDENTITY(1,1),Application_id int, DATA DECIMAL(10,5))
INSERT INTO @Temp VALUES(1,1)
INSERT INTO @Temp VALUES(1,2)
INSERT INTO @Temp VALUES(1,5)
INSERT INTO @Temp VALUES(1,5)
INSERT INTO @Temp VALUES(2,5)
INSERT INTO @Temp VALUES(1,6)
INSERT INTO @Temp VALUES(2,6)
INSERT INTO @Temp VALUES(2,6)
INSERT INTO @Temp VALUES(1,7)
INSERT INTO @Temp VALUES(2,9)
INSERT INTO @Temp VALUES(2,10)
INSERT INTO @Temp VALUES(null,NULL)
Want to get the mean, mode, median by the Application_Id in the above table.
10. are you sure mean equals average in statistics?
11. @greatbear302,
Yes. I am sure that “Arithmetic Mean” is the same thing as Avg. There are other types of means that statisticians use, like Geometric Mean, Harmonic Mean, etc…
The calculations that SQL Server perform with the AVG aggregate function is what statisticians refer to as Arithmetic Mean.
For more information regarding other types of Means:
12. Great Article and Helped me a lot to clearly understand the logic.
13. Hi, I can’t get the results I want; here below is my sample. Could you help please? I have
select 75012345 bc, cast(2.5 as decimal(12,2)) vl into _tbl
insert into _tbl select 75012345, 5.0
insert into _tbl select 75012345, 4.0
insert into _tbl select 75054321, 3.5
insert into _tbl select 75054321, 2.0
insert into _tbl select 75054321, 3.0
select top 1 with ties vl
from _tbl
where vl is not null
group by vl
order by count(*) desc
select bc, avg(vl) from _tbl group by bc
14. ivonna,
The code you are using is for MODE. Your sample data set is considered multi-modal because each value appears exactly once.
15. i have the following sample data and script cobbled together through forums and head scratching (fairly new to SQL)
I can get the median for the total of the figures but need to group it by month.
Any idea anyone??
CREATE TABLE #data (number INT, Month_Name nvarchar (10))
INSERT INTO #data
SELECT 15 as number, 'jan' as Month_Name union all
SELECT 26 as number, 'jan' as Month_Name union all
SELECT 47 as number, 'jan' as Month_Name union all
SELECT 25 as number, 'jan' as Month_Name union all
SELECT 15 as number, 'jan' as Month_Name union all
SELECT 20 as number, 'jan' as Month_Name union all
SELECT 22 as number, 'jan' as Month_Name union all
SELECT 40 as number, 'jan' as Month_Name union all
SELECT 98 as number, 'mar' as Month_Name union all
SELECT 15 as number, 'mar' as Month_Name union all
SELECT 48 as number, 'mar' as Month_Name union all
SELECT 75 as number, 'mar' as Month_Name union all
SELECT 25 as number, 'mar' as Month_Name union all
SELECT 40 as number, 'mar' as Month_Name union all
SELECT 44 as number, 'mar' as Month_Name union all
SELECT 40 as number, 'mar' as Month_Name union all
SELECT 5 as number, 'feb' as Month_Name union all
SELECT 2 as number, 'feb' as Month_Name union all
SELECT 3 as number, 'feb' as Month_Name union all
SELECT 4 as number, 'feb' as Month_Name union all
SELECT 5 as number, 'feb' as Month_Name union all
SELECT 2 as number, 'feb' as Month_Name union all
SELECT 3 as number, 'feb' as Month_Name union all
SELECT 4 as number, 'feb' as Month_Name union all
SELECT 7 as number, 'feb' as Month_Name
select * from #data
SELECT AVG(1.0E * number) ,Month_Name as mine
FROM (
SELECT number,Month_Name,
2 * ROW_NUMBER() OVER (ORDER BY number) - COUNT(*) OVER () AS y
FROM #data
)AS d
WHERE y BETWEEN 0 AND 2
group by Month_Name
drop table #data
16. Paul, try this:
Select MonthCount.Month_Name, Avg(Number)
From (
Select Month_Name, Count(*) As MonthCount From #Data Group By Month_Name
) As MonthCount
Inner Join (
select Number, Month_Name, Row_Number() Over (Partition By Month_Name Order By NUmber) As RowNumber
from #data
) As RowCounts
On MonthCount.Month_Name = RowCounts.Month_Name
Where 2 * RowNumber - MonthCount Between -1 and 1
Group By MonthCount.Month_Name
17. Thanks! One of the business managers thinks I’m a god now!
18. You never gave the sql for the MEAN.
19. Marge,
MEAN is exactly the same as Average, which I did show the code for. I labelled it AVERAGE instead of MEAN. Sorry for the confusion.
20. Interesting discussion with good links in this thread in MSDN forum
21. Hey guys, thanks for creating this blog. My question is that I’m a bit confused on the median coding. Can you please explain in detail, perhaps with an example?
22. Is there a way to do this if your version of SQL does not provide the TOP 50 Percent phrase?
23. Regarding the query for the median, it might be better to make the denominator 2.0 instead of just 2 so that it will work even for integer data types.
24. -- Here’s an alternative solution for MEDIAN using
-- ROW_NUMBER and AVG
DECLARE @TotalRecords int
DECLARE @Temp Table(Id int IDENTITY(1,1), Data int) --Decimal(10,5))
INSERT INTO @Temp VALUES(1)
INSERT INTO @Temp VALUES(2)
INSERT INTO @Temp VALUES(3)
INSERT INTO @Temp VALUES(4)
--INSERT INTO @Temp VALUES(5)
SELECT @TotalRecords = COUNT(*)
FROM @Temp
;WITH TEMP_WITH_ROW_NUMBER AS
(
SELECT Data, ROW_NUMBER() OVER(ORDER BY Data ASC) as [_RowIndex]
FROM @Temp
WHERE Data IS NOT NULL
)
SELECT AVG(Data * 1.0) as 'Median'
FROM TEMP_WITH_ROW_NUMBER
WHERE _RowIndex IN (
FLOOR((@TotalRecords + 1) / 2.0) -- MidLow
,CEILING((@TotalRecords + 1) / 2.0) -- MidHigh
)
25. Hi Guys,
I need to get Modes for two columns at a time, see this:
Product Catalogue Page
Considering this table structure, my expected result is :
Product Catalogue Page
Where Catalogue=1 and Page=3 are Modes.
Please help
|
{"url":"http://blogs.lessthandot.com/index.php/DataMgmt/DataDesign/calculating-mean-median-and-mode-with-sq/","timestamp":"2014-04-19T15:27:27Z","content_type":null,"content_length":"71015","record_id":"<urn:uuid:8788754f-9f9a-4336-88fe-7534d992a6da>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions - Re: Matheology § 224
Date: Mar 16, 2013 2:10 PM
Author: mueckenh@rz.fh-augsburg.de
Subject: Re: Matheology § 224
On 16 Mrz., 18:10, William Hughes <wpihug...@gmail.com> wrote:
> > Ok, I understand. Anyhow, if the number of lines is not empty, then
> > there must remain at least one line as a necessary line.
> Not a particular line. This is similar to
> the case where any set of lines with an unfindable
> last line has at least one "necessary" findable line.
> This line has a line number in the original
> list but we can choose the "necessary"
> findable line to have any line number we want.
No, it is always the last line. We call it unfindable or unfixable
because as soon as we have found it, it is no longer the last line.
> The fact that more than one findable line
> is "necessary" does not mean there must
> be a set of line numbers which is nonempty
> and has a least element.-
That is interesting. We have a set of natural numbers, so called line-
numbers of necessary findable lines. This fact does not mean that the
set of so called line-numbers of necessary findable lines is nonempty
and has a least element.
I understand that an empty set need not and cannot have a least element. What I do not yet understand is how an empty set can house more than zero elements, in fact more than one.
But with this premise accepted, set theory is certainly not provably
Regards, WM
|
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8653317","timestamp":"2014-04-20T06:23:47Z","content_type":null,"content_length":"2491","record_id":"<urn:uuid:2004b66a-78d1-4689-86c3-76e91e0f2ab6>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Secret key agreement by public discussion from common information
- IEEE Transactions on Information Theory , 1993
"... . The problem of generating a shared secret key S by two parties knowing dependent random variables X and Y , respectively, but not sharing a secret key initially, is considered. An enemy who
knows the random variable Z, jointly distributed with X and Y according to some probability distribution PX ..."
Cited by 255 (18 self)
The problem of generating a shared secret key S by two parties knowing dependent random variables X and Y, respectively, but not sharing a secret key initially, is considered. An enemy who knows the random variable Z, jointly distributed with X and Y according to some probability distribution P_XYZ, can also receive all messages exchanged by the two parties over a public channel. The goal of a protocol is that the enemy obtains at most a negligible amount of information about S. Upper bounds on H(S) as a function of P_XYZ are presented. Lower bounds on the rate H(S)/N (as N → ∞) are derived for the case where X = [X_1, ..., X_N], Y = [Y_1, ..., Y_N] and Z = [Z_1, ..., Z_N] result from N independent executions of a random experiment generating X_i, Y_i and Z_i, for i = 1, ..., N. In particular it is shown that such secret key agreement is possible for a scenario where all three parties receive the output of a binary symmetric source over independent binary
, 1999
"... This paper is concerned with secret-key agreement by public discussion. Assume that two parties Alice and Bob and an adversary Eve have access to independent realizations of random variables X ,
Y , and Z, respectively, with joint distribution PXY Z . The secret key rate S(X ; Y jjZ) has been define ..."
Cited by 36 (7 self)
This paper is concerned with secret-key agreement by public discussion. Assume that two parties Alice and Bob and an adversary Eve have access to independent realizations of random variables X, Y, and Z, respectively, with joint distribution P_XYZ. The secret key rate S(X;Y||Z) has been defined as the maximal rate at which Alice and Bob can generate a secret key by communication over an insecure, but authenticated channel such that Eve's information about this key is arbitrarily small. We define a new conditional mutual information measure, the intrinsic conditional mutual information between X and Y when given Z, denoted by I(X;Y ↓ Z), which is an upper bound on S(X;Y||Z). The special scenarios are analyzed where X, Y, and Z are generated by sending a binary random variable R, for example a signal broadcast by a satellite, over independent channels, or two scenarios in which Z is generated by sending X and Y over erasure channels. In the first two scenarios it can be sho...
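For readers skimming these abstracts, the intrinsic conditional mutual information I(X;Y ↓ Z) referred to above is the conditional mutual information minimized over all channels the adversary may apply to Z, i.e.
$$I(X;Y \downarrow Z) \;=\; \inf_{P_{\bar{Z}\mid Z}} I(X;Y \mid \bar{Z}),$$
with the infimum taken over all discrete channels $P_{\bar{Z}\mid Z}$ mapping Z to a new random variable $\bar{Z}$.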
- Advances in Cryptology - ASIACRYPT '96, K. Kim and T. Matsumoto (Eds.), Lecture Notes in Computer Science , 1996
"... . This paper is concerned with information-theoretically secure secret key agreement in the general scenario where three parties, Alice, Bob, and Eve, know random variables X, Y , and Z,
respectively, with joint distribution PXY Z , for instance resulting from receiving a binary sequence of random b ..."
Cited by 9 (5 self)
This paper is concerned with information-theoretically secure secret key agreement in the general scenario where three parties, Alice, Bob, and Eve, know random variables X, Y, and Z, respectively, with joint distribution P_XYZ, for instance resulting from receiving a binary sequence of random bits broadcast by a satellite. We consider the problem of determining for a given distribution P_XYZ whether Alice and Bob can in principle, by communicating over an insecure channel accessible to Eve, generate a secret key about which Eve's information is arbitrarily small. The emphasis of this paper is on the possibility or impossibility of such key agreement for a large class of distributions P_XYZ more than on the efficiency of the protocols. When X, Y, and Z are arbitrary random variables that result from a binary random variable being sent through three independent channels, it is shown that secret key agreement is possible if and only if I(X;Y|Z) > 0,
i.e., under the sole condition ...
, 2005
"... We introduce a new problem of broadcast source coding with a discrimination requirement --- there is an eavesdropping user from whom we wish to withhold the true message in an entropic sense.
Binning can achieve the Slepian-Wolf rate, but at the cost of full information leakage to the eavesdropper. ..."
Cited by 7 (0 self)
We introduce a new problem of broadcast source coding with a discrimination requirement --- there is an eavesdropping user from whom we wish to withhold the true message in an entropic sense. Binning
can achieve the Slepian-Wolf rate, but at the cost of full information leakage to the eavesdropper. Our main result is a lower bound that implies that any entropically efficient broadcast scheme must
be "like binning" in that it also must leak significant information to eavesdroppers I.
- IN PROC. 1997 IEEE SYMPOSIUM ON INFORMATION THEORY (ABSTRACTS), 1997
"... This paper is concerned with secret key agreement by public discussion: two parties Alice and Bob and an adversary Eve have access to independent realizations of random variables X , Y , and Z,
respectively, with joint distribution PXY Z . The secret key rate S(X ; Y jjZ) has been defined as the m ..."
Cited by 5 (3 self)
This paper is concerned with secret key agreement by public discussion: two parties Alice and Bob and an adversary Eve have access to independent realizations of random variables X, Y, and Z, respectively, with joint distribution P_XYZ. The secret key rate S(X;Y||Z) has been defined as the maximal rate at which Alice and Bob can generate a secret key by communication over an insecure, but authenticated channel such that Eve's information about this key is arbitrarily small. We define a new conditional mutual information measure, the intrinsic conditional mutual information, denoted by I(X;Y ↓ Z), and show that it is an upper bound on S(X;Y||Z). The special scenarios where X, Y, and Z are generated by sending a binary random variable R, for example a signal
broadcast by a satellite, over independent channels, or where Z is generated by sending X and Y over erasure channels, are analyzed. In the first scenario it can be shown, even for continuous random
variables, that the s...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2213554","timestamp":"2014-04-18T20:10:16Z","content_type":null,"content_length":"24762","record_id":"<urn:uuid:fd34d56d-ca98-48c3-84c7-e184aae34d47>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Myelin increases resistance across the cell membrane
A cylindrical resistance is proportional to volume
No, resistance is proportional to length/area. Volume is length*area. So, resistance is not proportional to volume.
Where did you find a unit of ohm*cm²?
The "area specific resistance" is in ohm*cm². It is the appropriate "normalized" resistance for current through a membrane. To the best of my knowledge it is used primarily for characterizing fuel
cell membranes and neuron membranes.
Note Gm listed in your last reference (
). Conductance/unit area is simply the inverse of area specific resistance and is, IMO, more convenient.
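A minimal numerical sketch of the relationship being discussed (the values below are made-up round numbers, not measurements): total resistance falls as membrane area grows, which is why the area-specific quantity in ohm*cm², or its inverse Gm, is the useful normalization.

import math

R_area = 1000.0      # assumed area-specific membrane resistance, ohm*cm^2
diameter_um = 2.0    # assumed fiber diameter, micrometres
length_um = 100.0    # assumed patch length, micrometres

d_cm = diameter_um * 1e-4
L_cm = length_um * 1e-4
area_cm2 = math.pi * d_cm * L_cm          # lateral area of a cylindrical membrane patch

R_total = R_area / area_cm2               # ohms: doubling the area halves the total resistance
Gm = 1.0 / R_area                         # S/cm^2: conductance per unit area

print("membrane area    =", area_cm2, "cm^2")
print("total resistance =", R_total, "ohm")
print("Gm               =", Gm, "S/cm^2")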
|
{"url":"http://www.physicsforums.com/showthread.php?p=1887629","timestamp":"2014-04-21T04:50:08Z","content_type":null,"content_length":"84870","record_id":"<urn:uuid:1f79188b-78c5-44fc-b2f9-0eb4d805b43a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jeremy Lin Advanced Statistics: Is Lin the next Nash or Stockton?
The answer to the question above, you will find, is a definite maybe. To evaluate this query – can Jeremy Lin be the next Steve Nash or John Stockton? – the Editorial Board of Basketball I.Q. decided
to compare the age-23 seasons of all three players (which happens to be the first season in which all three players averaged more than 20 minutes per game). The age-23 season serves as a nice basis
of comparison for each of these three players: they are each the same age, of course; it was the second professional season for each player; it marks a strikingly similar arc in their professional
careers, as each player also completed four full years of college ball at a Division I mid-major; and they each averaged a very similar number of minutes per game in these respective seasons. To make
such an analysis more accessible, I will start with a glossary of statistical terms, which can be referred to by the reader:
SPR = [2PFGM + 1.5(3PFGM) + (FTM/2) + AST]/[FGA + (FTA/2) + AST + TOV]
TAPPS = PTS/[FGA + (FTA/2) + TOV]
TOT = TOV/[TOV + FGA + (FTA/2) + TRB + STL + AST]
SSI = FTA/FGA
SAR = [FGA + (FTA/2)]/AST
3PR = 3PFGA/FGA
3PS = 3PFGM/FGM
E = SPR + TAPPS + (1 – TOT)
wCE = (MPG/48) x [SPR + TAPPS + (1 – TOT)]
P/E = Salary/[SPR + TAPPS + (1 – TOT)]
wP/E = Salary/(MPG/48) x [SPR + TAPPS + (1 – TOT)]
EG = (Present Year’s E – Previous Year’s E)/(Previous Year’s E)
wEG = (Present Year’s wCE – Previous Year’s wCE)/(Previous Year’s wCE)
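To make the glossary concrete, here is a small Python sketch that evaluates the core metrics; the box-score totals fed in at the bottom are invented round numbers, not any player's actual statistics.

def advanced_line(PTS, FGA, FGM, FGM3, FTA, FTM, AST, TOV, TRB, STL, MPG):
    FGM2 = FGM - FGM3                                              # two-point field goals made
    SPR = (2*FGM2 + 1.5*FGM3 + FTM/2 + AST) / (FGA + FTA/2 + AST + TOV)
    TAPPS = PTS / (FGA + FTA/2 + TOV)
    TOT = TOV / (TOV + FGA + FTA/2 + TRB + STL + AST)
    E = SPR + TAPPS + (1 - TOT)
    wCE = (MPG / 48) * E
    return {"SPR": round(SPR, 3), "TAPPS": round(TAPPS, 3),
            "TOT": round(TOT, 3), "E": round(E, 2), "wCE": round(wCE, 2)}

# Hypothetical season totals for a single player:
print(advanced_line(PTS=900, FGA=700, FGM=330, FGM3=60, FTA=240, FTM=180,
                    AST=380, TOV=180, TRB=180, STL=70, MPG=25))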
Despite all the surface similarities between the current, age-23 season for Jeremy Lin and the same respective seasons for John Stockton and Steve Nash, Lin has accumulated his playing time in stark
contrast to the other two. Stockton, for instance, was a back-up to able veteran Rickey Green of the Jazz, and achieved his playing time steadily, with very few starts. Nash, similar to Stockton, was
not at the top of the depth chart, but third behind two All-Star veterans in Phoenix – the young Jason Kidd, and the old Kevin Johnson – and also accumulated his minutes in a steady fashion, rarely
starting a game for the Suns.
But Lin has been buried behind no one on the Knicks: he has simply waited his turn behind an ineffective Toney Douglas and a worn-out Mike Bibby, and barely played while doing so. When he finally got
his turn, his minutes surged, and he has averaged his 24.5 minutes per game (Stockton had 23.6, Nash had 21.9) by becoming an entrenched starter after having barely played at all.
The first question we will address, using the alternative metrics of Shot-to-Assist Ratio (SAR), Shot Selection Index (SSI), Three-point Rate (3PR), and Three-Point Skew (3PS), is whether or not Lin,
Stockton and Nash are comparable players at all.
SAR SSI 3PR 3PS
Stockton 23 0.93 .440 .032 .009
Nash 23 2.39 .147 .334 .302
Lin 23 2.27 .484 .176 .125
Indeed, based on the SAR of each player, Stockton, Nash and Lin all fall into the first quintile of player classification, the Primary Distributor (referred to in the vernacular as “point guard”), as
each player has an SAR below 2.77. As you can see, Lin and Nash have extremely similar SARs, and at age 23 made their own scoring a mildly prominent part of their game. Stockton, however, accumulated
more assists than shots taken in his age-23 season, which is a remarkable achievement, even for a Primary Distributor. Clearly, passing was an even more important part of Stockton’s game than Lin’s
or Nash’s – but they are still similar players, stylistically, with a knack for playmaking.
Based on the SSI, you can see that the two most similar players were Stockton and Lin, who each had almost half as many free throw attempts as field goal attempts. This suggests that each player,
when he chose to score, attempted to do so by driving to the hoop (and getting fouled a fair amount of the time). This is further corroborated by their 3PR and 3PS: a small proportion of their field
goals attempted (about 18% for Lin, and only 3% for Stockton), and even smaller fraction of their field goals made (about 12% for Lin, and only 1% for Stockton) were from the perimeter. The
difference between Lin and Stockton, in regards to three-point attempts, can probably be explained by their different eras: in 1985-86 (Stockton’s second year), the three-point shot was reserved
mostly for the latter seconds of a possession, whereas the current game sees treys going up relatively early on in possessions.
Nash, on the other hand, played his individual offensive game out on the perimeter: a third of his shots were treys, and nearly a third of his makes (he was an astonishingly good three-point
shooter). Not surprisingly, given that he was so far away from the basket when he shot, Nash did not get to the line all that much (he took only 15% as many free throws as field goal attempts).
In sum, all three players were Primary Distributors, with slightly different flavors: Lin was a willing passer who drove to the basket; Nash was a willing passer who shot from the perimeter; and
Stockton was an absolute pass-first player who, like Lin, drove toward the basket when it was time to score.
Having established that Lin, at age 23, is sufficiently similar in style to both Nash and Stockton, let’s turn our attention to their performance statistics. The first statistics that we will
evaluate are the standard ones:
PPG RPG APG FG% FT% MPG
Stockton 23 7.7 2.2 5.1 .489 .839 23.6
Nash 23 9.1 2.1 3.4 .459 .860 21.9
Lin 23 14.4 2.8 5.8 .471 .766 24.5
The standard statistics suggest that, of the three, the age-23 Lin is clearly the best player: he scored many more points per game, collected more rebounds, and distributed the most assists. Even if
you reduce these numbers by 10% for Lin, acknowledging that he played about 10% more minutes per game, he still comes out with the better numbers. If we try to explain away this apparent superiority
by his higher field goal attempts, this argument is weakened by his field goal percentage – all three players shoot at virtually the same rate, with the slight differences in percentage explained by
their varying rates of three-point attempts and makes. It is only in free throw percentage that Lin demonstrates a weakness relative to the other two.
Can this be true? Is the 23-year-old Jeremy Lin really better than John Stockton and Steve Nash at the same age? Though the standard statistics suggest that this is the case, the alternative metrics
suggest a slightly different conclusion. Let’s take a look at those comparisons, using the Successful Possession Rate (SPR), Turnover-Adjusted Points per Shot (TAPPS), and Turnovers per Touch (TOT):
SPR TAPPS TOT
Stockton 23 .687 .855 .100
Nash 23 .616 .954 .081
Lin 23 .588 .875 .129
The SPR reflects a player’s ability to create a successful scoring opportunity for his team as a whole, and the 23-year-old Stockton is clearly superior to Nash and Lin at the same age. Some of this
is due to statistical skew, since Stockton’s low SAR would emphasize the value of an assist, and thus inflate this statistic. Still, even taking this statistical artifact into consideration,
Stockton’s SPR is much better than that of the other two, and he accomplished this on a mediocre team. Nash’s SPR is slightly better than Lin’s, and this is mostly accounted for by his superior
turnover rate.
The TAPPS reflects a player’s ability to create a successful scoring opportunity for himself, and in this arena the 23-year-old Nash is clearly superior to Stockton and Lin at the same age (in
reality, Stockton is 12 years older than Nash, and 27 years older than Lin). Nash’s superiority in this stat is mostly explained by his extremely low turnover rate, but also by the contribution of
his excellent three-point shooting (and playing in an era that, strategically, placed great value on the three-point shot). Lin’s TAPPS is slightly better than Stockton’s, but this is mostly
accounted for by the different eras in which they played, and the relative emphasis on the three-point shot in each era.
The TOT reflects the rate at which a player turns the ball over relative to his meaningful touches. The watermark TOT for a Primary Distributor is 10%, which is exactly the number that the
23-year-old Stockton achieved. Lin’s TOT of nearly 13% reflects the most glaring weakness in his game – and the reason why, no matter what the standard statistics say, Lin is a notch below Stockton,
Nash and the like at the same points in their careers. Nash's turnover rate of about 8% is absolutely phenomenal for a player who takes so much responsibility for the ball, and getting it to his teammates.
Taken all together, the SPR, TAPPS and TOT suggest that, even at age 23, Stockton and Nash were showing glimpses of greatness, while Lin is a very good player who lags a degree or so behind them. A
couple different ways to appreciate the aggregate of these statistics are the statistics of Earnings (E) and weighted Cumulative Earnings (wCE). The E combines the three stats analyzed above, whereas
the wCE does so while taking into account playing time. In this manner, the E tells you what a player does when he is on the court, whereas the wCE tells you his relative contribution for a full 48
minutes, including the time in which he sits on the bench.
When looking at E and wCE in a young player, it is helpful to evaluate their growth – what is their improvement (or decay) from one season to the next. To evaluate Earnings Growth (EG) or weighted
Earnings Growth (wEG), one can calculate the E and wCE from a preceding year and compare it to the year in question. To that end, the E and wCE of Lin’s, Stockton’s and Nash’s age-22 and age-23
seasons have been calculated, and their growth from their rookie year to their sophomore campaign has been estimated:
E wCE EG wEG
Stockton 22 2.31 0.88 -- --
Stockton 23 2.44 1.20 5.6% 36.3%
Nash 22 2.29 0.50 -- --
Nash 23 2.49 1.17 8.7% 134%
Lin 22 2.21 0.45 -- --
Lin 23 2.33 1.19 5.4% 164%
Taking first things first, the age-23 Earnings for Stockton and Nash are indeed excellent – and Lin is, in fact, a step behind them. When playing time is taken into consideration (wCE), the age-23
Lin, Stockton and Nash all brought virtually identical value to their respective teams – but that is because Lin played more than the other two.
The interesting thing to consider here is growth: is there something about the change in performance from one year to the next that suggests that Lin is not a flash in the pan, but on a career arc
that would justify comparison to some of the game’s better players? From the rookie season to their second year, each player’s EG (what they did when they were on the court) improved significantly,
with Nash’s jump almost off the charts. But Stockton and Lin grew quite nicely, and at similar rates – which suggests that Lin is on a segment of the learning curve comparable to that of the game’s
better players. Lin may never be as good as the best, but early signals suggest that he will, in the least, continue to improve.
The stat of wEG, for these players at this stage of their careers, is, admittedly, meaningless. The tremendous jumps in wEG for all three (especially Nash and Lin), merely reflect the fact that they
were getting into the game for meaningful minutes in their second years, while in their first years they were used sporadically.
So back to the original question: Is Jeremy Lin the next John Stockton or Steve Nash? The early returns suggest that he is not, although he might come pretty close. But let’s close our eyes and dream
a little. Let’s say that, over the next year or two, Lin demonstrates a growth in on-court Earnings of 5%, and does so while averaging 36 minutes per game. That would project, for Lin, an E of 2.44,
and a wCE of 1.83.
Put in perspective, in 2010-11, league MVP Derrick Rose posted an E of 2.45 and a wCE of 1.91. So the current Lin phenomenon may represent equal parts reality and wishful thinking, but there is
enough substance there to suggest that happy dreams should not be extinguished.
1 comment:
1. Lin is not like Stockton AT ALL. He is like Nash, but your analysis is flawed because it failed to take into account the fact that Nash played a LOT of shooting guard beside Kevin Johnson and
Jason Kidd. Even when he started playing more minutes, he often was a shooting guard in Phoenix and Dallas. He moved to full time point guard after Kidd left. Even then, he still took a lot of
shots off picks because he was such a great jump shooter. The Nash we know now evolved much later, after he moved to Phoenix. Maybe, you should compare Lin to other like minded combo guards like
Nash, Stephon Marbury, and Stephen Curry in Oakland. Just a thought.
|
{"url":"http://hoopstats101.blogspot.com/2012/03/jeremy-lin-advanced-statistics-is-lin.html","timestamp":"2014-04-21T07:03:22Z","content_type":null,"content_length":"106815","record_id":"<urn:uuid:65d7f27e-c023-481f-be4d-086724398124>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Compressed Least-squares regression
Odalric-Ambrym Maillard and Rémi Munos
In: Advances in Neural Information Processing Systems 22 (2010), MIT Press.
We consider the problem of learning, from K data, a regression function in a linear space of high dimension N using projections onto a random subspace of lower dimension M. From any algorithm
minimizing the (possibly penalized) empirical risk, we provide bounds on the excess risk of the estimate computed in the projected subspace (compressed domain) in terms of the excess risk of the
estimate built in the high-dimensional space (initial domain). We show that solving the problem in the compressed domain instead of the initial domain reduces the estimation error at the price of an
increased (but controlled) approximation error. We apply the analysis to Least-Squares (LS) regression and discuss the excess risk and numerical complexity of the resulting “Compressed Least Squares
Regression” (CLSR) in terms of N, K, and M. When we choose M = O(sqrt(K)), we show that CLSR has an estimation error of order O(log(K)/sqrt(K)).
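A rough numpy sketch of the idea (not the authors' code, and the problem sizes are arbitrary): project the N-dimensional features onto a random M-dimensional subspace with M on the order of sqrt(K), then run ordinary least squares in the compressed domain.

import numpy as np

rng = np.random.default_rng(0)
K, N = 400, 2000                                  # assumed numbers of data points and features
M = int(np.sqrt(K))                               # compressed dimension, M = O(sqrt(K))

Phi = rng.normal(size=(K, N)) / np.sqrt(N)        # features in the initial (high-dimensional) domain
w_true = rng.normal(size=N)
y = Phi @ w_true + 0.1 * rng.normal(size=K)       # noisy regression targets

A = rng.normal(size=(N, M)) / np.sqrt(M)          # random projection onto the compressed domain
Phi_c = Phi @ A                                   # compressed features, shape (K, M)

w_c, *_ = np.linalg.lstsq(Phi_c, y, rcond=None)   # least squares solved in the compressed domain
rmse = np.sqrt(np.mean((y - Phi_c @ w_c) ** 2))
print("compressed least-squares training RMSE:", rmse)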
|
{"url":"http://eprints.pascal-network.org/archive/00005993/","timestamp":"2014-04-19T09:26:25Z","content_type":null,"content_length":"8254","record_id":"<urn:uuid:1d44e3ef-0c61-4d1d-bb53-0af5fc953118>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
About dense orbits on dynamical systems
Preliminaries and notation:
Let $M$ be an $n$-dimensional compact manifold, $T\colon M\rightarrow M$ a diffeomorphism and $(x_n)_{n\in\mathbb{Z}}$ a dense orbit under $T$, ($x_n = T^n(x_0)$). Let $p\in M$ be another point
and define, for $\delta > 0$, $B_n(\delta) = B(p, e^{-n\delta})$.
Question: Is it true that for every $\delta > 0$ the set
$A = \{\, n\in\mathbb{N} \mid x_n \in B_n(\delta) \,\}$
has finite cardinality? How can it be proven?
Thank you for the answers!
Does the exact choice of metric on the manifold affect this question? If not, do you know a purely topological statement (without invoking an arbitrary metric on the manifold)? – macbeth Apr 8 '10
at 15:08
You've used the symbol $n$ twice, once for the dimension and once to index a sequence. – Ian Morris Apr 16 '10 at 16:46
3 Answers
This is called the shrinking target problem, and there is a reasonably large literature on it. For hyperbolic dynamical systems we can usually find quite a few pairs $x$, $p$ such that $A$
is infinite for all $\delta$. Indeed, I believe that there are results showing that in certain cases, for any point $z$ and positive real number $\delta>0$, the set of all $x$ such that $d
(T^nx,z)<\exp(-n\delta)$ for infinitely many $n \geq 1$ has positive Hausdorff dimension. A good place to start would be the articles "Ergodic theory of shrinking targets" and "The
shrinking target problem for matrix transformations of tori", both by Hill and Velani, but there are many results beyond this.
For illustration, here is a nice example in the case where $T$ is a smooth map of the circle which is not a diffeomorphism. I realise that this falls slightly outside the purview of your question, but it is possible to extend this argument to the case of toral diffeomorphisms using the technical device of a Markov partition. (I will not attempt this here because it is very fiddly.) Let $X=\mathbb{R}/\mathbb{Z}$ be the circle, let $T \colon X \to X$ be given by $Tx = 2x \mod 1$, and let $d$ be a metric on $X$ which locally agrees with the standard metric on $\mathbb{R}$. Take $p=0 \in X$ and fix any $\delta>0$. Now, the orbit of $x$ is dense if and only if it enters every interval of the form $(k/2^n,(k+1)/2^n)$, if and only if every possible finite string of 0's and 1's occurs somewhere in the tail of its binary expansion. On the other hand, we have $d(T^nx,0)<2^{-\delta n}$ as long as the binary expansion of $x$ contains a string of zeroes starting at position $n$ and having length $\lceil \delta n \rceil$. I think that it is not difficult to see that we can construct an infinite binary expansion, and hence a point $x$, such that this condition is met for infinitely many $n$, whilst simultaneously meeting the condition that the orbit of $x$ is dense. In particular we can construct a point $x$ for which $A$ is infinite, even for all $\delta$ simultaneously if you like.
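Here is a small Python sketch of the construction just described, for anyone who wants to see it play out numerically; the choice delta = 0.5 and the cut-off at words of length 4 are arbitrary (the genuine construction interleaves all finite words, giving infinitely many good times n).

from fractions import Fraction
from itertools import product
from math import ceil

delta = 0.5                 # assumed shrinking rate; any delta > 0 works the same way
digits = []                 # binary expansion of x, built left to right
hits = []                   # positions n at which a zero-run of length ceil(delta*n) starts

def finite_words(max_len):
    for length in range(1, max_len + 1):
        for w in product("01", repeat=length):
            yield w

# Interleave (i) every finite 0/1 word, which makes the orbit dense, with
# (ii) a block of zeros long enough to force d(T^n x, 0) < 2^(-delta*n).
for word in finite_words(4):
    digits.extend(int(c) for c in word)
    n = len(digits)                       # T^n x is read off from the digits after position n
    hits.append(n)
    digits.extend([0] * ceil(delta * n))

# Check the shrinking-target condition at each recorded n, using the finite
# truncation of x built above and exact rational arithmetic (no rounding).
x = sum(Fraction(d, 2 ** (k + 1)) for k, d in enumerate(digits))
for n in hits:
    Tn_x = (x * 2 ** n) % 1               # the doubling map iterated n times, exactly
    dist = min(Tn_x, 1 - Tn_x)            # distance to 0 on the circle
    assert dist < Fraction(1, 2) ** ceil(delta * n)   # and 2^(-ceil(delta*n)) <= 2^(-delta*n)

print("condition d(T^n x, 0) < 2^(-delta n) verified at n =", hits)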
Extremely good reference. Thanks for the article!!! I will enjoy a lot it! – Kaminoite Apr 16 '10 at 17:46
$x_n \in B(p,e^{-n\delta})$ iff $p \in B(x_n,e^{-n\delta})$. Thus we ask whether $$ \bigcap_{k=1}^\infty \bigcup_{n=k}^\infty B(x_n,e^{-n\delta}) = \varnothing $$ But that is a countable
intersection of dense open sets, so (by Baire category) is NOT empty.
(I hope my quantifiers are right...)
This does not mean that $p$ is in the intersection but in its closure. – Kaminoite Apr 8 '10 at 19:55
Nice! We could even take a further intersection over a sequence of $\delta$'s tending to zero, proving the existence of points which work simultaneously for all $\delta$. – Ian Morris
Apr 16 '10 at 16:52
Ian, I think that Gerald's proof is not correct? Am I wrong? – Kaminoite Apr 16 '10 at 17:25
Oops, I should have said for $\delta$ tending to infinity, not zero. I don't think that there's a problem with Gerald's proof: given any dense orbit $(x_n)$ and real number $\delta>0$,
Baire's theorem shows that there exists $p \in \bigcap_{k=1}^\infty \bigcup_{n=k}^\infty B(x_n,e^{-n\delta})$, and by unravelling the meaning of this statement we find that this $p$
satisfies $d(x_n,p)<e^{-n\delta}$ for infinitely many $n$. – Ian Morris Apr 16 '10 at 20:19
I see.. The problem that I found is that it not assures that $p$ is in the intersection. Nevertheless, as you said, the Baire's theorem asserts that this intersection is dense but, by
Borel-Cantelli lemma, that its measure is $0$. – Kaminoite Apr 17 '10 at 9:07
Take $M=\mathbb{R} / \mathbb{Z}$, $T(x)=x+\alpha$ for some $\alpha$. If $\alpha$ is irrational, all orbits will be dense. Set $p=0$, then the set $A$ can be made to be infinite by choosing an $\alpha$ that can be approximated well: choose $\alpha$ from $B_1(\delta-2\cdot 10^{-k_1})$ for some $k_1$, then modify it at most by $10^{-k_1}$ to make $10^{k_1} \alpha -[10^{k_1} \alpha] \in B_{10^{k_1}}(\delta-2 \cdot 10^{-k_2})$ and repeat ad infinitum.
The thing is that you can not modify the value $\delta$. The question states that it must be fixed. – Kaminoite Apr 8 '10 at 16:00
You mean $\alpha$, if I understand you correctly. I am giving you a construction of an $\alpha$ for which the dynamical system exhibits an infinite $A$, so there is no modification. I
assume there could be an explicit $\alpha$ given, something along the lines of $\sum 10^{-10^{k^2}}$ - as long as it can be approximated superexponentially, the set $A$ will be infinite
for all $\delta>0$. – Thorny Apr 9 '10 at 7:03
|
{"url":"http://mathoverflow.net/questions/20730/about-dense-orbits-on-dynamical-systems?sort=votes","timestamp":"2014-04-18T10:53:58Z","content_type":null,"content_length":"69892","record_id":"<urn:uuid:973c3c95-36ab-4563-9619-7b4589388c4a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pitt's New Salary Numbers make more sense?
Anybody knows what Pitt's new median starting salary and the 25/75 percentile for the private sector is? And the percentage of people reporting?
It was a pretty curious stat in last year's rankings. I'd like to see if it makes more sense this year.
|
{"url":"http://www.lawschooldiscussion.org/index.php?topic=30415.msg451412","timestamp":"2014-04-18T23:47:01Z","content_type":null,"content_length":"32488","record_id":"<urn:uuid:cd82bc67-1461-4937-a106-6656cda6dae9>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
|
another exam question
This is another question that will be placed on the exam, but with a different layout. Give it a try... I have others that I'm considering.
Give me your answer and opinion (fair or not fair) for a beginner learning C. Thanks for your help!
The following function supposedly computes the sum and average of the numbers in the array a, which has the length n. avg and sum point to variables that the function should modify.
Unfortunately, the function contains several error; find and correct them.
void avg_sum(float a[], int n, float *avg, float *sum)
{
    int i;
    sum = 0.0;
    for(i = 0; i < n; i++);
        sum += a[i];
    avg = sum / n;
}
Looks like a fair question to me.
When all else fails, read the instructions.
If you're posting code, use code tags: [code] /* insert code here */ [/code]
OK, you seem to be on a hunt for questions.... here's one for you.
Here's some code that works.... or does it?!
/*
 * sample1.c
 * This program adds up the values in the array,
 * and prints the results to the screen
 */
#include <stdio.h>

int Adder(int a[])
{
    int i;
    int myTotal;

    for (i = 0; i < sizeof(a); i++)
        myTotal += a[i];

    return myTotal;
}

int main(void)
{
    int myarray[6] = {5, 1, 3, 2, 4};
    int Total;

    Total = Adder(myarray);
    printf ("The total of the %d elements is %d\n", sizeof(myarray), Total);
}
As you've guessed/worked out, it doesn't do what it's supposed to. Explain the problems and suggest some corrections.
When all else fails, read the instructions.
If you're posting code, use code tags: [code] /* insert code here */ [/code]
How about this?
In C99, what will the following code print?
int main(void)
for(unsigned char i=0;i<=255;printf("%i\n",i++));
a) The program will print characters from the character set from value 0 through 255 each on a new line.
b) The program will print numbers from 0 through 255 each on a new line.
c) The program will print the numbers 0 through 255 each on a new line and repeat forever.
d) Both b and c are possible results.
e) It is not a valid for loop because of where i is declared.
f) It is not a valid for loop because of where the call to printf occurs.
g) It is not a valid for loop because it has no body.
h) The program will not compile because main doesn't return anything.
As an instructor as well I have posted questions in the past on this board. The average question is a fair question. I teach C as well to beginners and intermediate students.
Mr. C: Author and Instructor
|
{"url":"http://cboard.cprogramming.com/c-programming/34273-another-exam-question.html","timestamp":"2014-04-16T10:43:02Z","content_type":null,"content_length":"55881","record_id":"<urn:uuid:370b0226-19d5-46b6-aba9-c1b3de222537>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00019-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Does np.std() make two passes through the data?
josef.pktd@gmai...
Mon Nov 22 13:15:00 CST 2010
On Mon, Nov 22, 2010 at 2:04 PM, Keith Goodman <kwgoodman@gmail.com> wrote:
> On Mon, Nov 22, 2010 at 11:00 AM, <josef.pktd@gmail.com> wrote:
>> I don't think that works for complex numbers.
>> (statsmodels has now a preference that calculations work also for
>> complex numbers)
> I'm only supporting int32, int64, float64 for now. Getting the other
> ints and floats should be easy. I don't have plans for complex
> numbers. If it's not a big change then someone can add support later.
Fair enough, but if you need numerical derivatives, then complex
support looks useful.
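(Presumably the numerical-derivative point is complex-step differentiation, which only works if the whole computation accepts complex inputs unchanged; a minimal sketch, with a made-up test function:)

import numpy as np

def f(x):
    return np.exp(x) * np.sin(x)          # any smooth function written for real arguments

def complex_step_derivative(func, x, h=1e-20):
    # f'(x) ~= Im(f(x + i*h)) / h -- no subtractive cancellation, unlike finite
    # differences, but every intermediate operation must tolerate complex values.
    return np.imag(func(x + 1j * h)) / h

x0 = 0.7
print(complex_step_derivative(f, x0))              # numerical derivative
print(np.exp(x0) * (np.sin(x0) + np.cos(x0)))      # exact derivative for comparison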
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-November/054036.html","timestamp":"2014-04-19T17:33:30Z","content_type":null,"content_length":"4049","record_id":"<urn:uuid:d6db5d25-9ad7-4b48-b671-c7b1b0512ebc>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
|
College Algebra Tutors
New York, NY 10016
GRE, GMAT, SAT, NYS Exams, and Math
...I have over three years of experience tutoring geometry for elementary school through high school. The subjects I cover include area, perimeter, volume, surface area, lines and angles, polygons,
triangles, circles, geometry proofs and logic, coordinate geometry,...
Offering 10+ subjects including algebra 2
|
{"url":"http://www.wyzant.com/geo_Hoboken_NJ_college_algebra_tutors.aspx?d=20&pagesize=5&pagenum=1","timestamp":"2014-04-16T19:52:45Z","content_type":null,"content_length":"60200","record_id":"<urn:uuid:96817322-6403-4ffa-a02c-f608621df895>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
|
totally convex space
Totally convex spaces
Essentially, the totally convex spaces are what you get if you try to build an algebraic theory of Banach spaces.
Abstract definition
A totally convex space is a module for the monad on Set which sends a set $X$ to the closed unit ball of the Banach space $\ell^1(X)$.
This is an example of an algebraic theory (in the strong sense that it is bounded monadic) but not finitary, hence not a Lawvere theory. The free totally convex space on $X$ is the unit ball of the
Banach space $\ell^1(X)$, and thus an operation in this theory is a formal sum $\sum_{x \in X} a_x x$ for $a_x \in \mathbb{R}$ with the property that $\sum_{x \in X} {|a_x|} \le 1$. The finiteness of
this sum forces it to have only countably many non-zero terms, and thus it factors through an operation on $\mathbb{N}$. Hence there is a presentation of this theory with operations in $\ell^1(\mathbb{N})$ (which is why the theory is bounded monadic). The corresponding identities are simply the substitution rules – namely that substituting a sum into another works as expected – and the
reordering rules.
Concrete definition
A totally convex space is a set $X$ equipped with, for each infinite sequence $(a_1, a_2, \ldots)$ of real numbers such that $\sum_i {|a_i|} \leq 1$, an operation from $X^{\mathbb{N}}$ to $X$,
written $(x_1,x_2,\ldots) \mapsto \sum_i a_i x_i$, such that:
1. Reordering: If $\sigma$ is any permutation of $\mathbb{N}$, then $\sum_i a_i x_i = \sum_i a_{\sigma(i)} x_{\sigma(i)}$;
2. Nullary substitution: If $\delta_i$ is the Kronecker delta at $0$ (so $\delta_i = 0$ normally, but $\delta_0 = 1$), then $\sum_i \delta_i x_i = x_0$;
3. Binary substitution: If the functions $\pi,\rho\colon \mathbb{N} \to \mathbb{N}$ express $\mathbb{N}$ as its own product with itself (it is enough to pick one pair of functions and state this
axiom only for it), then $\sum_i a_i (\sum_j b_{i,j} x_{i,j}) = \sum_k a_{\pi(k)} b_{\pi(k),\rho(k)} x_{\pi(k),\rho(k)}$.
Of course, one would normally write the right-hand side of the last equation as $\sum_{i,j} a_i b_{i,j} x_{i,j}$, but that is not technically an operation in the theory, except as mediated by $\pi$
and $\rho$. A common choice for $(\pi,\rho)$ is the inverse of $(i,j) \mapsto \big({i + j + 1 \atop 2}\big) + j$, where for this expression to work we take $0$ to be a natural number.
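For concreteness, here is a short check (illustrative only, not part of the original article) that this choice really is a bijection $\mathbb{N} \times \mathbb{N} \to \mathbb{N}$, whose inverse supplies the $\pi$ and $\rho$ of the binary substitution axiom:

from math import comb, isqrt

def pair(i, j):
    # (i, j) |-> C(i+j+1, 2) + j, with 0 counted as a natural number
    return comb(i + j + 1, 2) + j

def unpair(k):
    # inverse of pair: returns (pi(k), rho(k))
    s = (isqrt(8 * k + 1) - 1) // 2      # the largest s with s(s+1)/2 <= k
    j = k - comb(s + 1, 2)
    return s - j, j

assert all(unpair(pair(i, j)) == (i, j) for i in range(50) for j in range(50))
assert [pair(*unpair(k)) for k in range(1000)] == list(range(1000))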
I haven't actually checked that this list is complete; but it's what I get if I take Andrew at his word that we need only substitution and reordering rules. I wouldn't be terribly surprised if
nullary substitution is redundant, but right now I don't see how. —Toby
1. A totally convex space is a pointed convex space. The operations of a convex space are encoded in the operations $(r, 1 - r, 0, 0, 0, \ldots)$ and the “point” comes from the operation $(0, 0, \
ldots)$. This functor preserves underlying sets and so has a left adjoint; thus any convex space can be “completed” to a totally convex space.
2. The operations for this theory are commutative, hence the category of totally convex spaces is a closed symmetric monoidal category.
Examples
1. Clearly, the closed unit ball $B X$ of any Banach space $X$ is a totally convex space.
2. Bizarrely, the open unit ball of a Banach space is a totally convex space. This is because if a sum, $\sum_{x \in B X} a_x x$ for $\sum_x {|a_x|} \leq 1$, lies on the boundary of $B X$ then every
$x$ for which $a_x \neq 0$ must have norm $1$. Thus if the series only contains terms from the interior of $B X$, the sum remains in the interior. Hence the open unit ball is a subalgebra of the
closed unit ball.
3. Continuing, the quotient of the closed unit ball by the open unit ball is a totally convex space.
4. In particular, the three-point space $\{-1,0,1\}$ is (assuming excluded middle) a totally convex space. The operations on this space are as follows:
$\sum a_j \epsilon_j = \begin{cases} 1 & \epsilon_j = \operatorname{sign}(a_j), \sum {|a_j|} = 1 \\ -1 & \epsilon_j = -\operatorname{sign}(a_j), \sum {|a_j|} = 1 \\ 0 & \;\text{otherwise} \end{cases}$
5. Going back one step, $(-1,1)$ is a totally convex space. It is illuminating to describe this as a coequaliser of free totally convex spaces. Consider a functional $f \colon \ell^1(\mathbb{N}) \to
\mathbb{R}$ which is bounded of norm $1$ but does not achieve its norm; for example, let $f$ be represented by the sequence $(\frac{1}{2},\frac{2}{3},\frac{3}{4},\ldots)$. Then for any $x \in B\
ell^1(\mathbb{N})$, $f(x) \in (-1,1)$. Thus $(-1,1)$ is the coequaliser of the inclusion $\ker f \to \ell^1(\mathbb{N})$ and the zero map $\ker f \to \ell^1(\mathbb{N})$.
Revised on January 29, 2010 16:52:39 by
Toby Bartels
|
{"url":"http://ncatlab.org/nlab/show/totally+convex+space","timestamp":"2014-04-21T14:47:43Z","content_type":null,"content_length":"35837","record_id":"<urn:uuid:f20f324c-dd6e-4e71-be00-7a0b59bbcde2>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Infamous .999... = 1
Date: 01/12/2002 at 08:14:36
From: Dan T
Subject: The Infamous .999... = 1
Dear Dr. Math,
I have seen answers to the question that .9999... = 1, but some
people in my 8th grade class STILL don't agree. My geometry teacher
says that it is all about giving infinity a value. A student in the
class says I am trying to prove that 1 = 1/infinity.
Can you straighten this out? Thanks.
Dan T.
Date: 01/12/2002 at 09:27:23
From: Doctor Tom
Subject: Re: The Infamous .999... = 1
Hi Dan,
I assume you've seen the Dr. Math FAQ:
0.9999... = 1
If not, take a look.
Rather than saying "giving infinity a value," it's perhaps a bit
clearer to say, "giving the concept of a limit of an infinite sequence
of numbers a value."
.9 is not 1; neither is .999, nor .9999999999. In fact if you stop the
expansion of 9s at any finite point, the fraction you have (like .9999
= 9999/10000) is never equal to 1. But each time you add a 9, the
error is less. In fact, with each 9, the error is ten times smaller.
You can show (using calculus or other methods) that with a large
enough number of 9s in the expansion, you can get arbitrarily close to
1, and here's the key: 1 is the ONLY number that the sequence
gets arbitrarily close to.
Thus, if you are going to assign a value to .9999... (going on
forever), the only sensible value is 1.
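(A quick numerical illustration of that point, using exact rational arithmetic so no rounding can muddy the water: the n-nines approximation falls short of 1 by exactly 10^(-n), and by nothing else.)

from fractions import Fraction

for n in (1, 3, 6, 10):
    partial = sum(Fraction(9, 10**k) for k in range(1, n + 1))   # 0.99...9 with n nines
    print(n, partial, "error:", 1 - partial)                     # error is exactly 10^(-n)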
There is nothing special about .999... The idea that 1/3 = .3333...
is the same. None of .3, .33, .333333, etc. is exactly equal to 1/3,
but with each 3 added, the fraction is closer than the previous
approximation. In addition, 1/3 is the ONLY number that the series
gets arbitrarily close to.
And it doesn't limit itself to single repeated decimals. When we say:
1/7 = .142857142857142857...
none of the finite parts of the decimal is equal to 1/7; it's just
that the more you add, the closer you get to 1/7, and in addition, 1/7
is the UNIQUE number that they all get closer to.
Finally, you can show for all such examples that doing the arithmetic
on the series produces "reasonable" results:
1/3 = .333333...
2/3 = .666666...
1/3 + 2/3 = .999999... = 1.
By the way, there is nothing special about 1 as being a non-unique
decimal expansion. Here are a couple of others:
2 = 1.9999...
3.71 = 3.709999999...
2.778 = 2.77799999999999...
...and the student who says you're trying to show that 1 = 1/infinity
is wrong.
- Doctor Tom, The Math Forum
Date: 01/12/2002 at 16:40:11
From: Dan T
Subject: The Infamous .999...=1
Sorry, the equation that the student meant was:
1 = 1 - 1/infinity
Date: 01/12/2002 at 16:48:09
From: Doctor Tom
Subject: Re: The Infamous .999...=1
Hi Dan,
Then he's basically right. As you add each new 9 to the expansion, the
errors look more and more like:
1/100, 1/1000, 1/10000, ...
Thus, in a sense, the error begins to look like "1/infinity," which
seems as if it should be zero.
The 1/infinity is meaningless, but the concept of limit is not. We can
say that the limit of the sequence above is zero, and can rigorously
prove it.
- Doctor Tom, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/55748.html","timestamp":"2014-04-17T01:27:19Z","content_type":null,"content_length":"8150","record_id":"<urn:uuid:55c43ddb-0800-413f-880f-cc058dd3d001>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Why brakes lack power?
I understand the basics of hydraulic disc brakes, but I don't understand how the amount of power and modulation varies so much between brakes that are fundamentally similar.
These are the factors I can think of: rotor size, number of pistons, consistent and air free fluid, heat dissipation... And not much else.
Approximately how much more braking power would a 160mm rotor have over 140? Or 180 over 160? 20% possibly?
I'm thinking about all of this because my Juicy 7's just don't seem to have enough power and I can't think why.
Sent from my fingers.
Current bike: Rocky Mountain ETS-X 70
Factors that affect brake performance....
Brake pad composition
Brake pad contamination
Hydraulic mechanical advantage
Brake hose expansion
Rotor surface
Rotor contamination
Water contamination in brake fluid
and....something that is seldom discussed, TIREs
besides the factors you already mentioned. I'm sure there are more.
And piston diameter and the size of the ports the fluid flows through and the diameter of the master cylinder and... and... You can go on for eons about factors, but in the end it is how much it was designed to have.
The percent change in the power due to rotor size is a basic calculation: 140/160 = 87.5%, 160/180 = 88.9%, at least approximately.
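Back-of-the-envelope only: with the same caliper, pads and lever, brake torque scales roughly with effective rotor radius, so the comparison above is just a ratio.

for small, big in [(140, 160), (160, 180)]:
    ratio = small / big
    print(f"{small} mm rotor gives about {ratio:.1%} of the torque of a {big} mm rotor "
          f"(i.e. the bigger rotor has roughly {big / small - 1:.0%} more)")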
... it is how much it was designed to have
Why would some brakes be designed to have less power? It doesn't seem to me like having larger ports would increase weight or do anything else negative. But there must be some downside.
Current bike: Rocky Mountain ETS-X 70
Yeah, having larger pistons and higher-flowing ports doesn't add much weight if any, but it's all about making money: why would they sell a bargain-basement XC brake for $50 that has the same
power as their $200 top-of-the-line DH brake? Stupid, yet it's how they make money. Also I don't think you'd want to start speccing Saint levels of power on paved-path brakes; can you say endo?
I'm sure there are other factors, but that is just what comes to mind.
Why would some brakes be designed to have less power? It doesn't seem to me like having larger ports would increase weight or do anything else negative. But there must be some downside.
It's about the design process. Nothing ever works out as designed. It can come close, it can fall short, or it can exceed expectations big time. Complex systems can be quite unpredictable. Excellent
products are the product of one of two things, relentless design improvement, or years of experience.
|
{"url":"http://forums.mtbr.com/brake-time/why-brakes-lack-power-739368.html","timestamp":"2014-04-19T23:12:47Z","content_type":null,"content_length":"82532","record_id":"<urn:uuid:14097046-a5eb-42d2-b4af-e17a8fb4c1e7>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Windsor, NJ Algebra 1 Tutor
Find a Windsor, NJ Algebra 1 Tutor
...I can remember strict teachers drilling proper English usage into my head: diagramming sentences, looking up words in the dictionary, re-writing papers that my teachers knew I didn't put much
effort into. It's no wonder that my English skills exceed those of most of today's English teachers. Un...
23 Subjects: including algebra 1, English, calculus, geometry
...As a lifelong Language Arts teacher and high school guidance counselor, helping students with Study Skills has been an integral part of my work throughout my professional career. Test
preparation, Organization and Time Management, Note-taking, Learning Styles and Studying Smarter are all themes ...
48 Subjects: including algebra 1, English, reading, writing
...I participated in a chess team in school that became finalists for NYC in a competition against other schools. Prior to attending college for music technology, I was in a rock band for 4 years
and was one of the primary co-composers on all the material. As of late I have started to compose slightly within the jazz idiom, and I have also composed for some short film clips.
33 Subjects: including algebra 1, physics, calculus, GRE
...I look forward to helping students explore, learn, and grow even more!I am a PreK teacher in Manasquan, NJ. Our curriculum is based off of the program Handwriting Without Tears. This program
involves numerous techniques such as tracing, play dough letters, chalkboard, white board, and shapes.
45 Subjects: including algebra 1, reading, English, chemistry
...I analyze and approach each student encounter with the primary goal of helping students succeed. I maintain a calm and even demeanor throughout and really try to build up a student's
confidence level - it is my opinion that this is where most hurdles are founded. Most of my tutoring success sto...
2 Subjects: including algebra 1, algebra 2
Related Windsor, NJ Tutors
Windsor, NJ Accounting Tutors
Windsor, NJ ACT Tutors
Windsor, NJ Algebra Tutors
Windsor, NJ Algebra 2 Tutors
Windsor, NJ Calculus Tutors
Windsor, NJ Geometry Tutors
Windsor, NJ Math Tutors
Windsor, NJ Prealgebra Tutors
Windsor, NJ Precalculus Tutors
Windsor, NJ SAT Tutors
Windsor, NJ SAT Math Tutors
Windsor, NJ Science Tutors
Windsor, NJ Statistics Tutors
Windsor, NJ Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Clarksburg, NJ algebra 1 Tutors
Columbus, NJ algebra 1 Tutors
Cranbury algebra 1 Tutors
Cream Ridge algebra 1 Tutors
Englishtown algebra 1 Tutors
Fieldsboro, NJ algebra 1 Tutors
Hopewell, NJ algebra 1 Tutors
Jamesburg, NJ algebra 1 Tutors
Pennington, NJ algebra 1 Tutors
Perrineville algebra 1 Tutors
Princeton Township, NJ algebra 1 Tutors
Robbinsville, NJ algebra 1 Tutors
Rocky Hill, NJ algebra 1 Tutors
Roebling algebra 1 Tutors
Uppr Free Twp, NJ algebra 1 Tutors
|
{"url":"http://www.purplemath.com/windsor_nj_algebra_1_tutors.php","timestamp":"2014-04-21T02:04:51Z","content_type":null,"content_length":"24135","record_id":"<urn:uuid:0b917d6e-60e9-4497-adcb-91fe2dd9fc72>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Unmanned Spaceflight.com > Asteroid Grand Tour
Yeah, Emily. It was notably short of specific targets and/or down-selects, wasn't it? I interpreted this as strictly early concept development activity...nobody's gonna get assertive unless & until a
feasible mission strategy emerges.
Classic example of a press release about a science/tech item that leaves out the most important piece of info. I see them all the time, and they're not all by ESA press flunkies... not by a long
30-year mission duration for those long hauls using only 60 kg of propellant...amazing!
Yeah...seems as if the prime filter would be choosing specific asteroids of given types, then running optimization NLPs on that set & comparing it to others. Not a trivial problem at all...I just
survived two quarters of numerical systems optimization, can really appreciate the work that went into this effort!
I think they were more thinking about tweaking its trajectory to take advantage of serendipitous flyby when it is in the asteroid belt but not studying Vesta or Ceres.
Very interesting work. I wonder if it would be worthwhile for such a spacecraft to carry small impactors. They wouldn't need to have that much mass what with the huge velocity differences involved.
Such impactors would be excellent for spectral studies, but you wouldn't be sticking around for very long to study the crater.
|
{"url":"http://www.unmannedspaceflight.com/lofiversion/index.php/t4087.html","timestamp":"2014-04-20T08:18:48Z","content_type":null,"content_length":"25270","record_id":"<urn:uuid:6ce97756-3ead-44ef-96ee-1ca89e88febb>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
|
-- |
-- Module : Control.Search.Local.Transformation
-- Copyright : (c) Richard Senington & David Duke 2010
-- License : GPL-style
-- Maintainer : Richard Senington <sc06r2s@leeds.ac.uk>
-- Stability : provisional
-- Portability : portable
-- Transformations for capturing characteristics of algorithms.
module Control.Search.Local.Transformation (
) where
import Control.Search.Local.Tree
import Control.Search.Local.Neighbourhood
import Data.List
import System.Random
{- | A basic recursive filter. This will check every neighbourhood, and remove those
neighbours that do not improve upon their parent solution. -}
improvement :: Ord nme=>LSTree nme -> LSTree nme
improvement = multiLevelApply (repeat sImprovement)
{- | A single level improvement transformation, that will remove from the top neighbourhood
of the tree those solutions that do not improve upon the parent solution. It is
used by both the recursive improvement transformation, and one of the
attempts to encode Simulated Annealing. -}
sImprovement :: Ord nme=>LSTree nme -> LSTree nme
sImprovement t = let ns' = filter (<t) (treeNodeChildren t)
                 in LSTree (treeNodeName t) ns'
{- | A helper function for shuffling lists, based upon a
randomised sequence of numbers (expected). -}
shuffle :: (Ord b)=>[b]->[a]->[a]
shuffle rs xs = map snd (sortBy (\(a,_) (b,_)->compare a b) $ zip rs xs)
{- | Another helper, to generate a specific number of random values from a
generator, and return them with the updated generator. -}
makeLimitedRands :: (Random a,RandomGen g)=>g->Int->([a],g)
makeLimitedRands g l = foldl f ([],g) [1..l]
  where
    f (a,b) _ = let (c,b') = random b
                in (c:a,b')
-- | Recursive neighbourhood shuffling transformation, all neighbourhoods will become randomised.
nShuffle :: RandomGen g=>g->LSTree nme -> LSTree nme
nShuffle g t = LSTree (treeNodeName t) ns'
  where
    ns = treeNodeChildren t
    (rs,g') = makeLimitedRands g $ length ns
    ns' = map (nShuffle g') (shuffle (rs :: [Int]) ns)
{- | Single level neighbourhood ordering transformation. -}
sSort :: Ord nme=>LSTree nme -> LSTree nme
sSort t = LSTree (treeNodeName t) (sort (treeNodeChildren t))
{- | Recursive neighbourhood ordering transformation. Implemented using multi-apply. -}
nSort :: Ord nme=>LSTree nme -> LSTree nme
nSort = multiLevelApply (repeat sSort)
{- | Single level reversal of neighbourhood order. To be used in conjunction with sorting for moving
between finding largest and smallest elements. -}
sReverse :: LSTree nme -> LSTree nme
sReverse t = LSTree (treeNodeName t) (reverse $ treeNodeChildren t)
{- | Recursive neighbourhood reversal transformation. Implemented using multi-apply. -}
nReverse :: LSTree nme -> LSTree nme
nReverse = multiLevelApply (repeat sReverse)
{- | A simple (very simple) TABU system. Based upon a limited Queue, and
direct node comparison (not the way it is usually used in the OR
community). Acts as a recursive filter based upon memory. -}
tabu :: Eq nme=>Int->[nme]->LSTree nme->LSTree nme
tabu queueSize q t = LSTree nme ns''
  where
    nme = treeNodeName t
    q' = take queueSize $ nme:q
    ns' = filter (\n->not $ elem (treeNodeName n) q') (treeNodeChildren t)
    ns'' = map (tabu queueSize q') ns'
{- | Takes advantage of numerically priced solutions, rather than just ordering,
to allow through solutions that are worse than the current solution, but
only to a limited extent. Would require some understanding of the maximum
and minimum differences likely in a solution set. -}
thresholdWorsening :: NumericallyPriced nme a=>a->LSTree nme->LSTree nme
thresholdWorsening thresh t = LSTree nme ns'
  where
    nme = treeNodeName t
    tP = priceSolution nme
    ns = filter (\n->(priceSolution.treeNodeName) n - tP<thresh) $ treeNodeChildren t
    ns' = map (thresholdWorsening thresh) ns
{- | An adaptation of the above. We now have a list of thresholds, constructed in
some way (user defined) and then applied each to a different level of the tree.
Used in one of the Simulated Annealing experiments. -}
varyingThresholdWorsening :: NumericallyPriced nme a=>[a]->LSTree nme->LSTree nme
varyingThresholdWorsening (thresh:thresh') t = LSTree nme ns'
  where
    nme = treeNodeName t
    tP = priceSolution nme
    ns = filter (\n->(priceSolution.treeNodeName) n - tP<thresh) $ treeNodeChildren t
    ns' = map (varyingThresholdWorsening thresh') ns
{- | Takes a list of single level transformations, and applies them each to a different level
of a tree. These are also generated in a user defined way, and this function is used
in the other Simulated Annealing experiment. -}
multiLevelApply :: [LSTree nme->LSTree nme]->LSTree nme->LSTree nme
multiLevelApply (x:xs) t = let ns = map (multiLevelApply xs) (treeNodeChildren $ x t)
                           in LSTree (treeNodeName t) ns
|
{"url":"http://hackage.haskell.org/package/local-search-0.0.3/docs/src/Control-Search-Local-Transformation.html","timestamp":"2014-04-19T21:12:04Z","content_type":null,"content_length":"28085","record_id":"<urn:uuid:8cf813f7-ead0-4a0f-ac2f-bcb4a13996cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Universe of Discourse : Flipping coins, corrected
Flipping coins, corrected
In a recent article about coin flipping, I said:
After a million tosses of a fair coin, you can expect that the numbers of heads and tails will differ by about 1,000.
In general, if you flip the coin n times, the expected difference between the numbers of heads and tails will be about √n.
In fact, the expected difference is actually !!\sqrt{2n/\pi}!!. For n=1,000,000, this gives an expected difference of about 798, not 1,000 as I said.
I correctly remembered that the expected difference is on the order of √n, but forgot that the proportionality constant was not 1.
The main point of my article, however, is still correct. I said that the following assertion is not quite right (although not quite wrong either):
Over a million tosses you'll have almost the same amount of heads as tails
I pointed out that although the relative difference tends to get small, the absolute difference tends to infinity. This is still true.
Thanks to James Wetterau for pointing out my error.
[Other articles in category /math] permanent link
|
{"url":"http://blog.plover.com/math/coins-2.html","timestamp":"2014-04-21T07:04:37Z","content_type":null,"content_length":"13530","record_id":"<urn:uuid:92c84569-5720-42f2-aa84-420ed4a52809>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exercises to guide students in emulating how experts think when solving problems
by Linda L. Davis and Bill Rose
(Many of the ideas presented are built upon those discussed earlier in the day by Jen Sablock, Peter Lea, Jimm Myers, Z. Demet Kirbulut, Ji-Sook Han, Sister Gertrude Hennessey, and Dexter Perkins)
An unscripted effort first
I think many of us value the conclusions that students come up with on their own without our guidance, particularly when we ask them to work together. I often wonder when I give instructions how the
exact wording of my instructions truly affects the way the students approach a problem.
Yet, from this workshop, in particular, from Karl Wirth's presentation entitled A Metacurriculum on Metacognition (PowerPoint 16.9MB Nov20 08), where work by Schoenfeld (1987) was illustrated, many
of us agreed that many students would benefit greatly by modeling Expert Problem Solvers' behaviors.
A compromise: assuming one is practicing problem-based learning, in a series of problems where the level of difficulty increases and the amount of guided instruction decreases through the series, one
starts by presenting a problem (perhaps one that is difficult to very difficult) to the students and having them solve it cold. This problem then will be revisited at the end of the term, after you
have stepped them through a series of problems where they learn about the cyclical way in which expert geoscientists solve problems.
Then, an expert example
The next step is to bring in an expert who is capable of telling the students how they go about solving a problem and preferably one who can talk out loud about their thought processes.
Follow with a series of problems that increase in difficulty, but decrease in guidance
-I think that many students have a severe problem with reading comprehension. On top of that many students then have trouble figuring out exactly what the problem is in a given situation.
- Reading comprehension exercises could be assigned where maybe Just-in-Time Teaching (JiTT) or pair-share and class discussion and agreement is used. These can be very short sentences or
paragraphs. I love Shirley Yu's exercise which clearly showed that the reading instructions are critical, and that if the students can relate what they are reading to things they already know,
comprehension or recall is better.
- So it might help if they practice very short exercises where they practice making reasonable analogies so that they have some context or familiarity to use for understanding. Maybe just a brief
exposure to the vocabulary before class or before the problem is given would also help in the "decoding" of the problem, if new terminology is used and critical to the problem.
- If reading comprehension is targeted and dealt with, then the time spent in the first step used by Expert Problem Solvers is lessened, and I think also, analysis of the problem will be easier.
Analysis: This step can mean different things to different minds.
- Do the students need to analyze the problem? What is the problem? In the beginning of a semester of using Problem-Based Learning (PBL) to teach students how experts problem solve, one could
state the problems explicitly, and as the semester (or even year if you have two classes tied together, e.g., mineralogy and petrology) wears on, the definition of the problem can be murkier so
that they figure it out.
- Others may look at this step as analyzing the data. Terminology means different things.
- A deeper level here is to analyze what will be needed to solve the problem. Give them practice by using problems of all kinds and ask them what the problem is. It could be a calculus problem,
or what I learned as a "study problem." Just short little 1 minute exercises, on a par with the short reading comprehension exercises.
To analyze then, either they have to determine the factors to consider or they begin to evaluate the data given to them.
- So as an instructor per certain problems, you give them the data to work with, but give them extraneous data, so that they have to winnow out the chaff, and figure out which data is the data to
- If you do not give them the data, and they have to first determine what data to use, analyzing the problem at hand takes on more meaning. And the students then have to search out the data after
figuring out which data to use.
Plan and Implement
Once the factors are determined, one sets up a plan to solve the problem then implement the plan.
- At times, depending up on the problem, the size of the data set for each variable has to be considered – when is the amount of data statistically significant? Somehow one has to get across the
idea that for a particular variable the size of the data set has to be chosen....it may not be enough, but one can't know this until they test it. An example, to make this concrete would be
trying to predict something like the eruption of Old Faithful given a range of data for as many days as one wishes (c.f. Carol Ormand's exercise...) (This may be one of the beautiful things to
use later in this progression about teaching them to stop and assess what they did, e.g., the exploration of ideas that comes later part of this cyclic style of problem-solving. Was the data set
large enough?)
- Then they either set up an equation or group of functions or set up a way to test their ideas.
- The plan has to be explicitly stated, particularly in group work because they so often lose sight of what they are trying to do. "What are we going to do to solve this problem, and how are we
going to attack or implement the plan?" This is one way for the instructor to evaluate the exercises early on, or to intervene and help: did they keep with the plan and did they attack it in the
way they said they would?
Verify
This step seems "squishy" but can be defined many ways. I "hate" when students ask me, "am I doing this right" or "is this right," because I think that this inhibits the learning process and I
want them to take the plunge down a pathway that is unsure to them. Yet, I understand not wanting to waste time (or maybe appear stupid). Ignoring that, one could use the questions that Dave Mogk
posited this workshop, for example: "Is this a reasonable answer?" "Does this fit with what we/you already know about the world/the situation/ the problem?" "If not, it's a 'flier,' and is the
'flier' significant or is it to be tossed?"
If they are flat out wrong, early in the semester, the instructor could here step in and let them know that the answer is incorrect.
The cyclical nature begins: Go back to analyzing the problem
Rethink this thing! What did they do wrong? Can they restate the problem? Did they go down the wrong path, or just not include all variables?
- Explore these new ways to analyze the problem. Test the water by trying out whether the new ideas or variables dreamed up after "verifying" might make a difference.
- Gain an idea of whether or not the new information is going to take them anywhere.
- OK, decide that it will, and redo the plan and implement it.
The instructor in the beginning should likely query the students here or have a check. This new plan should not only be written down, but compared and contrasted to the first plan.
I want to make sure that there is a change of plan. I have had students try the same approach to solving a problem for ten hours (or so they tell me). I think that this is someone's definition of
insanity. I want it to become second nature to them to figure out when a plan is not working and that something has to be changed in order to guarantee success.
The second time around "planning and implementing" should take less time, unless new variables in fact make the problem even more difficult. Why? Well, maybe one could encourage "back of the
envelope" implementation of the plan just to test it. If it is immediately clear that this is again the wrong path, they go back to more in depth analysis.
I think that depending upon the problem, there needs to be evidence or documentation as to how the plan was implemented. This is also great practice for them: if you write down what you did,
explicitly and well, you can go back and check for mistakes, or areas where clues are provided so that one can proceed in the iterative process of problem solving by analyzing, pausing to think,
double-checking what you have done.
I think that I need to have a "real" way to check that they verified this, and then have them evaluate that verification in a way that can be improved by bringing it into the limelight. For
example, on lower levels, how many of the students read over their essays, or double-check their answers on Scantron-type tests? The number of silly mistakes could be greatly decreased just by
proofing their work, but they don't. This is a great life-learning exercise to make second-nature.
Practice Practice Practice
I always have good intentions, but often drop the ball by the end of the semester. So, I have to INTENTIONALLY create these problems and work more and more of them into my detailed syllabus, so that
I give them the practice necessary to really master this way of problem solving.
Take away the "scaffolding" slowly yet surely, so that by the end of the time you have with them, they need little help in setting up a way to approach solving a problem, and so that it becomes second
nature to pause and think about the problem; second nature to not give up but to change the plan of action.
Does this expert way of thinking make a difference?
Go back to the original "difficult" problem (making sure you never solved it for them), and have them "re-solve" it after learning these new tools.
|
{"url":"http://serc.carleton.edu/NAGTWorkshops/metacognition/working_groups/scaffolding_how_experts_think.html","timestamp":"2014-04-20T04:07:18Z","content_type":null,"content_length":"37332","record_id":"<urn:uuid:32ff5b41-f5cb-496a-bddf-8c2497424e36>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical English Usage - a Dictionary
by Jerzy Trzeciak
[see also: which, namely]
That (2) implies (1) is contained in the proof of Theorem 1 of [4].
Clearly, A_∞ weights are sharp weights. That there are no others is the main result of Section 2.
The degree of P equals that of Q.
The continuity of f implies that of g.
The diameter of F is about twice that of G.
It is this point of view which is close to that used in C^*-algebras.
Define f(z) to be that y for which......
Where there is a choice of several acceptable forms, that form is selected which......
Associated with each Steiner system is its automorphism group, that is, the set of all......
The usefulness and interest of this correspondence will of course be enhanced if there is a way of returning from the transforms to the functions, that is to say, if there is an inversion formula.
We now state a result that will be of use later.
A principal ideal is one that is generated by a single element.
Let I be the family of all subalgebras that contain F. [Or: which contain F; you can use either that or which in defining clauses.]
|
{"url":"http://www.emis.de/monographs/Trzeciak/glossae/that.html","timestamp":"2014-04-18T08:15:39Z","content_type":null,"content_length":"1941","record_id":"<urn:uuid:a49e6690-94a0-45af-8190-2c224bd96c44>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
When does a "constant of the motion" imply a Noether current in a quantum field theory?
Assume we are given a quantum field theory described by some functional. If $J$ is a Noether current, i.e. it is associated with a symmetry of the functional and satisfies $\partial_s J^s=0$ (Noether's
theorem), we can always obtain a conserved quantity as $Q(x^0) = \int J^0(x^0, x^1,\ldots,x^n)\,d^{n}x$. What is the converse to the previous statement? I would like to know what assumptions are necessary
to build a conserved current $J$ given $Q$ with $\dot Q = [H,Q]=0$, where $H$ is the Hamiltonian of the (quantum field) theory.
quantum-field-theory mp.mathematical-physics
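A minimal sketch of the quoted forward direction (added here for context; it assumes the spatial components of the current fall off fast enough at infinity for boundary terms to vanish): using $\partial_\mu J^\mu = 0$ and the divergence theorem,
$$\frac{dQ}{dx^0} = \int \partial_0 J^0 \, d^{n}x = -\int \partial_i J^i \, d^{n}x = 0.$$
The converse asked about is precisely the question of when this argument can be run backwards starting from a given conserved $Q$.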
In addition to Carlo's answer below, consider the conserved operator $Q$ from your question and consider $Q^2$. $Q^2$ is an equally good conserved operator, but it is not local, i.e., not given by a single
integral of a local functional. So local conserved operators are rather special among all conserved operators. – Igor Khavkine May 28 '13 at 22:10
@Igor: Your comment is pointing in what I think it is a right direction. I have seen a Noether current built from a conserved operator which is assumed to be local. My failure is understanding
what is meant by locality in this case. Could you elaborate or give some references? – Daniel May 28 '13 at 23:10
If your question is actually about quantum field theory, then even the direction you stated is not true in general. The $Q(x^0)$ is the classical conserved Noether charge. In quantum field theory it
is a heuristic to transfer these charges into an operator (quantisation), but you still have to prove that it is actually a conserved quantity. There are a lot of "anomalies" where this is not the
case (conformal anomaly, chiral anomaly…). – The User May 29 '13 at 0:02
Locality is easier to understand in the classical theory. Any conserved quantity $Q$ is a functional of the field, say $\psi(x)$. The functional $Q$ is local if its variational derivative $\delta
Q/\delta \psi(x)$ depends only on $\psi(x)$ and finitely many derivatives at the same point $x$. Using this definition, you can check that your $Q$ is a local functional while $Q^2$ is not (though
it could be said to be "bilocal", $Q^3$ would be "trilocal" and any polynomial in $Q$ would be "multilocal"). – Igor Khavkine May 29 '13 at 0:03
Conversely, in general it is not possible to translate a quantum observable into a classical observable (consider parity). Thus you should not expect that there is even an associated current (from
which you would get a classical observable, the charge). I think your question makes sense in classical field theory (are you interested in that case?): conserved currents induce conserved charges
and you can ask whether a conserved quantity is given as a charge associated to a current. – The User May 29 '13 at 0:20
1 Answer
The answer to the question in your title is "no" in general. Noether current and conserved charge only go hand in hand if the symmetry that gives rise to them is a continuous symmetry, such
as translation (conserved momentum) or a $U(1)$ symmetry (conserved charge). A discrete symmetry will, in general, give rise to a conserved quantity that can only take on discrete values,
so it cannot "flow" and there is no current associated with that conservation law. An example is inversion symmetry, with parity as a conserved quantity. There is no current associated with
parity.
thanks for the response. i had an imprecise title which asked whether integral of the motion --> noether current. i meant to ask instead what assumptions make the conclusion true. i
changed the title to reflect that. – Daniel May 28 '13 at 23:17
|
{"url":"http://mathoverflow.net/questions/132137/when-does-a-constant-of-the-motion-imply-a-noether-current-in-a-quantum-field?answertab=votes","timestamp":"2014-04-18T10:47:14Z","content_type":null,"content_length":"59689","record_id":"<urn:uuid:da7fda3c-7bcd-47e6-b24c-bdd79fec6307>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Prime numbers News, Videos, Reviews and Gossip - io9
A partial solution to a centuries-old problem known as the twin prime conjecture now affirms the idea that an infinite number of prime numbers have companions — and that a maximum distance between
these pairs does in fact exist.
This is the world's largest known prime number
You're looking at the largest prime number ever discovered. That's 2^57,885,161 – 1, to be exact. If you're looking for the individual numbers, you'll have to work for it. "Former designer"
Philip Bump dissected the prime six digits at a time and converted each chunk into RGB colors. The end result is an image…
Have mathematicians finally discovered the hidden relationship between …
Okay, math lovers, this is the one you've been waiting for: Shinichi Mochizuki of Kyoto University in Japan is claiming to have found proof (divided into four separate studies with 500+ pages) of the
so-called abc conjecture, a longstanding problem in number theory which predicts that a relationship exists between…
The bizarre mathematical conundrum of Ulam's Spiral
If there's anything we learn from math teachers and the Da Vinci Code, it's that prime numbers are magic. They can do anything, and be anywhere. Including a doodle on a math paper.
Take a look at the world's oldest mathematical object
Archaeologists have dug up many ancient, notched bones all around the world, but the Ishango bone is different. On it, there are markings that suggest its owners were making the first attempts at
actual mathematics.
|
{"url":"http://io9.com/tag/prime-numbers","timestamp":"2014-04-23T20:38:19Z","content_type":null,"content_length":"99694","record_id":"<urn:uuid:7f083d32-027b-4aae-8e14-0d0335368ad8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: XPath filter 2.0
From: merlin <merlin@baltimore.ie> Date: Thu, 06 Jun 2002 12:22:47 +0100 To: Christian Geuer-Pollmann <geuer-pollmann@nue.et-inf.uni-siegen.de> Cc: w3c-ietf-xmldsig@w3.org Message-Id:
>> * Implementations SHOULD note that an efficient realization
>> of this transform will not compute a node set at each
>> point in this transform, but instead make a list of
>> the operations and the XPath-selected nodes, and then
>> iterate through the input document once, in document
>> order, constructing a filtering node set N based upon
>> the operations and selected nodes.
>> .......
>> . Efficiency is as with the current spec. Basically this
>> fixes union.
>Does this mean that based on the operations, you make some tree-labeling
>;-)) and then you make one tree-traversal to output the selected nodes?
Yes, but this is the exact algorithm that implementations
of the _current_ XPath Filter 2.0 transform _should_ use
if a sequence of the current XPath Filter 2.0 transforms
precedes c14n. This formulation of the filter simply makes
it easier to express in terms of SHOULD language.
>Sounds cool, in that case, the efficiency would be much better then the
>current results of v2.0. What kind of algorithm do you use for
> - make a list of operations and selected nodes,
> - decide based on this data which nodes are the result.
If you're asking for a normative algorithm for the general
case, then I would have to think for a while. Restricting
myself somewhat:
First, let's characterize a sequence of filters:
Filter ::= (INTERSECT | SUBTRACT | UNION)+
You will observe that adjacent SUBTRACT and INTERSECT
operations can be idempotently reordered, and that
adjacent operations of the same type can be computationally
merged. However, I would expect that type of thing to be
done in the XPath expressions so I will ignore this.
Restricting myself to the following production, which captures
all _common_ (in my opinion) use cases:
SimpleFilter ::= A:UNION* (B:INTERSECT | C:SUBTRACT)* D:UNION*
Iterate over the document.
(define (include-node)                ;; whether or not to include a node
  (cond                               ;; returns the first match
    ((encountered-any D) t)           ;; if you've encountered a node in a trailing union
    ((encountered-any C) nil)         ;; not if you've encountered a subtraction node
    ((null B) t)                      ;; if there are no intersect operations
    ((encountered-all B) t)           ;; if you've encountered a node in each intersect node set
    (t nil)))                         ;; not otherwise
Note that encountered-any returns nil if its parameter is an
empty list of node sets, but encountered-all is true in this
case. We can therefore express this concisely:
(define (include-node)
  (or (encountered-any D)
      (and (not (encountered-any C)) (encountered-all B))))
You can implement encountered-foo and therefore include-node
strictly in terms of node labeling, a stack and iteration.
Obviously you must also consider the input node set.
Going from this to a fully general solution is fairly
straightforward. Observe that UNIONs are subject to ALL
subsequent INTERSECT and SUBTRACT operations, but no preceding
ones, and that the entire filter is equivalent to:
UNION/ Filter
I can then say that a node is included if I have encountered
a node in ANY ( UNION operation AND NOT ANY SUBSEQUENT SUBTRACT
operation AND ALL SUBSEQUENT INTERSECT operations ).
Work done to compute this is proportional to the number of
filters but only done at a labeled node.
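To make the rule above concrete, here is a small illustrative sketch in Python (not part of the original thread; it assumes each filter step is just an operation name plus a plain set of selected node identifiers, treats the whole document as an implicit leading UNION, and ignores the final intersection with the transform's input node set; all names are invented for the example):
def include_node(node, steps):
    # steps: ordered list of (op, selected) pairs with op in {"UNION", "INTERSECT", "SUBTRACT"}.
    # A node is kept if it is picked up by some UNION step (or by the implicit leading
    # UNION of the whole document, start == -1) and then survives every SUBSEQUENT
    # INTERSECT (must be selected) and SUBTRACT (must not be selected).
    starts = [-1] + [i for i, (op, _) in enumerate(steps) if op == "UNION"]
    for start in starts:
        if start >= 0 and node not in steps[start][1]:
            continue                              # not selected by this UNION step
        survives = True
        for op, selected in steps[start + 1:]:
            if op == "INTERSECT" and node not in selected:
                survives = False
                break
            if op == "SUBTRACT" and node in selected:
                survives = False
                break
        if survives:
            return True
    return False
# Example: intersect with {1,2,3}, subtract {2}, then union {2,4} back in.
steps = [("INTERSECT", {1, 2, 3}), ("SUBTRACT", {2}), ("UNION", {2, 4})]
print([n for n in range(1, 6) if include_node(n, steps)])   # prints [1, 2, 3, 4]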
>--On Donnerstag, 6. Juni 2002 01:09 +0100 merlin <merlin@baltimore.ie>
>> Hi,
>> Quick summary of options:
>> 1. Current Spec
>> . This is intuitive (in my opinion) because it is based on a
>> linear sequence of set operations.
>> . Typical (IMHO) use cases require 2 XPath evaluations.
>> However, increasingly complex filtering requirements incur
>> increasing cost; an arbitrarily complex expression requires
>> an arbitrarily large number of simple XPath expressions.
>> However, the standard XPath filter may be more useful for
>> these anyway.
>> . Operation can, in most cases, be commingled with c14n for
>> efficiency, but:
>> . The union operator is really ugly and unintuitive.
>> 2. Christian's Spec
>> . *I* do not believe this is as intuitive; it involves labeling
>> nodes and then traversing the document, proceeding based
>> on node labels (e.g., omit-but-traverse).
>> . Typical use cases require 2 XPath evaluations. Increasingly
>> complex filtering requirements can be solved in a fixed
>> number (2/3) of increasingly complex XPath expressions.
>> . Operation can be commingled with c14n for effiency.
>> 3. Or, we can take a variant of the current spec. I won't
>> detail it horrendously, but basically:
>> . The XPath Filter 2.1 takes, as a parameter, a sequence
>> of operations, each of which is characterized as a
>> set operation (intersect, subtract, union) and an
>> XPath expression.
>> . Operation over an input node set is as follows:
>> * Construct a node set N consisting of all the
>> nodes in the input document.
>> * Iterate through each of the operations.
>> # Evaluate the XPath expression; the result is X.
>> # Expand all identified nodes to include their
>> subtrees; the result is Y.
>> # Assign N = N op Y
>> * Use the resulting node set N as a filter to select
>> which nodes from the input node set will remain in the
>> output node set, just as the XPath 1.0 filter. This is
>> tantamount to intersection with the input node set.
>> * Implementations SHOULD note that an efficient realization
>> of this transform will not compute a node set at each
>> point in this transform, but instead make a list of
>> the operations and the XPath-selected nodes, and then
>> iterate through the input document once, in document
>> order, constructing a filtering node set N based upon
>> the operations and selected nodes.
>> * Implementations SHOULD note that iterating through the
>> document and constructing a filtering node set N can
>> be efficiently commingled with the canonicalization
>> transform if canonicalization is performed immediately
>> after this transform.
>> . With this formulation, intersection and subtraction
>> are IDENTICAL to the existing spec, with the only
>> change being that you can put them in one transform
>> or many.
>> . Union is, however, much improved (in my opinion). You
>> can only use it to include nodes that would be
>> removed by a previous operation in the same transform.
>> As a result, the output node set will only include
>> nodes from the input node set.
>> . Efficiency is as with the current spec. Basically this
>> fixes union.
>> I write this a while ago; thought I'd send it rather
>> than delete it. It's probably wasteful to propose yet
>> another option.
>> Merlin
Received on Thursday, 6 June 2002 07:23:22 GMT
|
{"url":"http://lists.w3.org/Archives/Public/w3c-ietf-xmldsig/2002AprJun/0291.html","timestamp":"2014-04-19T20:31:15Z","content_type":null,"content_length":"14706","record_id":"<urn:uuid:5940f09c-ea7c-4052-93a8-6fa985b4d981>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Maximum likelihood estimation
Re: st: Maximum likelihood estimation
From Joseph Monte <hmjc66@gmail.com>
To statalist <statalist@hsphsun2.harvard.edu>
Subject Re: st: Maximum likelihood estimation
Date Wed, 13 Feb 2013 09:49:00 +0000
Dear Statalisters,
I'll try again with a little more info since I did not get any
responses. Here is the code I have so far (based on previous Statalist
posts and the book "Maximum Likelihood Estimation with Stata" by
Gould, Pitblado and Poi, 4th ed.). The paper I cited in my first email
below models the log of the variance of the regression error in
equation 2 while I believe I have modelled the log of sigma. I would
preferably like to model the log of the variance as in the paper cited
but am not sure how.
program mynormal_lf1
        version 12
        args todo b lnfj g1 g2
        tempvar mu lnsigma sigma
        mleval `mu' = `b', eq(1)
        mleval `lnsigma' = `b', eq(2)
        quietly {
                gen double `sigma' = exp(`lnsigma')
                replace `lnfj' = ln(normalden($ML_y1,`mu',`sigma'))
                if (`todo'==0) exit
                tempvar z
                tempname dmu dlnsigma
                gen double `z' = ($ML_y1-`mu')/`sigma'
                replace `g1' = `z'/`sigma'
                replace `g2' = `z'*`z'-1
        }
end
ml model lf1 mynormal_lf1 (mu: y = x1 x2 x3 x4 x5 x6 x7 x8 x9) ///
        (lnsigma: y = x1 x2 x3 x4 x5 x6 x7 x8 x9)
ml max
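A sketch of the log-variance reparameterisation asked about above (algebra only, added here, not tested code): writing lnvar = ln(sigma^2), one has sigma = exp(lnvar/2), and by the chain rule the score of the second equation becomes d lnf/d lnvar = (d lnf/d lnsigma)*(d lnsigma/d lnvar) = (z^2 - 1)/2. So the only substantive changes to the evaluator would be to compute `sigma' as the exponential of half the second linear predictor and to halve the `g2' term; the `g1' term for the mean equation is unchanged.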
On Tue, Feb 12, 2013 at 9:54 AM, Joseph Monte <hmjc66@gmail.com> wrote:
> Dear Statalisters,
> I need to do a maximum likelihood estimation very similar to that in
> equations (1) and (2) on page 439 of Lowry et al. (2010). Note that
> equation 2 has the same independent variables as equation 1. I would
> appreciate it if someone would let me know the code I need to use with
> the help of an example. I use Stata 12.
> References
> Lowry, M., Officer, M.S., Schwert, G.W., 2010. The variability of IPO
> initial returns. The Journal of Finance 65, 425-465
> Thanks,
> Joe
|
{"url":"http://www.stata.com/statalist/archive/2013-02/msg00505.html","timestamp":"2014-04-19T00:01:44Z","content_type":null,"content_length":"9543","record_id":"<urn:uuid:10551a41-6655-4d98-980e-ba5ec0fcd6a0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Encyclopedic entry: third quantum number
Azimuthal quantum number
The azimuthal quantum number (or orbital angular momentum quantum number, second quantum number), symbolized as ℓ (lower-case L), is a quantum number for an atomic orbital that determines its orbital angular momentum. The azimuthal quantum number is the second of a set of quantum numbers (the principal quantum number, following spectroscopic notation, the azimuthal quantum number, the magnetic quantum number, and the spin quantum number) which describe the unique quantum state of an electron, and is designated by the letter ℓ.
There is a set of quantum numbers associated with the energy states of the electrons of an atom. The four quantum numbers n, ℓ, m, and s specify the complete and unique quantum state of a single electron in an atom, called its wavefunction or orbital. The wavefunction of the Schrödinger wave equation reduces to three equations that, when solved, lead to the first three quantum numbers; therefore, the equations for the first three quantum numbers are all interrelated. The azimuthal quantum number arose in the solution of the polar part of the wave equation, as shown below. In addition to understanding this concept of the azimuth, one may also find it necessary to review or learn more about spherical coordinate systems, or other alternative mathematical coordinate systems besides the Cartesian system. Generally, the spherical coordinate system works best with spherical models, the cylindrical system with cylinders, and the Cartesian with general areas. The concept of the azimuth, and how it is used to explain electrons, may be more understandable after such a review.
An atomic electron's angular momentum, L, which is related to its quantum number ℓ, is described by the following equation:
$$\mathbf{L}^2 \psi = \hbar^2\, \ell(\ell+1)\, \psi,$$
where $\hbar = h/2\pi$ is the reduced Planck constant (also called Dirac's constant), $\mathbf{L}^2$ is the orbital angular momentum operator and $\psi$ is the wavefunction of the electron. While many introductory textbooks on quantum mechanics will refer to L by itself, L has no real meaning except in its use as the angular momentum operator. When referring to angular momentum, it is best to simply use the quantum number $\ell$.
The energy of any wave is the frequency multiplied by Planck's constant. This causes the wave to display particle-like packets of energy called quanta. To show each of the quantum numbers in the
quantum state, the formulae for each quantum number include Planck's reduced constant which only allows particular or discrete or quantized energy levels.
This behavior manifests itself as the "shape" of the orbital.
Electron shells have distinctive shapes denoted by letters: for example, the letters s, p, and d describe the shape of the atomic orbital.
Their wavefunctions take the form of spherical harmonics, and so are described by Legendre polynomials. The various orbitals relating to different values of l are sometimes called sub-shells, and
(mainly for historical reasons) are referred to by letters, as follows:
ℓ   Letter   Max electrons   Shape             Name
0   s        2               sphere            sharp
1   p        6               two dumbbells     principal
2   d        10              four dumbbells    diffuse
3   f        14              eight dumbbells   fundamental
4   g        18
5   h        22
6   i        26
A mnemonic for the order of the "shells" is some poor dumb fool. Another mnemonic for the order of the "shells" is silly professors dance funny. The letters after the F subshell just follow F in
alphabetical order.
Each of the different angular momentum states can take 2(2ℓ+1) electrons. This is because the third quantum number $m_\ell$ (which can be thought of loosely as the quantized projection of the angular momentum vector on the z-axis) runs from −ℓ to ℓ in integer units, and so there are 2ℓ+1 possible states. Each distinct $n \ell m_\ell$ orbital can be occupied by two electrons with opposing spins (given by the quantum number $m_s$), giving 2(2ℓ+1) electrons overall. Orbitals with higher ℓ than given in the table are perfectly permissible, but these values cover all atoms so far discovered.
For a given value of the principal quantum number, n, the possible values of l range from 0 to n−1; therefore, the n=1 shell only possesses an s subshell and can only take 2 electrons, the n=2 shell
possesses an s and a p subshell and can take 8 electrons overall, the n=3 shell possesses s, p and d subshells and has a maximum of 18 electrons, and so on (generally speaking, the maximum number of
electrons in the nth energy level is 2n^2).
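As a quick check of that closing claim (a one-line derivation added here), summing the subshell capacities gives
$$\sum_{\ell=0}^{n-1} 2(2\ell+1) = 2\left(2\cdot\frac{n(n-1)}{2} + n\right) = 2n^2.$$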
The angular momentum quantum number, l, governs the number of planar nodes going through the nucleus. A planar node can be described in an electromagnetic wave as the midpoint between crest and
trough, which has zero magnitude. In an s orbital, no nodes go through the nucleus, therefore the corresponding azimuthal quantum number l takes the value of zero. In a p orbital, one node traverses
the nucleus and therefore l has the value 1.
Depending on the value of n, the principal quantum number, there is an angular momentum quantum number l and the following series. The wavelengths listed are for a hydrogen atom:
n = 1, l = 0, Lyman series (ultraviolet)
n = 2, l = ħ, Balmer series (visible) Wavelength vary from 400 to 700 nm
n = 3, l = 2ħ, Ritz-Paschen series (short wave infrared)
n = 4, l = 3ħ, Pfund series (long wave infrared)
Addition of quantized angular momenta
Given a quantized total angular momentum $\vec{j}$ which is the sum of two individual quantized angular momenta $\vec{l}_1$ and $\vec{l}_2$,
$$\vec{j} = \vec{l}_1 + \vec{l}_2,$$
the quantum number $j$ associated with its magnitude can range from $|l_1 - l_2|$ to $l_1 + l_2$ in integer steps, where $l_1$ and $l_2$ are quantum numbers corresponding to the magnitudes of the individual angular momenta.
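For example, combining $l_1 = 2$ and $l_2 = 1$ gives
$$j \in \{\,|2-1|,\ 2,\ 2+1\,\} = \{1, 2, 3\}.$$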
Total angular momentum of an electron in the atom
Due to the spin-orbit interaction in the atom, the orbital angular momentum no longer commutes with the Hamiltonian, nor does the spin. These therefore change over time. However, the total angular momentum J does commute with the Hamiltonian and so is constant. J is defined through
$$\vec{J} = \vec{L} + \vec{S},$$
L being the orbital angular momentum and S the spin. The total angular momentum satisfies the same commutation relations as angular momentum, namely
$$[J_i, J_j] = i\hbar\, \epsilon_{ijk} J_k,$$
from which it follows that
$$[J_i, J^2] = 0,$$
where $J_i$ stands for $J_x$, $J_y$ and $J_z$.
The quantum numbers describing the system, which are constant over time, are now $j$ and $m_j$, defined through the action of J on the wavefunction $\psi$:
$$\mathbf{J}^2 \psi = \hbar^2\, j(j+1)\, \psi,$$
$$\mathbf{J}_z \psi = \hbar\, m_j\, \psi,$$
so that $j$ is related to the norm of the total angular momentum and $m_j$ to its projection along a specified axis.
As with any angular momentum in quantum mechanics, the projection of J along other axes cannot be co-defined with $J_z$, because they do not commute.
Relation between new and old quantum numbers
The quantum numbers $j$ and $m_j$, together with the parity of the quantum state, replace the three quantum numbers $\ell$, $m_\ell$ and $m_s$ (the projection of the spin along the specified axis). The former quantum numbers can be related to the latter.
Furthermore, the eigenvectors of $j$, $m_j$ and parity, which are also eigenvectors of the Hamiltonian, are linear combinations of the eigenvectors of $\ell$, $m_\ell$ and $m_s$.
List of angular momentum quantum numbers
• Intrinsic (or spin) angular momentum quantum number, or simply spin quantum number
• orbital angular momentum quantum number (the subject of this article)
• magnetic quantum number, related to the orbital momentum quantum number
The azimuthal quantum number was carried over from the Bohr model of the atom. The Bohr model was derived from spectroscopic analysis of the atom in combination with the Rutherford atomic model. The lowest quantum level was found to have an angular momentum of zero. To simplify the mathematics, orbits were considered as oscillating charges in one dimension and so described as "pendulum" orbits. In three dimensions the orbit becomes spherical without any node crossing the nucleus, similar to a jump rope that oscillates in one large circle.
|
{"url":"http://www.reference.com/browse/third+quantum+number","timestamp":"2014-04-18T11:51:02Z","content_type":null,"content_length":"92072","record_id":"<urn:uuid:9c00f866-9fad-4d92-912a-e16313d20880>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
|
If a $d \log$ form is exact, is it zero?
Let $T = \mathrm{Spec}\ \mathbb{C}[x_1^{\pm 1}, x_2^{\pm 1}, \ldots, x_n^{\pm 1}]$ be an algebraic torus and $X$ a closed subvariety. Let $\eta$ be a differential form on $T$ of the form $$\sum_I a_I
\cdot \bigwedge_{i \in I} d \log x_i $$ where $I$ runs over subsets of $\{ 1,2, \ldots, n\}$ and $a_I$ are various constants. If the cohomology class of $\eta$ restricts to $0$ on $X$, is $\eta$
identically zero?
Cases where this is true: $X$ a subtorus (easy), $X$ a curve (easy), $X$ cut out by linear equations in the $x_i$ (Orlik-Solomon).
ag.algebraic-geometry derham-cohomology
This is a sort of Hodge theorem for subvarieties of tori, I guess? That each cohomology class should have but one representative of this form? – Allen Knutson May 9 '13 at 1:42
If we add extra variables, $X$ can always be seen as the intersection of a subtorus and a subvariety cut out by linear equations. If it is a complete intersection when so described, I think we can
wedge with a form that is perpendicular to the subtorus to reduce to the linear equations case, but that might be nonsense. – Will Sawin May 9 '13 at 4:53
2 Answers
To expand on my comment: Suppose $X$ is a complete intersection of a subtorus and a linear subspace. View the form as a form on the linear subspace. If its restriction to a subvariety like
the $X$ is cohomologous to $0$, then its cup product with the class of $X$ is cohomologous to $0$. But the class of $X$, which is the pullback of the class of the subtorus, can be represented
by a form of this type. One simply pulls back the "volume form" ($I=\{1,\dots,n\}$) from the quotient torus.
So the original form wedge this new form is cohomologous to $0$ on the linear subspace, so by Orlik-Solomon it is identically zero. Now looking locally at a point of the intersection, the new
form is nonzero and completely transverse to $X$, so if wedging with it produces the zero form, then restricting to $X$ will as well.
If $X$ is any complete intersection, then by adding variables we can make it a complete intersection of a torus and a linear subspace. Simply replace each monomial that occurs in the
up vote polynomial equations of $X$ with a new variable, making them linear equations, and then set the new variables equal to the old monomials, creating new toric equations. Clearly this preserves
4 down the property of being a complete intersection.
EDIT: We use the duality between regular cohomology and compactly supported cohomology. Let $a$ be the original form, let $b$ be the new form, and let $c$ be any compactly supported form. By
non-compact Poincare duality, showing that $a \wedge b \wedge c$ integrates to $0$ for all $c$ of the appropriate degree is sufficient to show $a\wedge b$ is $0$. This is the same as showing
$(c \wedge a) \wedge b$ integrates to $0$. But this is another example of integrating a compact form against a non-compact form, so it's another Poincare duality. But we can also express
Poincare duality as integrating non-compact forms over compact submanifolds, or non-compact forms over compact submanifolds. Clearly $b$ corresponds to the class of $X$, by pullback from the
torus , so we need $c \wedge a$ to be cohomologous to $0$ when restricted to $X$, which happens as long as $c$ is cohomologous to $0$ when restricted to $X$.
Wait, I'm confused. When you say that $\omega|_X$ is cohomologous to $0$ if and only if $\omega \wedge \eta$ is cohomologous to $0$ for a properly chosen $\eta$, you seem to be using some
sort of Poincare duality. But this is not a compact setting... – David Speyer May 9 '13 at 17:02
I think I can do it using duality between compact and regular cohomology. – Will Sawin May 9 '13 at 18:44
Please add the details when you figure them out! – David Speyer May 10 '13 at 11:56
I think this works. We prove a slightly stronger statement by induction on $\dim X$: Let $X$ be a quasi-projective variety over $\mathbb{C}$. Let $x_1$, $x_2$, ..., $x_n$ be units in $H^0
(X, \mathcal{O})$. By "$d \log x_i$" I mean the class in $H^1(X)$ pulled back from the generator of $H^1(\mathbb{G}_m)$ by the map $x_i : X \to \mathbb{G}_m$. (I say this to address the
possibility that $X$ isn't smooth, since I'm not sure whether Kahler differentials give cohomology classes in general.) Then, as before, I claim that:
If a polynomial $\eta$ in the $d \log x_i$ is exact, then it is $0$.
The base case, $\dim X=0$, is trivial.
Reduction We may (and do) assume that $X$ is smooth.
Proof Let $\tilde{X} \to X$ be a resolution of singularities. Since the class of $\eta$ is $0$ in $H^{\ast}(X)$, it pulls back to $0$ in $H^{\ast}(\tilde{X})$. So $\eta$ pulls back to $0$
on $\tilde{X}$, and is thus $0$. $\square$
Now, choose a simple normal crossing compactification $\bar{X}$ of $X$. Let $D_1$, $D_2$, ..., $D_m$ be the components of $D$. Our next goal is to prove
Claim For any component $D_1$ of $D$, the form $\eta$ has no pole on $D_1$.
Reduction We may assume that $x_2$, $x_3$, ...., $x_n$ don't have poles or zeroes along $D_1$, and $x_1$ vanishes on $D_1$.
Proof Let $d_i$ be the order to which $x_i$ vanishes on $D_1$. Let $d = GCD(d_1, d_2, \dots, d_m)$. Then by applying a monomial transformation to the $x$'s, we may assume that $x_1$
vanishes to order $d$ on $D_1$. $\square$
Let $Y = D_1 \setminus \bigcup_{i \geq 2} D_1 \cap D_i$ and let $Z$ be the union of $X$ and $Y$ inside $\bar{X}$. So $Y$ is a smooth hypersurface in $Z$. We claim that the $x_i$ extend to
holomorphic functions on $Z$, and that $x_2$, ..., $x_n$ extend to nonvanishing functions. Proof: Suppose $x_i$ does not extend to $Z$. Then $x_i$ has a pole along some hypersurface in
$Z$. By the previous Reduction, that hypersurface is not $Y$, so it must meet $X$. But $x_i$ is well defined on $X$. For $i \geq 2$, this argument also shows that $x_i^{-1}$ extends.
Proof of Claim Let $\omega = \mathrm{Res}_{Y} \eta$. The form $\omega$ is a $d \log$ form on $Y$, in the ring generated by $d \log x_2$, $d \log x_3$, ..., $d \log x_n$. The residue map is
a well defined map $H^k(X) \to H^{k-1}(Y)$. More specifically, let $U$ be a tubular neighborhood of $Y$, so $\partial U \subset X$ is a circle bundle over $Y$. Then $\mathrm{Res}$ is the
composition of restriction $H^k(X) \to H^k(\partial U)$ and the Gysin map $H^k(\partial U) \to H^{k-1}(Y)$.
So $\omega$ is exact on $Y$. By induction, $\omega=0$. We have now established that the residue of $\eta$ along $Y$ is $0$. But $\eta$ clearly has at most a first order pole on $Y$, so it
has no pole at all. $\square$.
Proof of theorem We have shown that $\eta$, as a form on $\bar{X}$, does not blow up on any of $D_i$. Since $\bar{X}$ is smooth, this shows that $\eta$ extends to $\bar{X}$. But, by Hodge
theory, on a smooth projective variety, any exact holomorphic form is $0$.
Alternate proof As above, reduce to $X$ smooth; let $\overline{X}$ be a normal crossing compactification; let $D = \overline{X} \setminus X$. Let $\eta$ be a $k$-form. The condition that $\
eta$ is generated by $d \log$ forms implies that $\eta$ is an element of $H^0(\overline{X}, \Omega^k(\log D))$. The condition that $\eta$ is exact says that the image of $\eta$ in $H^k(X, \
mathbb{C})$ is $0$.
Conveniently, there is a spectral sequence $H^q(\overline{X}, \Omega^p(\log D)) \Rightarrow H^{p+q}(X, \mathbb{C})$ which degenerates at $E_1$. (I'm looking at Voisin's book, volume 1,
Theorem 8.35, she cites Deligne Theorie de Hodge II.) So $H^0(\overline{X}, \Omega^k(\log D))$ injects into $H^k(X, \mathbb{C})$, which was the desired claim.
Thomas Lam and I have finally started writing the paper where we needed this lemma; this is probably the proof we will use.
|
{"url":"http://mathoverflow.net/questions/130129/if-a-d-log-form-is-exact-is-it-zero","timestamp":"2014-04-16T08:03:46Z","content_type":null,"content_length":"66466","record_id":"<urn:uuid:dfc62623-83a2-48d2-b80d-72e19730040c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebraic and Geometric Topology 4 (2004), paper no. 17, pages 311-332.
Shadow world evaluation of the Yang-Mills measure
Charles Frohman, Joanna Kania-Bartoszynska
Abstract. A new state-sum formula for the evaluation of the Yang-Mills measure in the Kauffman bracket skein algebra of a closed surface is derived. The formula extends the Kauffman bracket to
diagrams that lie in surfaces other than the plane. It also extends Turaev's shadow world invariant of links in a circle bundle over a surface away from roots of unity. The limiting behavior of the
Yang-Mills measure when the complex parameter approaches $-1$ is studied. The formula is applied to compute integrals of simple closed curves over the character variety of the surface against
Goldman's symplectic measure.
Keywords. Yang-Mills measure, shadows, links, skeins, SU(2)-characters of a surface
AMS subject classification. Primary: 57M27. Secondary: 57R56, 81T13.
DOI: 10.2140/agt.2004.4.311
E-print: arXiv:math.GT/0205193
Submitted: 17 April 2003. (Revised: 26 March 2004.) Accepted: 28 April 2004. Published: 21 May 2004.
Charles Frohman, Joanna Kania-Bartoszynska
Department of Mathematics, University of Iowa, Iowa City, IA 52242, USA
Department of Mathematics, Boise State University, Boise, ID 83725, USA
Email: frohman@math.uiowa.edu, kania@math.boisestate.edu
URL: http://www.math.uiowa.edu/~frohman, http://math.boisestate.edu/~kania
|
{"url":"http://www.emis.de/journals/UW/agt/AGTVol4/agt-4-17.abs.html","timestamp":"2014-04-18T10:42:53Z","content_type":null,"content_length":"3522","record_id":"<urn:uuid:7f7a5028-cf31-4faf-b1f5-0b138dadc354>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: September 2012 [00154]
Re: Are some equations unsolvable?
• To: mathgroup at smc.vnet.net
• Subject: [mg128070] Re: Are some equations unsolvable?
• From: "Dr. Wolfgang Hintze" <weh at snafu.de>
• Date: Thu, 13 Sep 2012 03:39:24 -0400 (EDT)
• References: <k2pc54$99t$1@smc.vnet.net>
On 12 Sep., 09:05, Sergio Sergio <zerg... at gmail.com> wrote:
> Hi,
> This is what I have:
> f = (1/(sigma*Sqrt[2 Pi]))*Sqrt[(Pi/(-1/(2*sigma^2)))]*
> Exp[((m/sigma^2)^2)/4*(-1/2*sigma^2) + (m^2/(2*sigma^2))]
> Solve[f == 216, sigma]
> And I get this message: "This system cannot be solved with the methods available to Solve"
> Is it because there is no way to isolate sigma? Or am I doing something wrong?
> Thanks
As a first step try to tell Mathematica something about sigma, for
instance that sigma >0:
In[37]:= fs = Simplify[f, sigma > 0]
Out[37]= I*E^((3*m^2)/(8*sigma^2))
Then Mathematica can solve your equation easily:
In[38]:= Solve[216 == fs, sigma]
During evaluation of In[38]:= Solve::ifun:Inverse functions are being
used by Solve, so some solutions may not be found; use Reduce for
complete solution information. >>
Out[38]= {
{sigma -> -(((-1)^(1/4)*m)/(2*Sqrt[(1/3)*(Pi + 2*I*Log[216])]))},
{sigma -> ((-1)^(1/4)*m)/(2*Sqrt[(1/3)*(Pi + 2*I*Log[216])])}}
By the way your expression looks very similar to a normal distribution
but with strange signs. Are you sure that this is the expression you
had in mind?
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2012/Sep/msg00154.html","timestamp":"2014-04-18T08:20:42Z","content_type":null,"content_length":"26584","record_id":"<urn:uuid:184be30c-09ee-4cc8-8aeb-75d18f533ac2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Demand analyser in GHC
This page explains the basics of the so-called demand analysis in GHC, comprising strictness and absence analyses. Meanings of demand signatures are explained and examples are provided. Also, components
of the compiler possibly affected by the results of the demand analysis are listed with explanations provided.
Demand signatures
Let us compile the following program with -O2 -ddump-stranal flags:
f c p = case p
of (a, b) -> if c
then (a, True)
else (True, False)
The resulting demand signature for function f will be the following one:
Str=DmdType <S,U><S,U(UA)>m
This should be read as "f puts strict demands on both its arguments (hence, S); f might use its first and second arguments, but in the second argument (which is a product), the second component is
ignored". The suffix m in the demand signature indicates that the function returns CPR, a constructed product result (for more information on CPR see the JFP paper Constructed Product Result
Analysis for Haskell).
The current implementation of demand analysis in GHC annotates all binders with the demands placed on them in the context of their use. For functions, it is assumed that the result of the
function is used strictly. The analysis infers strictness and usage information separately, as two components of a Cartesian product domain. The same analysis also infers the CPR and
bottoming properties of functions, which can be read from the suffix of the signature. Demand signatures of inner definitions may also include demand environments that indicate the demands a
closure puts on its free variables once it is used strictly; e.g. the signature
Str=DmdType <L,U> {skY-><S,U>}
indicates that the function has one parameter, which is used lazily (hence <L,U>), however, when its result is used strictly, the free variable skY in its body is also used strictly.
Demand descriptions
Strictness demands
• B -- a hyperstrict demand. The expression e puts this demand on its argument x if every evaluation of e is guaranteed to diverge, regardless of the value of the argument. We call this demand
hyperstrict because it is safe to evaluate x to arbitrary depth before evaluating e. This demand is polymorphic with respect to function calls and can be seen as B = C(B) = C(C(B)) = ... for an
arbitrary depth.
• L -- a lazy demand. If an expression e places demand L on a variable x, we can deduce nothing about how e uses x. L is the completely uninformative demand, the top element of the lattice.
• S -- a head-strict demand. If e places demand S on x then e evaluates x to at least head-normal form; that is, to the outermost constructor of x. This demand is typically placed by the seq
function on its first argument. The demand S(L ... L) places a lazy demand on all the components, and so is equivalent to S; hence the identity S = S(L ... L). Another identity is for functions,
which states that S = C(L). Indeed, if a function is certainly called, it is evaluated at lest up to the head normal form, i.e., strictly. However, its result may be used lazily.
• S(s1 ... sn) -- a structured strictness demand on a product. It is at least head-strict, and perhaps more.
• C(s) -- a call-demand, when placed on a binder x, indicates that the value is a function, which is always called and its result is used according to the demand s.
Absence/usage demands
• A -- when placed on a binder x it means that x is definitely unused.
• U -- the value is used on some execution path. This demand is a top of usage domain.
• H -- a head-used demand. Indicates that a product value is used itself, however its components are certainly ignored. This demand is typically placed by the seq function on its first argument.
This demand is polymorphic with respect to products and functions. For a product, the head-used demand is expanded as U(A, ..., A) and for functions it can be read as C(A), as the function is
called (i.e., evaluated to at least a head-normal form), but its result is ignored.
• U(u1 ... un) -- a structured usage demand on a product. It is at least head-used, and perhaps more.
• C(u) -- a call-demand for usage information. When put on a binder x, indicates that x in all executions paths where x is used, it is applied to some argument, and the result of the application is
used with a demand u.
Additional information (demand signature suffix)
• b -- the function is a bottoming one, i.e., some decoration of error and friends.
Worker-Wrapper split
Demand analysis in GHC drives the worker-wrapper transformation, which exposes specialised calling conventions to the rest of the compiler. In particular, the worker-wrapper transformation implements
the unboxing optimisation.
The worker-wrapper transformation splits each function f into a wrapper, with the ordinary calling convention, and a worker, with a specialised calling convention. The wrapper serves as an
impedance-matcher to the worker; it simply calls the worker using the specialised calling convention. The transformation can be expressed directly in GHC's intermediate language. Suppose that f is
defined thus:
f :: (Int,Int) -> Int
f p = <rhs>
and that we know that f is strict in its argument (the pair, that is), and uses its components. What worker-wrapper split shall we make? Here is one possibility:
f :: (Int,Int) -> Int
f p = case p of
(a,b) -> $wf a b
$wf :: Int -> Int -> Int
$wf a b = let p = (a,b) in <rhs>
Now the wrapper, f, can be inlined at every call site, so that the caller evaluates p, passing only the components to the worker $wf, thereby implementing the unboxing transformation.
But what if f did not use a, or b? Then it would be silly to pass them to the worker $wf. Hence the need for absence analysis. Suppose, then, that we know that b is not needed. Then we can transform
f :: (Int,Int) -> Int
f p = case p of (a,b) -> $wf a
$wf :: Int -> Int
$wf a = let p = (a,error "abs") in <rhs>
Since b is not needed, we can avoid passing it from the wrapper to the worker; while in the worker, we can use error "abs" instead of b.
In short, the worker-wrapper transformation allows the knowledge gained from strictness and absence analysis to be exposed to the rest of the compiler simply by performing a local transformation on
the function definition. Then ordinary inlining and case elimination will do the rest, transformations the compiler does anyway.
Relevant compiler parts
Multiple parts of GHC are sensitive to changes in the nature of demand signatures and results of the demand analysis, which might cause unexpected errors when hacking into demands. This list
enumerates the parts of the compiler that are sensitive to demand, with brief summaries of how so.
|
{"url":"https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Demand","timestamp":"2014-04-16T22:36:21Z","content_type":null,"content_length":"19950","record_id":"<urn:uuid:29aeb258-f693-4397-b028-456571dee692>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Priority Queue
NB. Do not confuse the priority queue with the (non-priority) queue.
The operations on a Priority Queue are:
1. create an empty Priority Queue,
2. insert an element having a certain priority to the Priority Queue,
3. remove that element having the highest priority.
The children, if they exist, of the element at `i' are at:
• left(i) = 2*i
• right(i) = 2*i + 1
a[i..j], where i>=1, is a heap iff every element is no smaller than its children, if any. Note that if a[1..j] is a heap, a[1] must hold the largest value in a[ ].
Note, there is a similar definition for a heap with the smallest value at the top. It is also possible to use a[0..j-1] as the heap, but the definitions of left and right must be changed [-Exercise].
If a[1..i-1] is already a heap, a new element could be placed at a[i] except that it might violate the heap property, i.e. it might be greater than its parent, assuming that i>=2. The parent is at
position floor(i/2). If the parent, p, is smaller than the new element, then p can be moved down to a[i] and the new element placed at p's old position. The new element is larger than p (which used
to be larger than its children). The only possible problem is that the new element might still be larger than its new parent, if that exists. No matter, work up the "tree" moving small parents down,
until either the top, a[1], is reached or until a parent no smaller than the new element is found and the new element can be placed.
function upHeap( child )
// PRE:  a[1..child-1] is a Heap
// POST: a[1..child] is a Heap
{ var newElt = a[child];
  var parent = Math.floor(child/2);
  while( parent >= 1 )                  // child has a parent
  { // INV: a[child .. ] is a Heap
    if( a[parent] < newElt )
    { a[child] = a[parent];             // move parent down
      child = parent;
      parent = Math.floor(child/2);
    }
    else break;
  }
  // ASSERT: child == 1 || newElt <= a[parent]
  a[child] = newElt;
}
// to insert a new value v:
N++; a[N] = v; upHeap(N);
Remove Highest Priority Element
The highest priority element is a[1], and can be returned, but this leaves a hole at position 1. The hole is filled by moving a[n] to a[1] and decreasing n. This may violate the heap property, in
fact it is likely to do so because elements near the "bottom" tend to be of low priority. The solution is to move the element down the heap until it is no smaller than its children, if any.
function downHeap( parent )
// PRE:  a[parent+1..N] is a Heap, and parent >= 1
// POST: a[parent..N] is a Heap
{ var newElt = a[parent];
  var child = 2*parent;                 // left(parent)
  while( child <= N )                   // parent has a child
  { // INV: a[1 .. parent] is a Heap
    if( child < N )                     // has 2 children
      if( a[child+1] > a[child] )
        child++;                        // right child is bigger
    if( newElt < a[child] )
    { a[parent] = a[child];
      parent = child;
      child = 2*parent;
    }
    else break;
  }
  // ASSERT: child > N || newElt >= a[child]
  a[parent] = newElt;
}
// to remove the highest priority element:
highest = a[1];
a[1] = a[N]; N--;
downHeap(1);
The height of a heap of n elements is ~log2(n), and the maximum time for insertion and for removing the highest priority item is O(log(n)).
The heap operations take O(1)-space.
• The Heap data-structure is central to Heap Sort.
• Also see the (non-priority) [Queue] ADT. Do not confuse a queue with a priority queue; the former can be thought of as a special case of the latter in which priority is time of arrival, but a
queue is far simpler to implement than a priority queue.
|
{"url":"http://www.csse.monash.edu.au/~lloyd/tildeAlgDS/Priority-Q/","timestamp":"2014-04-21T02:14:58Z","content_type":null,"content_length":"23239","record_id":"<urn:uuid:3eee2b5d-7579-4ca3-81a5-5d6f7d7ac349>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
|
True or False? Max and Moritz Are Twin Astronauts ... | Chegg.com
True or False?
Max and Moritz are twin astronauts who go to different planets. When Max arrives at planet A, he finds the ground potential to be 1,000,000 V. Moritz ends up at planet B, which is like the Earth with a ground potential of 0 V. On A, Max touches a gold bracelet insulated from the ground and at a potential of 1,000,001 V. On B, Moritz touches a gold bracelet insulated from the ground and at a potential of 200 V. Clearly Moritz suffers a real jolt, while Max feels nothing.
If this statement is true, prove it. If false, show why it is false.
|
{"url":"http://www.chegg.com/homework-help/questions-and-answers/true-orfalse-max-moritz-twinastronauts-go-different-planets-max-arrivesat-planet-finds-the-q470360","timestamp":"2014-04-16T18:06:40Z","content_type":null,"content_length":"22268","record_id":"<urn:uuid:3e8d53f1-a95e-4e27-8f50-08011d0ca4cb>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: Trying to get a golden spiral from an overhead view of a helix
Replies: 3 Last Post: Apr 22, 2008 9:37 AM
Re: Trying to get a golden spiral from an overhead view of a helix
Posted: Feb 19, 2005 8:26 AM
Thank you both for your thoughts. The reason I thought that an
overhead perspective view of a helix would resemble a logarithmic
spiral is that I've seen side-by-side comparisons, for example in
National Geographic, of a spiral galaxy and then the DNA double helix
from the view I'm describing. The curves appear to be very similar,
and I've read that the arms of a spiral galaxy are basically
logarithmic spirals. The drawing I have in mind is sort of inspired by
that fact. The idea is to have a ton of stylized stars spiraling at
the viewer. In 3d, they'd be following a helical path, and in 2d, they
would at least come close to forming a golden spiral. That's part of
the drawing, anyway.
However, after gleaning what I could from Google searches for "spiral"
and "helix," it seems to me that a top-down perspective view of a
helix actually gives you a different kind of spiral: a hyperbolic
spiral. I'm not positive, and if someone could confirm or deny that it
would be a huge help. But the thing about logarithmic spirals is that
they continue to circle the origin even as the loops get infinitely
farther out. The rules of perspective make this impossible. I'm not
sure I can explain why, but I'm pretty sure that the rules of
perspective dictate that as the 3d helix comes closer and closer to
the viewer, the 2d representation of it comes closer and closer to a
ray shooting straight out from the origin. I'm not so hot on
perspective either, but I know that weird things start happening when
you try drawing stuff that would be outside of the field of vision in
real life, and I believe that's what the result would be. And a
hyperbolic spiral seems to do that. It's also what I think I saw when
I looked down the axis of a helix in my 3d modeling program. Here's
MathWorld's entry, by the way:
So I may have to work with that shape, though it's kind of a shame,
because I think the golden spiral is especially beautiful. On the
other hand, I think that fact about a logarithmic spiral being formed
when you look down the central axis of a conical helix could prove
helpful. I believe that truly happens only in an orthographic
projection, not in perspective, but the perspective drawing would
resemble the orthographic projection close to the center. Maybe I can
work with a conical helix instead of a cylindrical one. I also think
it will be helpful as I continue to experiment to consider only
corresponding (similar? not sure what the word would be) points on
successive loops, and the intervals between them.
Thanks again to both of you for helping, and sorry for the long post,
but any other insights would be greatly appreciated.
On Fri, 18 Feb 2005 14:47:51 -0500, Ed Wall wrote:
>Interesting question. Perhaps you might say something in a bit more
>detail on how other logarithmic spirals are approximated by helix
>projections. Enneper, I think, showed that the projection of a helix
>on a cone was a logarithmic spiral, but I'm not sure of the cite.
>Ed Wall
>>Hi, I wonder if someone could help me with a project I'm working on.
>>It's a perspective drawing with what I hope are interesting
>>underpinnings, but which are a bit beyond me, unfortunately. Here's
>>the situation. The viewer or camera or whatever is looking right down
>>the central axis of a helix. It's my understanding that if you're
>>looking right down the central axis, a 2d representation of the helix
>>would approximate a logarithmic spiral. If it's a perspective and not
>>an orthographic drawing, that is. I'm going for one logarithmic spiral
>>in particular, what I think is called a golden spiral. The one shown
>>Now, it seems to me that the two properties of the helix that I can
>>adjust to make it appear that way are its radius and the distance
>>between its loops or coils. (Sorry, part of the problem is that I
>>don't really know the vocabulary.) My question is, can anyone help me
>>figure out what those two attributes of the helix should be, relative
>>to each other, for the view I'm describing to come as close as
>>possible to a golden spiral? Would it be the golden ratio or
>>Finally, the entry for logarithmic spiral on MathWorld...
>>...has something about approximating a logarithmic spiral by
>>with equally spaced rays and drawing a perpendicular from one to the
>>next. That would seem to relate, but I just can't get my head around
>>it. Thanks so much for any help you can give me!
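The hyperbolic-spiral guess in the reply above can be checked numerically. The sketch below (my own variable names and example values, not taken from the thread) projects a cylindrical helix through an ideal pin-hole camera looking straight down the axis; the projected radius times the winding parameter comes out constant, which is the defining property of a hyperbolic spiral.

```python
import numpy as np

R, c, f = 1.0, 0.5, 1.0                        # helix radius, pitch factor, focal length
t = np.linspace(2 * np.pi, 20 * np.pi, 500)    # parameter values in front of the camera

x, y, z = R * np.cos(t), R * np.sin(t), c * t  # cylindrical helix along the z-axis
X, Y = f * x / z, f * y / z                    # pin-hole perspective projection

r = np.hypot(X, Y)                             # projected distance from the origin
print(np.allclose(r * t, f * R / c))           # True: r*t is constant -> hyperbolic spiral
```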
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=1119261&messageID=3672936","timestamp":"2014-04-16T13:25:41Z","content_type":null,"content_length":"25385","record_id":"<urn:uuid:f2d1baea-e2b1-40eb-b90b-478fad917db9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Residue theorem for integral of real sinusoidal function
June 8th 2007, 08:34 AM #1
Jun 2007
Residue theorem for integral of real sinusoidal function
I've seen a few examples but don't understand how the contour is chosen.
We use the substitution z=e^(i*theta)
If the integral is over -pi to pi, or over 0 to 2*pi, then the contour is the unit circle centred on the origin.
My questions:
1.) Why?
2.) What would the contour be if we were integrating over 0 to pi?
My attempts at answers:
1.) Is it because the substitution we use is the usual parameterization of the unit circle? (I read that somewhere but to be honest I don't really understand what it means). Or is it a full circle
because we are integrating over a full circle (0 to 2*pi) in the original limits of integration? In which case why is it a unit circle, why couldn't its radius be larger or smaller? How do we
choose the radius of the circle contour?
2.) Would it be a unit semi-circle centred on the origin? If so, which 2 quadrants of the complex plane would it cover? Or would it still be a unit circle centred on the origin?
Also I think the Cauchy theorem tells us that any integral over a closed contour in the complex plane is equal to zero if the function is regular on or within that contour. If there are
singularities (poles) within the contour then we apply the residue theorem and find the value of the integral equal to 2*pi*i * [sum of residues of the poles]. Right?
All very well, but some integrals have several poles, and depending on which contour you choose you may or may not enclose these singularities. You have to choose a contour that is regular at all
points for that function. But in all the examples I looked at they chose the unit circle centred on the origin, and in one of them this meant the pole at z = 2 wasn't enclosed, but they could
have chosen a circle of radius 3 or 4 or even infinity and the function would still be regular at all points on the contour. I'm confused.
Any answers/help/hints/tips would be much appreciated. Thanks.
$z = e^{i \theta} = \cos \theta + i \sin \theta$
Now if $0\leq \theta \leq 2\pi$
That means the curve follows the path:
$x = \cos \theta \mbox{ and } y=\sin \theta$
Which is indeed a unit circle.
^^ Thanks again ThePerfetHacker.
So if we were integrating from 0 to pi now the curve would follow an arc from (1,0) to (0,1) to (-1,0). That isn't a closed contour. Don't we need a closed contour to apply the residue theorem?
What would be the correct closed contour for such an integral over 0 to pi, and how would we choose it?
Last edited by Madmax; June 8th 2007 at 09:09 AM.
^^ Thanks again ThePerfetHacker.
So if we were integrating from 0 to pi now the curve would follow an arc from (1,0) to (0,1) to (-1,0). That isn't a closed contour. Don't we need a closed contour to apply the residue theorem?
What would be the correct closed contour for such an integral over 0 to pi, and how would we choose it?
You can connect the line segment from (-1,0) to (0,1). If you do that you will form a piecewise smooth simple closed curve and the residue theorem would apply.
Look at the picture below.
Hmm thank you.
I suppose that we can close it like that because a) as you said the contour is piecewise smooth and simple, and b) because the range from (-1,0) to (1,0) isn't part of the z space...
On the other hand I've seen simple examples, which I'm pretty sure can be integrated with this method from 0 to pi, where the poles reside on the real axis within -1 <= x <= 1. This would
mean that the contour shown above would cut through those singularities. Isn't that a problem? But then again is it true that you can simply divide the value of the residue by 2 if the contour
cuts through a pole?
On the other hand I've seen simple examples, which I'm pretty sure can be integrated with this method from 0 to pi, where the poles reside on the real axis within -1 <= x <= 1. This would
mean that the contour shown above would cut through those singularities. Isn't that a problem? But then again is it true that you can simply divide the value of the residue by 2 if the contour
cuts through a pole?
What you do is circle around the singularity on the real line by either keeping the singularity within your contour or by excluding it. There are various and sundry ways of doing this.
Take, for example a function f(x) where there is a pole at x = 1 and we are integrating f(x) over the whole real line.
We may use a semicircle in the upper half plane where we integrate our contour thusly:
$\lim_{\epsilon \to 0} \int_{-\infty}^{1 - \epsilon}dz f(z) + \lim_{\epsilon \to 0} \int_{\pi}^0 d \theta \, i \epsilon e^{i \theta} f(1 + \epsilon e^{i \theta}) + \lim_{\epsilon \to 0} \int_{1 + \epsilon}^{\infty} dz f(z)$ + (semicircle at R = $\infty$ from $\theta = 0$ to $\theta = \pi$)
(This contour excludes our singularity. If we wanted to include it we'd integrate the second term from $\pi$ to $2 \pi$.)
The sum of the first and third terms is called the "principal value" of the integral and is usually written as:
$P \int_{-\infty}^{\infty} dz f(z) = \lim_{\epsilon \to 0} \int_{-\infty}^{1 - \epsilon}dz f(z) + \lim_{\epsilon \to 0} \int_{1 + \epsilon}^{\infty} dz f(z)$
Hmm thanks again topsquark, but to be honest I don't understand :/
How do we evaluate $\int^{\pi}_{0}f(\cos x, \sin x)dx$ using the residue theorem in complex analysis?
If we were integrating from 0 to 2*pi we could use the unit circle centred on the origin as the contour, because z follows that contour, and then apply the residue theorem to calculate 2*pi*i*
[sum of enclosed poles] = answer.
I'm sure it can be done easily for 0 to pi because I read from http://www.math.gatech.edu/~cain/win...supplement.pdf that "Our method is easily adaptable for integrals over a different range, for example
between 0 and pi or between ±pi." Unfortunately he doesn't give an example.
So how do we adapt the simple z = e^(i x) substitution method integrating from 0 to 2*pi, i.e. over the unit circle, to integrate from 0 to pi?
I'm just looking for a simple example of $\int^{\pi}_{0}f(\cos x, \sin x)dx$ using the residue theorem in complex analysis, please.
Let me show you an example, perhaps that will suffice for you.
(For posterity's sake, this example is taken from Arfken, "Mathematical Methods for Physicists, 3rd ed." pg 408)
$I = \int_0^{\infty} \frac{sin(x)}{x} dx$
We may take this integral to be half of the imaginary part of
$I_z = P \int_{-\infty}^{\infty} \frac{e^{iz}}{z} dz$ <-- P means the "principal value" here.
There is a simple pole at z = 0 here.
I'll have to describe the contour since I'm lousy at drawing. I'm going to take the integral over the line from negative infinity to -r (where r is a small number) and pick it back up from r to
positive infinity. Connecting these two rays is the contour C1, which is a semicircle of radius r in the upper half plane. To close the contour I'm going to use a semicircle of radius R (where R
is very large) in the upper half plane. Naturally we'll be going around the closed contour in a counterclockwise fashion.
We choose this contour to avoid the pole at z = 0, to include the whole real axis (excepting a vanishingly small contribution near z = 0), and to yield a vanishingly small integrand for the C2
contour as $R \to \infty$.
$\oint \frac{e^{iz}}{z} dz = \int_{-R}^{-r} \frac{e^{ix}}{x} dx + \int_{C_1} \frac{e^{iz}}{z} dz + \int_r^R \frac{e^{ix}}{x} dx + \int_{C_2} \frac{e^{iz}}{z} dz = 0$
(Since there are no poles enclosed by the contour the sum is 0 according to the residue theorem.)
By Jordan's Lemma
$\int_{C_2} \frac{e^{iz}}{z} dz = 0$
$\oint \frac{e^{iz}}{z} dz = \int_{C_1} \frac{e^{iz}}{z} dz + P \int_{-\infty}^{\infty} \frac{e^{iz}}{z} dz = 0$
$P \int_{-\infty}^{\infty} \frac{e^{iz}}{z} dz = -\int_{C_1} \frac{e^{iz}}{z} dz$
Let's do the C1 integral.
$z = re^{i \theta}$
as we go from $\theta = \pi$ to $\theta = 0$.
$dz = rie^{i \theta} d \theta$
$\int_{C_1} \frac{e^{iz}}{z} dz = \lim_{r \to 0} \int_{\pi}^0 \frac{e^{ire^{i \theta}}}{re^{i \theta}} \, rie^{i \theta} \, d \theta = \lim_{r \to 0} \, i \int_{\pi}^0 e^{ire^{i \theta}} d \theta$
Since r is small in the limit, we may expand the exponential function as a Taylor series about r = 0:
$e^{ire^{i \theta}} \approx 1 + ire^{i \theta}$
$\lim_{r \to 0} \, i \int_{\pi}^0 e^{ire^{i \theta}} d \theta \to \lim_{r \to 0} \, i \int_{\pi}^0 \left( 1 + ire^{i \theta} \right) d \theta$
The second term in the integrand is proportional to r, so it vanishes in the limit. Thus:
$\int_{C_1} \frac{e^{iz}}{z} dz = i \int_{\pi}^0 d \theta = -i \pi$
$P \int_{-\infty}^{\infty} \frac{e^{iz}}{z} dz = i \pi$
Now, our original integral is half of the imaginary part of this expression, so
$I = \int_0^{\infty} \frac{sin(x)}{x} dx = \frac{\pi}{2}$
Hmm thanks again topsquark, but to be honest I don't understand :/
How do we evaluate $\int^{\pi}_{0}f(\cos x, \sin x)dx$ using the residue theorem in complex analysis?
If we were integrating from 0 to 2*pi we could use the unit circle centred on the origin as the contour, because z follows that contour, and then apply the residue theorem to calculate 2*pi*i*
[sum of enclosed poles] = answer.
I'm sure it can be done easily for 0 to pi because I read from http://www.math.gatech.edu/~cain/winter99/supplement.pdf that "Our method is easily adaptable for integrals over a different range,
for example
between 0 and pi or between ±pi." Unfortunately he doesn't give an example.
So how do we adapt the simple z = e^(i x) substitution method integrating from 0 to 2*pi, i.e. over the unit circle, to integrate from 0 to pi?
I'm just looking for a simple example of $\int^{\pi}_{0}f(\cos x, \sin x)dx$ using the residue theorem in complex analysis, please.
To answer your question specifically, I would include an integral over the real line, excluding poles, to close the contour, which is similar in principle to the integral I just posted.
Thank you very much for your time, topsquark. I have that very good Arfken and Weber book you mentioned open in front of me, I feel so thick.
I've been thinking about the problem and I'm trying the substitution z = e^(i2x) so that the contour is closed by default, (as the unit circle around the origin).
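For the 0-to-2*pi case discussed throughout this thread, the z = e^(i*theta) substitution can be checked mechanically with a computer algebra system. The sketch below is my own example (using SymPy, with an arbitrary integrand): it evaluates $\int_0^{2\pi} \frac{d\theta}{2+\cos\theta}$ by summing residues inside the unit circle and compares with direct integration. When the integrand depends only on cos(theta), so that it is even and 2*pi-periodic, the corresponding 0-to-pi integral is simply half of this value.

```python
import sympy as sp

z, theta = sp.symbols('z theta')

# Substitute z = e^{i*theta}: cos(theta) = (z + 1/z)/2 and d(theta) = dz/(i z).
f_z = sp.cancel(1 / (2 + (z + 1/z) / 2) * 1 / (sp.I * z))

# Poles of the integrand that lie inside the unit circle.
poles = [p for p in sp.solve(sp.denom(f_z), z) if abs(complex(p)) < 1]

# Residue theorem: integral = 2*pi*i * (sum of residues inside the contour).
I = 2 * sp.pi * sp.I * sum(sp.residue(f_z, z, p) for p in poles)
print(sp.simplify(I))                                              # 2*sqrt(3)*pi/3
print(sp.integrate(1 / (2 + sp.cos(theta)), (theta, 0, 2*sp.pi)))  # same value
```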
|
{"url":"http://mathhelpforum.com/calculus/15758-residue-theorem-integral-real-sinusodial-function.html","timestamp":"2014-04-17T17:13:42Z","content_type":null,"content_length":"82057","record_id":"<urn:uuid:43cd5aeb-3434-4dc4-beb7-301d49bbb8bc>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Derive the relation of Force between any two masses. - Homework Help - eNotes.com
Derive the relation of Force between any two masses.
Let two objects of masses `m_1, m_2` be separated by a distance r. Let F be the force acting between these two objects. Then
force F is directly proportional to the product of the masses, and F is inversely proportional to the square of the distance between them, i.e.
`F prop m_1 xxm_2`
`F prop(1/r^2)`
Thus combining these two laws
`F prop(m_1m_2)/(r^2)`
Introducing the constant of proportionality G,
`F = (Gm_1m_2)/(r^2)`
where G is the universal gravitational constant.
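A minimal numerical check of the final formula, with example masses and separation of my own choosing (SI units assumed):

```python
# F = G * m1 * m2 / r**2
G = 6.674e-11            # gravitational constant, N m^2 kg^-2
m1, m2 = 5.0, 10.0       # masses in kg (illustrative values)
r = 2.0                  # separation in m

F = G * m1 * m2 / r**2
print(F)                 # force in newtons, ~8.3e-10 N
```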
Sorry, it was a slip/software problem.
Mass is in kg and distance is taken in metres;
the resulting force comes out in newtons.
I meant to ask via the method of dimensions.
|
{"url":"http://www.enotes.com/homework-help/derive-relation-force-between-any-two-masses-431705","timestamp":"2014-04-19T05:59:08Z","content_type":null,"content_length":"28877","record_id":"<urn:uuid:fbd7305d-4ff2-4f60-854a-46d626764494>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CS 503
CS 503 Foundations of Computer Science
01/18-03/06 2012 at Cisco
Course Syllabus Homework Assignment 1, Due January 25
Solution for Homework Assignment 1
Homework Assignment 2, Due February 1
Solution for Homework Assignment 2
Homework Assignment 3, Due February 8
Solution for Homework Assignment 3
Homework Assignment 4, Due February 22
Solution for Homework Assignment 4
Homework Assignment 5, Due February 29
Solution for Homework Assignment 5
Practice Midterm Exam
Solutions for the Practice Midterm Exam
Solutions for the Midterm Exam
Practice Final Exam
Solutions for the Practice Final Exam
The Status of the P versus NP Problem
|
{"url":"http://web.cs.wpi.edu/~gsarkozy/503/cs503.html","timestamp":"2014-04-20T05:42:27Z","content_type":null,"content_length":"3349","record_id":"<urn:uuid:241b4817-7871-48f9-8d46-f0d9e6c3e76e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Tutor] prime test problem
Evert Rol evert.rol at gmail.com
Sat Aug 21 17:11:53 CEST 2010
> Hello,
> I know.
> I have read them all I believe but I can't see how I can convert the algebra to a working program.
And if you just search Google for "Python prime number algorithm"? Perhaps it's cheating, so you'll have to try and fully understand the code first before you run it (be sure to read comments if there are any, eg the activatestate recipes; can have tons of useful extra info & links).
Generally, try to start with a program yourself, and then see where you get stuck. You could then post your stuck algorithm to the list and ask for advice; your current question is a bit too general, and thus harder to answer.
> Roelof
> Date: Sat, 21 Aug 2010 19:15:03 +0530
> Subject: Re: [Tutor] prime test problem
> From: nitin.162 at gmail.com
> To: rwobben at hotmail.com
> CC: tutor at python.org
> For this problem you will get lots of solutions on the net.
> e.g. Wilson's theorem, sieve of Eratosthenes, etc.
> --nitin
> On Sat, Aug 21, 2010 at 7:05 PM, Roelof Wobben <rwobben at hotmail.com> wrote:
> Hello,
> I have to make a program which can test if a number is a prime.
> I know a prime is a number which can only be divided by 1 and itself.
> One way I was thinking about is to make a loop which tries whether % has output 0.
> But that doesn't work.
> Can someone give me a hint what the best approach is.
> Roelof
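Following the hints in the thread, a minimal trial-division version of the "loop and check the remainder with %" idea might look like the sketch below (illustrative code, not taken from the mailing list):

```python
def is_prime(n):
    """Return True if n is prime, False otherwise (simple trial division)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:          # divisors need only be tested up to sqrt(n)
        if n % d == 0:
            return False
        d += 2
    return True

print([k for k in range(20) if is_prime(k)])   # [2, 3, 5, 7, 11, 13, 17, 19]
```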
More information about the Tutor mailing list
|
{"url":"https://mail.python.org/pipermail/tutor/2010-August/077974.html","timestamp":"2014-04-20T04:26:03Z","content_type":null,"content_length":"4392","record_id":"<urn:uuid:37da9c93-2060-4e98-9421-d40cf4c8bf2d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Comment on
(Sorry for being rather off-topic, but it's an important topic for me, so... no real excuse, I know ;-)
Programming is harder than rocket science, because it still doesn't have a firm mathematical machinery behind it.
I don't know about engineering, but in physics the mathematical machinery isn't as firm as one would think. Well in research papers it usually is, but not in the typical course.
I've witnessed this several times, at various universities:
teacher: Now we can write $this integral like $that
student: Wait, /can/ we even do this transformation
teacher: Well, the mathematicians know a list of
conditions that determine if it's allowed, but
I don't know them because it's allowed for all
functions that physicists ever use.
(For the interested, those conditions are usually "only a finite number of discontinuities".)
Which is perfectly fine, because I'd never be able to finish my studies if I only did mathematical operations that I have proven myself and that I know are allowed, but it doesn't really raise my
level of confidence in mathematical foundations.
When you talk about mathematics, keep in mind that it's still only humans that do it, and they can make mistakes, and even in mathematics they can have varying opinions.
|
{"url":"http://www.perlmonks.org/index.pl/jacques?parent=710350;node_id=3333","timestamp":"2014-04-19T19:00:10Z","content_type":null,"content_length":"20578","record_id":"<urn:uuid:421fe403-1d95-4013-a61d-54f8092814a3>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
|
sample an array form a distribution
July 19th 2010, 09:59 PM #1
Apr 2010
sample an array form a distribution
I have a somewhat basic question. Suppose I need to sample
$(\theta_1,\theta_2) \sim q(\theta_1,\theta_2)$.
Is this achieved if I first sample
$\theta_1 \sim q(\theta_1)$
and then sample
$\theta_2 \sim q(\theta_2|\theta_1)$, or does that correspond to something else?
That should be OK
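As an illustration of that marginal-then-conditional recipe, here is a sketch with an assumed bivariate normal target (the correlation value is arbitrary): drawing theta_1 from its marginal and theta_2 from the conditional given theta_1 reproduces the joint distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.7                                   # assumed correlation of the target joint
n = 100_000

theta1 = rng.standard_normal(n)             # theta_1 ~ q(theta_1) = N(0, 1)
theta2 = rho * theta1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
                                            # theta_2 | theta_1 ~ N(rho*theta_1, 1 - rho^2)

print(np.corrcoef(theta1, theta2)[0, 1])    # ~0.7, matching the intended joint
```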
July 19th 2010, 10:04 PM #2
Grand Panjandrum
Nov 2005
|
{"url":"http://mathhelpforum.com/advanced-statistics/151416-sample-array-form-distribution.html","timestamp":"2014-04-19T19:59:06Z","content_type":null,"content_length":"34855","record_id":"<urn:uuid:df8fd20c-82b8-4ba5-8af6-113f05b63bab>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Spindle math
As I anticipate another rainy weekend where I may or may not be able to finish the carpentry on the porch, I thought I would explain some of the math involved with the geometry of the spindle
spacing. You might recall that the only thing I have left to build and install are the spindles for the railings over the steps. I’ve decided to go with the angled stop chamfers and I built a
prototype to convince myself that they will work. However, how many spindles do I need and where should they go?
If you think about it, if there are N spindles then there are N-1 gaps between the spindles and then some room left at each end. For example, if there are two spindles then there is the one gap
between them, if there are 3 spindles then we have 2 gaps, and so on. The gaps between the spindles should all be the same size per section of the railing and these gaps should also ideally be the
same size for all sections of railing. As you install them you may have some small variations but you should try to minimize these as much as possible.
Given a width for the railing section, how do you calculate how many spindles you need and where they should go? A very important consideration is the space left at each end. As a rule, it should be
less than or equal to the space between the spindles and it shouldn’t be smaller than half the space between the spindles (what I call the “inter-spindle gap”).
In my situation for the railing on the left side of the steps, the total width of the railing section is 35 3/8 inches. This is the level horizontal distance between the posts. My spindles are 1 1/2
inches wide and my design inter-spindle gap is 3 inches. That is, the gap between the spindles is twice the width of the spindles. Let’s do some algebra.
Let N be the number of spindles that I will use to fit in this gap. The total space taken up by the spindles themselves is 1.5N and the total space taken up by the inter-spindle gaps is 3(N-1).
Remember that there is one fewer gap than number of spindles. So if 35 3/8 (or 35.375 in decimals) is the total width, we have
1.5N + 3(N – 1) ≤ 35.375
which means
1.5N + 3N – 3 ≤ 35.375
which means
4.5N – 3 ≤ 35.375
which means
4.5N ≤ 38.375
which approximately means
N ≤ 8.53
and since we don’t want a fractional part of a spindle
N ≤ 8
This means we will have 8 or fewer spindles and since we need to experiment a bit to optimize how much space we’ll have left on the ends, we’ll consider the cases where we use 7 or 8 spindles. Here’s
part of
a spreadsheet
that shows some calculations regarding the spacing. Incidentally, the spreadsheet was created in the
Workplace Managed Client and was saved in OpenDocument Format.
Moving through the main columns from left to right, using 7 spindles with 3 inch spacing leaves more than 3 inches at each end. Too big. Using 8 spindles with 3 inch spacing leaves less than 1 1/2
inches (half of 3 inches) at each end. Too small.
We need to fudge things a bit here. People will not be able tell the difference between 3 inch spacing and alternatives that are slightly smaller or larger. How much will depend on the person, but I
think we’re safe with trying to add or subtract 1/8 inch. Adding that 1/8 inch in the 7 spindle case now leaves a smaller gap at the end, and it is almost exactly 3 inches. Subtracting an 1/8 inch
and going with 8 spindles leaves gaps at the ends of 1 5/8 inch.
We could go either way here. I went with the last 8 spindle option because I like having the less than full spacing of the narrow spindles next to the much larger posts. You might like the
alternative. If you’re not sure, ask some people what they think. It’s good to give your spouse or significant other a strongly weighted vote, in my opinion.
The final set of calculations we need to do concern the horizontal placement of the spindles. I measured from the upper post (that is, the one on the landing) and using the above numbers got the
following measurements for where to put the left/upper edge of each spindle as I move down the steps.
My son William helped me transfer these measurements to the upper railing this morning. We used a long level and a square to make sure things were all straight. Double check all numbers and
measurements before installing anything.
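The arithmetic above is easy to mechanize. The helper below is a sketch of my own (names and structure are not from the post): it fits as many spindles as the width allows for a given gap and reports the end space and each spindle's left-edge position, reproducing the 1 5/8-inch end gaps of the 2 7/8-inch option. The rule that the end space should fall between half a gap and a full gap still has to be checked by eye, just as in the post.

```python
def spindle_layout(total_width, spindle_width, gap):
    """Return (count, end space, left-edge positions) for the most spindles that fit."""
    # N spindles occupy N*spindle_width + (N-1)*gap, so solve for the largest integer N.
    n = int((total_width + gap) // (spindle_width + gap))
    used = n * spindle_width + (n - 1) * gap
    end_space = (total_width - used) / 2            # equal leftover space at each end
    positions = [end_space + i * (spindle_width + gap) for i in range(n)]
    return n, end_space, positions

# Railing section from the post: 35 3/8" wide, 1 1/2" spindles.
for g in (3.0, 2.875):                              # 3" design gap and the 2 7/8" tweak
    n, end, pos = spindle_layout(35.375, 1.5, g)
    print(g, n, round(end, 4), [round(p, 4) for p in pos])
```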
Next (on the porch project): “Stair spindles – ‘C’ wins – Halfway home”
One Response to Spindle math
1. This is by far one of the most unique blog posts I have ever read. See, things like this have made this site one of my favorites.
PS – Love the spindles above. I think I am going to have to use another door. I would feel guilty walking on that porch.
This entry was posted in Home and tagged carpentry, mathematics, porch. Bookmark the permalink.
|
{"url":"http://www.sutor.com/newsite/blog-open/?p=1086","timestamp":"2014-04-20T08:14:02Z","content_type":null,"content_length":"42605","record_id":"<urn:uuid:5626b3e4-df7a-4010-9547-820c49c5610d>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Benchmarks Online
RSS Matters
A brief reminder about Sample Size
This article originally appeared in the March 2012 issue of Benchmarks Online. Link to the last RSS article here: Model Specification Error…Are you straight, or do you have curves? -- Ed.
By Dr. Jon Starkweather, Research and Statistical Support Consultant
We’ve all heard (or spoken) questions similar to those below.
• How many voters should I poll to get an idea of who will win the election?
• What sample size do I need to determine whether people prefer green M&M’s over red?
• How many undergraduates should I collect data from to determine if my model of retention is meaningful or predictive?
• How many people should I survey to measure satisfaction with my new product?
• How many mice should I assign to each condition of my experiment?
• How many protein samples should I extract from each person in order to create a composite protein estimate of each person?
These are good questions. However, easy answers do not often follow good questions. The above questions all relate to the issue of sample size and much has been said on the subject. In this issue
I'll provide some highlights for your consideration.
Questions of sample size
This paragraph contains information you likely are aware of, but (alas); I'm compelled by my professional conscience to type it. Generally it is suggested that questions of sample size be addressed
prior to proposing a study (e.g. as a student; prior to thesis/dissertation proposal & as a faculty/professional researcher; prior to IRB and grant application). Typically during discussions of study
design or methodology the issue of sample size should be addressed -- because sample size is directly linked to statistical power and external validity. Post hoc power estimates are virtually
useless. Generally, it is recommended that an a-priori power analysis be computed (using a desired level of power, desired effect size, desired error rate, and known/proposed number of parameters,
variables, or conditions); which will produce a sample size estimate which in turn gives the researcher a target sample size which is likely to achieve the specified levels of power and effect size
for a given error rate and design. We (RSS) like to recommend using G*Power 3 (which is a free download) or any one of several R packages designed for this task. In conducting a-priori power
analysis, it is important to remember what statistical power actually is: the ability to detect an effect if one exists (in formula: power = 1 – β). Or, if you prefer, as Cohen (1988) put it: “the
power of a statistical test is the probability that it will yield statistically significant results” (p. 1).
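For readers who prefer Python to G*Power or R, the same kind of a-priori calculation is available in statsmodels. The inputs below are purely illustrative assumptions (an independent-groups t-test, a medium effect size of 0.5, alpha = .05, and desired power of .80), not recommendations:

```python
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))     # about 64 participants per group
```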
The most general, and flippant, guideline for sample sizes often tossed around is "you need to have more cases/participants than you have parameters/variables/questions." The next most stringent
phrase you are likely to hear, often associated with a 'step' from descriptive statistics to inferential statistics, is "you need to have at least 5 to 10 cases/participants for each parameter/
variable/question." Next, often associated with a 'step' from fairly straightforward inferential techniques (t-test, ANOVA, linear [OLS] regression...) to multivariate statistical techniques is "you
need at least 25 (up to 150) cases/participants for each parameter/variable/question." These types of heuristics, although they make nice quick sound-bite answers, are not terribly useful because;
real consideration must be taken with respect to a variety of issues. The first issue to consider is the statistical perspective one is planning on taking with the data, will a Bayesian perspective
be used or a Frequentist perspective. Generally speaking, Bayesian analyses handle small sample sizes better than analogous Frequentist analyses, largely because of the incorporation of a prior. A
Bayesian perspective also allows one to use sequential testing; implementation of a stopping rule (Goodman, 1999a; Goodman, 1999b; Cornfield, 1966). Other considerations include, what types of
hypothesis (-es) one is attempting to test, what type of phenomena is being statistically modeled, the size of the population one is sampling from (as well as its diversity), and (certainly not
least) the type of analysis one expects to conduct. Some analyses inherently have more power than others (e.g., see discriminant function analysis vs. multinomial logistic regression). Furthermore,
one must consider the assumptions of the analysis one is expecting to run. Often data collected does not conform to the assumptions of a proposed analysis and therefore, an alternative analysis must
be chosen – one which will provide analogous statistics for addressing the hypothesis or research question posed; but, the alternative often has less power. Another consideration is this; it is well
accepted that point estimates (e.g., mean, median, model parameters; such as regression coefficients) are fairly stable and fairly accurate even with relatively small sample sizes. The problem
(again, well accepted) is that interval estimates (e.g., confidence intervals) will not be terribly accurate with small samples; often the standard errors will be biased. The only real answer is;
larger samples are better than smaller samples...
Overcoming small sample size
Contrary to much of the above considerations; some modern methods (e.g., optimal scaling, resampling) can be used to overcome some of the pitfalls of a small sample. However, many people are
suspicious of these modern methods and they can be quite controversial (e.g. if a journal editor or reviewer has never heard of optimal scaling, how likely do you think you are to get the study
published in their journal?). These methods are genuinely controversial because they often assume a particular position or belief about something -- for instance, people who use optimal scaling with
survey data have particular beliefs about the characteristics and properties of survey measurement; which others, of equal professional respect, disagree with or hold opposing beliefs.
Lastly, with respect to sample size, using new measures/instruments (ones which have not been validated nor had their psychometric properties established/accepted) should motivate the collection of
large samples. The larger sample can be divided into 2 or more subsamples so one subsample can be used for validation or confirmatory analysis, while the other subsample(s) can be used to fit the
hypothesized models.
Informed decisions
We (RSS) have a rule that the study author(s) or primary investigator(s) should be the one(s) to make decisions regarding what is done and we want those decisions to be as informed as possible by
providing as much (often called too much) information as we can. Therefore, we will not provide ‘easy’ answers to questions of sample size. The amount of data collected for any empirical study should
be based on critical thought, on the part of the study authors, directed toward the considerations mentioned in this article. The best two pieces of advice on the subject of sample size are; start to
think about sample size very early (i.e. long before data collection begins) and collect as much data as you possibly can.
Until next time, don’t play The Lottery with Shirley Jackson…
References and Resources
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2^nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Cornfield, J. (1966). A Bayesian test of some classical hypotheses, with applications to sequential clinical trials. Journal of the American Statistical Association, 61, 577 – 594. Available at
JSTOR: http://www.jstor.org/stable/10.2307/2282772
Erdfelder, E., Faul, F., & Buchner, A. (1996). GPOWER: A general power analysis program. Behavior Research Methods, Instruments & Computers, 28, 1-11. Available at: http://
Goodman, S. (1999a). Toward evidence-based medical statistics. 1: The p value fallacy. Annals of Internal Medicine, 130(12), 995 – 1004. Available at: http://psg-mac43.ucsf.edu/ticr/syllabus/courses/
Goodman, S. (1999b). Toward evidence-based medical statistics. 2: The Bayes factor. Annals of Internal Medicine, 130(12), 1005 – 1013. Available at: http://psg-mac43.ucsf.edu/ticr/syllabus/courses/4/
Herrington, R. (2002). Controlling False Discovery Rate in Multiple Hypothesis Testing. http://www.unt.edu/benchmarks/archives/2002/april02/rss.htm
Herrington, R. (2001). The Calculation of Statistical Power Using the Percentile Bootstrap and Robust Estimation. http://www.unt.edu/benchmarks/archives/2001/september01/rss.htm
Jeffreys, H. (1948). Theory of probability (2^nd ed.). London: Oxford University Press.
Price, P. (2000). The 2000 American Psychological Society Meeting. http://www.unt.edu/benchmarks/archives/2000/august00/rss.htm
|
{"url":"http://it.unt.edu/benchmarks/issues/2013/03/rss-matters","timestamp":"2014-04-19T19:33:45Z","content_type":null,"content_length":"25185","record_id":"<urn:uuid:9e84cb75-425f-4cf4-b866-15235ce6ae75>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Z Evaluation Engine
Posted by
on July 9, 2010 at 6:04 PM PDT
Library for evaluating mathematical expressions and equations
This project provides a robust library for evaluating mathematical expressions, functions, and sets of equations.
There are plenty of other mathematical expression parser and evaluation libraries available. The Z Evaluation Engine is unique in some specific ways:
• it allows for function definitions of multiple types, including exact definition (f(x)=abs(x)), piecewise definition (f(x)= if x<0 then abs(x), if x>0 then x), or numeric definition (f(x)=[-1,1;
• it allows for multiple equations to be defined as a set and to reference each other as arguments (f(x)=x^2, g(x)=sin(f), h(x,y)=g(y)+f)
• it allows a user-defined domain to be evaluated over any set of expressions that you want (let x=[1,2,3,4,5], y=[1,2,3,4,5] evaluate the equation set over all points in that domain and return a
table with columns [x, y, f, g, h], with columns [x, y, g, f(y), h(x,y), h(y,x)], etc)
How to participate:
1) as a user: just by using the library and reporting any bugs
2) as a contributor: emailing patches, bug fixes, documentation, etc. to a project developer
3) as a committer: you may be granted write permissions to the code base (if you have contributed multiple patches for this project and meet the required skills for being a project developer)
Prerequisites to becoming a project developer:
1) having Java programming skills (it is after all a Java project)
2) having taken a pre-calculus course (much of the library includes concepts such as functions and domains which won't make any sense to you otherwise)
3) knowledge of how to write unit tests (all new code must be unit-tested)
Apache License, Version 2.0
Related Topics >>
|
{"url":"https://www.java.net/project/z-evaluation-engine","timestamp":"2014-04-19T02:41:51Z","content_type":null,"content_length":"18711","record_id":"<urn:uuid:417de5f8-8e57-4012-8ad4-26e62350abd6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
|
If you randomly choose a card from a standard deck of 52 cards what is the probability you will choose a heart or an ace?
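The page never states the answer explicitly, so here is a direct enumeration as a sketch; by inclusion-exclusion it is (13 hearts + 4 aces - 1 ace of hearts)/52 = 16/52 = 4/13.

```python
from fractions import Fraction

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = [(rank, suit) for rank in ranks for suit in suits]

favorable = [card for card in deck if card[1] == 'hearts' or card[0] == 'A']
print(Fraction(len(favorable), len(deck)))      # 4/13  (16 of the 52 cards)
```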
In poker, the probability of each type of 5-card hand can be computed by calculating the proportion of hands of that type among all possible hands.
The following chart enumerates the (absolute) frequency of each hand, given all combinations of 5 cards randomly drawn from a full deck of 52 without replacement. Wild cards are not considered.
In this chart:
In journalism, a human interest story is a feature story that discusses a person or people in an emotional way. It presents people and their problems, concerns, or achievements in a way that brings
about interest, sympathy or motivation in the reader or viewer.
Human interest stories may be "the story behind the story" about an event, organization, or otherwise faceless historical happening, such as about the life of an individual soldier during wartime, an
interview with a survivor of a natural disaster, a random act of kindness or profile of someone known for a career achievement.
|
{"url":"http://answerparty.com/question/answer/if-you-randomly-choose-a-card-from-a-standard-deck-of-52-cards-what-is-the-probability-you-will-choose-a-heart-or-an-ace","timestamp":"2014-04-18T08:08:49Z","content_type":null,"content_length":"20341","record_id":"<urn:uuid:6939c416-b7ba-4cd0-b395-f1992eb800af>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
February 15th 2011, 04:18 AM #1
Super Member
Aug 2009
cut set
I'm trying to understand the meaning of a cut set. The book that I read states that a cut set must not only disconnect the graph, but also that no proper subset of it will disconnect the graph.
May I know what it means by "no proper subset will disconnect"?
I'm also trying to find out how to identify what the cut set is for a graph.
That is saying that no smaller subset of the cutset is also a cutset.
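A tiny concrete example may help (a sketch using networkx, which is my own choice of library, not something from the thread): in a 4-cycle, removing two opposite edges disconnects the graph, while no proper subset of those two edges does, so that pair is a genuine cut set.

```python
import networkx as nx
from itertools import combinations

G = nx.cycle_graph(4)                      # 4-cycle: 0-1-2-3-0
cut = {(0, 1), (2, 3)}                     # candidate cut set

H = G.copy(); H.remove_edges_from(cut)
print(nx.is_connected(H))                  # False: the set disconnects the graph

for k in range(len(cut)):                  # check every proper subset
    for sub in combinations(cut, k):
        H = G.copy(); H.remove_edges_from(sub)
        assert nx.is_connected(H)          # none of them disconnects the graph
print("no proper subset disconnects")
```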
February 15th 2011, 04:31 AM #2
|
{"url":"http://mathhelpforum.com/discrete-math/171331-cut-set.html","timestamp":"2014-04-17T09:07:24Z","content_type":null,"content_length":"28290","record_id":"<urn:uuid:3257eb70-c538-4903-b578-1be2ac383b72>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
|
New Chicago, IN Geometry Tutor
Find a New Chicago, IN Geometry Tutor
...In the past 5 years, I've written proprietary guides on ACT strategy for local companies. These guides have been used to improve scores all over the midwest. I've been tutoring test prep for
15 years, and I have a lot of experience helping students get the score they need on the ACT.
24 Subjects: including geometry, calculus, physics, GRE
...My grades in my first two semesters of the IU theory sequence were both A+; I earned A's in subsequent honors theory courses. I'd be happy to tutor through the first year of undergraduate
music theory. I'm willing to tutor groups.
13 Subjects: including geometry, calculus, statistics, algebra 1
...I began playing when I was young and was part of the chess club in high school. I was champion of a boys' dorm tournament my first year of college. I recently ran a table at a local Boy Scout
Jamboree for the Chess Merit Badge, where I had 3 chess boards set up and was playing 3 people at-a-time for most of the day.
59 Subjects: including geometry, reading, Spanish, English
...I also took a course in Greek prose composition. During my Master's studies I took one introductory course and one graduate course in symbolic logic. I also did two independent reading courses
in set theory and logic in the math department.
15 Subjects: including geometry, reading, writing, GRE
...Without having to attend a classroom setting, you and I will develop a personalized plan just for you! I can give you the inside "scoop" as far as test strategy and increasing your odds of
passing. If you'd like, we can use Skype if you live in any city in the U.S.
19 Subjects: including geometry, reading, GRE, writing
Related New Chicago, IN Tutors
New Chicago, IN Accounting Tutors
New Chicago, IN ACT Tutors
New Chicago, IN Algebra Tutors
New Chicago, IN Algebra 2 Tutors
New Chicago, IN Calculus Tutors
New Chicago, IN Geometry Tutors
New Chicago, IN Math Tutors
New Chicago, IN Prealgebra Tutors
New Chicago, IN Precalculus Tutors
New Chicago, IN SAT Tutors
New Chicago, IN SAT Math Tutors
New Chicago, IN Science Tutors
New Chicago, IN Statistics Tutors
New Chicago, IN Trigonometry Tutors
Nearby Cities With geometry Tutor
Beverly Shores geometry Tutors
Boone Grove geometry Tutors
Gary, IN geometry Tutors
Hebron, IN geometry Tutors
Hobart, IN geometry Tutors
Kouts geometry Tutors
La Crosse, IN geometry Tutors
Lake Station geometry Tutors
Leroy, IN geometry Tutors
Lowell, IN geometry Tutors
Ogden Dunes, IN geometry Tutors
Pottawattamie Park, IN geometry Tutors
Wanatah geometry Tutors
Wheeler, IN geometry Tutors
Whiting, IN geometry Tutors
|
{"url":"http://www.purplemath.com/New_Chicago_IN_Geometry_tutors.php","timestamp":"2014-04-17T15:28:44Z","content_type":null,"content_length":"24039","record_id":"<urn:uuid:8dc30f66-0df2-4be9-bc62-923644d7222a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Course Description
MATH 230
Credit Hours
(4-0) 4 Cr. Hrs.
Section Start Dates
Section No Start Date
143085 August 25, 2014
Linear Algebra
Course Description
Topics covered in this course include systems of linear equations, matrices, determinants, Euclidean vector spaces, general vector spaces, inner product spaces, eigenvalues and eigenvectors,
diagonalization, linear transformations and applications.
(A requirement that must be completed before taking this course.)
• MATH 150 or equivalent with grade of 2.0 or better.
Course Competencies
Upon successful completion of the course, the student should be able to:
• Calculate an expression involving matrix operations including addition, subtraction, multiplication, scalar-multiplication, and transposition.
• Calculate the determinant of a given matrix using various techniques including cofactor expansion, row reduction, and shortcuts for small or triangular matrices.
• Calculate the multiplicative inverse of a given square matrix using various techniques including Gauss-Jordan Elimination and the adjoint-determinant method.
• Determine the solution of a system of linear equations using various techniques including Gauss, Gauss-Jordan, matrix-inverse, and Cramer methods.
• Calculate an expression involving vector operations, including addition, subtraction, scalar-multiplication, dot-multiplication, cross-multiplication, magnitudes, and parallel and perpendicular components.
• Figure out equations for a line or plane in three-dimensional Euclidean space, given certain facts about the line or plane.
• Determine whether a given set of objects and operations constitute a vector space or subspace by using the relevant axioms.
• Determine whether a given set of vectors is linearly independent, whether it spans a given vector space, and whether it constitutes a basis for the vector space.
• Construct a basis for the null space of a given system of linear equations.
• Construct a basis for the linear span of a given set of vectors.
• Calculate the coordinates of a given vector relative to a given basis.
• Calculate the change-of-basis matrix for transitions from one given basis to another.
• Translate the coordinates of a given vector from one basis to another by using the change-of-basis matrix.
• Determine whether a given scalar function on a given vector space constitutes an inner product by using the relevant axioms.
• Calculate lengths, distances, and angles between vectors using a specified inner product.
• Construct an orthonormal basis for a given set of vectors in an inner product space by using the Gram-Schmidt method.
• Determine whether a given function between vector spaces constitutes a linear transformation by using the relevant axioms.
• Construct a basis for the kernel or for the range of a given linear transformation.
• Calculate the matrix that represents a given linear transformation relative to a given pair of bases.
• Calculate the values of a given linear transformation (or of its inverse) by using the matrix that represents it.
• Determine the eigenvalues and eigenvectors of a given linear transformation or matrix.
• Determine a diagonalized or orthogonally diagonalized form for a given linear transformation or matrix.
• Calculate powers of a given square matrix by using a diagonalized or orthogonally diagonalized form.
• Apply matrix methods to solve selected types of practical problems involving networks, curve-fitting, directed graphs, Markov chains, linear differential equations, conic sections, or quadric surfaces.
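A small numerical sketch of several of the computations listed above, using NumPy with an arbitrary example matrix (this illustration is not part of the official course description):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

print(np.linalg.det(A))          # determinant (5.0 here)
print(np.linalg.inv(A))          # multiplicative inverse
print(np.linalg.solve(A, b))     # solution of the linear system A x = b

w, V = np.linalg.eig(A)          # eigenvalues and eigenvectors
print(w)
print(np.allclose(V @ np.diag(w) @ np.linalg.inv(V), A))   # diagonalization check: True
```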
|
{"url":"http://schoolcraft.cc.mi.us/academics/course-description/MATH/230","timestamp":"2014-04-20T06:18:07Z","content_type":null,"content_length":"25570","record_id":"<urn:uuid:d94fa475-0d09-4473-8798-a0ab09ff5d7a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - effect of lunar/solar gravity on shape of earth
A uniform inertial force field causes uniform acceleration, so it cannot cause or contribute to any deformation (tides).
The moon's gravitational force is nonzero and pointed toward the moon (even if there is a gradient), yet the earth center stays still in the rotating frame, and the "tide-producing force" on the
antipodal side of the earth even has opposite sign. Hence the centrifugal force is "necessary" to describe what is observed in the rotating frame.
I have no idea what you mean by "residual vectors" and "taking out the earth rotation"... So again:
In which rotating frame would the centrifugal force vector be the same for A & B (as shown in the picture)?
Let r be the radius of the earth, x the distance from the barycenter to the center of the earth, and [itex]\omega[/itex] the orbital frequency. In the rotating frame, the "centrifugal acceleration"
at the center of the earth is [itex]\omega^2 x[/itex] directed to the right, at A is [itex]-\omega^2 (r - x)[/itex], directed toward the left, and at B is [itex]\omega^2 (r + x)[/itex] toward the
right again. Since people like to remove the component of centrifugal force which is due to rotation around the earth's axis and put that into the geoid, take out outward accelerations [itex]-\omega^
2 r[/itex], [itex]+\omega^2 r[/itex] from the accelerations at A and B, respectively. The remaining centrifugal bits are what I called the "residual" vectors. What are they?
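For what it is worth, the residual algebra in that last paragraph can be checked symbolically; the sketch below follows the post's sign convention (positive pointing from the barycenter toward B), and both residuals come out equal to [itex]\omega^2 x[/itex].

```python
import sympy as sp

omega, r, x = sp.symbols('omega r x', positive=True)

acc_A = -omega**2 * (r - x)        # centrifugal acceleration at A (toward the left)
acc_B = omega**2 * (r + x)         # centrifugal acceleration at B (toward the right)

# Remove the outward parts attributed to rotation about the earth's axis (-/+ omega^2 r).
residual_A = sp.simplify(acc_A - (-omega**2 * r))
residual_B = sp.simplify(acc_B - (omega**2 * r))
print(residual_A, residual_B)      # both omega**2*x, i.e. the same at A and B
```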
|
{"url":"http://www.physicsforums.com/showpost.php?p=3502340&postcount=82","timestamp":"2014-04-16T19:12:59Z","content_type":null,"content_length":"9256","record_id":"<urn:uuid:1e9cbebe-5abd-40f9-a001-b36486051b3a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Topic: definitions of exp/log
Replies: 0
definitions of exp/log
Posted: Oct 18, 2001 2:30 PM
Here is another way to define exp(x), appropriate for first year calculus
or early in advanced calculus (prior to developing the derivative).
(1) Start by extending a^x from the rationals to a continuous function on
the reals.
(2) Consider the functions x/a^x for constant a>1. In first year calculus,
graphical evidence suggests that each such function has exactly one
maximum, and that location of the maximum decreases as x increases. Take e
to be the value of a such that the maximum occurs at x = 1.
(3) It is now an easy and entertaining exercise to obtain quite rigorously
the derivative of e^x. Furthermore, the proof of the traditional limit
lim_{n- > infinity}(1+1/n)^n = e is pleasantly quick as well.
This will appear in the November 2001 issue of College Mathematics Journal.
For a more advanced class - i.e., Introduction to Analysis or Advanced
Calculus - the details of (2) can be filled in as follows (these are
reasonable exercises for students)
(2a) For each a>1, x -> x/a^x attains a maximum. The extreme value theorem
will get this, but if that is not yet in hand, it is an easy exercise to
show that for each a>1 here is a cut (L,R) of the rationals such that x/a^x
increases strictly on L and decreases strictly on R . This approach also
shows that the location of the maximum is unique.
(2b) The maximum of x/a^{kx} occurs at x_a/k where x_a is the location of
the maximum of x/a^x.
(2c) There is a unique number e such that x/e^x has a maximum at x =
1. (The uniqueness can be proved using (2b) and the fact that for a given
a>1, x->a^x maps onto the positive reals. The intermediate value theorem
makes this immediate, but it is not needed in light of elementary
properties of a^x over the rationals (x->a^x is strictly increasing and
unbounded above, a^t-a^s <= a^{t+1}(t-s) for s<t, and the extension in (1)
) if one prefers to avoid it.
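A quick numerical illustration of step (2), as a sketch of my own using SciPy (the search bounds and the bracket [1.5, 4] are arbitrary choices): locate the maximizer of x/a^x for a given a, then solve for the a that places that maximizer at x = 1. The root comes out at e.

```python
import numpy as np
from scipy.optimize import minimize_scalar, brentq

def argmax_location(a):
    """Numerically locate the maximum of x / a**x (analytically it is 1/ln a)."""
    res = minimize_scalar(lambda x: -x / a**x, bounds=(1e-6, 10), method='bounded')
    return res.x

# Solve argmax_location(a) = 1 for a; by the definition in (2c) the root is e.
a_star = brentq(lambda a: argmax_location(a) - 1.0, 1.5, 4.0)
print(a_star, np.e)                # both approximately 2.71828
```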
An alternative approach not mentioned already in this thread appears in
(John Kemeny, The exponential function, Amer. Math. Monthly, 64 (1957),
John W. Hagood
Department of Mathematics and Statistics
PO Box 5717
Northern Arizona University
Flagstaff, AZ 86011-5717
Phone: 520-523-6879
Fax: 520-523-5847
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=223079","timestamp":"2014-04-17T19:04:52Z","content_type":null,"content_length":"16153","record_id":"<urn:uuid:b56a7562-ee4b-4103-8141-76414406958d>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Challenging problem comparing categories...
October 11th 2012, 09:36 AM
Challenging problem comparing categories...
Hi all,
I have kind of a tricky problem:confused: ... hope someone can help me!
Suppose a couple of ethologists checked the probability distributions of categories of bear in some areas (let's say 50), for example:
brown black polar grizzly other
area 1 0.11 0.23 0.00 0.49 0.17
area 2 0.51 0.00 0.00 0.39 0.10
area 3 0.06 0.00 0.94 0.00 0.00
... ... ... ... ... ...
area 50 0.30 0.18 0.02 0.19 0.31
The distributions within a category are not necessarily normal (e.g. the polar bear).
Now a remote system tracks a single bear, across a limited number of areas (let's say 4). The system doesn't know which kind of bear it is, so it is unknown to which category it belongs
Is there a way to determine the probability and certainty (confidence) the bear will belong to a category, given the areas the bear is found in?
To clarify the reasoning: consider three adjacent areas. The left area has a high occurrence of brown bears, the right area a high occurrence of black bears and the middle area both black and
brown are equally distributed. Suppose a bear moves around only in the middle and right areas, it would seem the probability that bear is a black bear increases. But how to calculate this?
I was first thinking of using a Wilcoxon Signed test for each category combination, or should I use Fisher's exact test... and some post-hoc test ?
Any suggestions?
Thanks in advance!
October 11th 2012, 10:40 AM
Re: Challenging problem comparing categories...
1-You can normalize each row of this table so that sum of prob values=1 for each row.
2-Confidence intervals are based on probabilities already on the table
3-There are more advanced methods that instead of this table you can have probabilities contour maps on a real 2D map so that movement of bears and finding probabilities can be examined more
October 19th 2012, 03:26 AM
Re: Challenging problem comparing categories...
@MaxJasper...eh... thanks I guess.
1 - obviously, this is already the case
2 - obviously, for bear types in an area. not for the unknown bear across areas
3 - probabilities contour maps are an excellent answer to something I didn't ask
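One simple way to make the black-bear intuition from the original post quantitative (a sketch of mine, not the reply's suggestion): treat each area's category distribution as a likelihood, assume the visited areas act as independent observations with a uniform prior over categories, and multiply and renormalize.

```python
import numpy as np

categories = ['brown', 'black', 'polar', 'grizzly', 'other']
areas = {                                   # per-area distributions from the post
    1: [0.11, 0.23, 0.00, 0.49, 0.17],
    2: [0.51, 0.00, 0.00, 0.39, 0.10],
    3: [0.06, 0.00, 0.94, 0.00, 0.00],
}

def posterior(visited):
    p = np.ones(len(categories))            # uniform prior over categories
    for a in visited:
        p *= np.array(areas[a])             # likelihood of this category in this area
    return p / p.sum()                      # renormalize to a probability distribution

print(dict(zip(categories, posterior([1, 2]).round(3))))
# grizzly dominates, since grizzlies are common in both area 1 and area 2
```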
|
{"url":"http://mathhelpforum.com/statistics/205108-challenging-problem-comparing-categories-print.html","timestamp":"2014-04-20T12:38:53Z","content_type":null,"content_length":"7456","record_id":"<urn:uuid:66bd6504-7d90-4b13-aeaa-4ebf60b0fb34>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mplus Discussion >> Bootstrapping
Jian Wang posted on Thursday, January 29, 2009 - 3:31 pm
Dr. Muthen:
I am using Mplus to find the indirect effect between a binary outcome and a categorical predictor, with two categorical mediators. Therefore I tried to use the bootstrapping approach.
I understand that the estimates in the output are assessed using probit regression. I wonder if I am able to use logistic regression in bootstrapping? I tried to use "estimator is ML", which gave me
an error message.
Thank you for your help.
Linda K. Muthen posted on Thursday, January 29, 2009 - 3:34 pm
Bootstrapping is available for the ML estimator. Please send your Version 5.2 output and license number to support@statmodel.com.
Jian Wang posted on Thursday, January 29, 2009 - 3:42 pm
Great, thanks for your prompt response. I am using the Demo version currently. Is this the reason I could not use the ML estimator?
Linda K. Muthen posted on Thursday, January 29, 2009 - 6:10 pm
No, the demo version is the same as the regular version except for a limit on the number of variables. Please send your output to support@statmodel.com.
Ben Spycher posted on Wednesday, March 04, 2009 - 2:11 am
Dear Linda,
I am doing a simulation study in which I generate binary outcomes alternatively from factor models, latent class models and factor mixture models. I want to fit the generated data with the various
models to see if the true structure is recovered.
To speed up the process it is convenient to use the starting option of the Montecarlo command to read in starting values for the parameters. However the montecarlo command only seems to handle the
situation when data generation and estimation are of the same model class. If I use external montecarlo (option montecarlo in the data command) there does not seem to be an option to read starting
values from a file. Do you have a suggestion?
Linda K. Muthen posted on Wednesday, March 04, 2009 - 7:04 am
Your understanding is correct and I have no suggestion. The STARTING option is for only internal Monte Carlo. However, with external Monte Carlo you must give values in the MODEL command for each
parameter. These are used for coverage and also as starting values. So I'm not sure why you would need the STARTING option.
Ben Spycher posted on Friday, March 13, 2009 - 6:11 am
Thanks. The reason why this would be convenient is that for models with many parameters it is quite cumbersome to fix the starting values using the syntax of the model command. It would be much more
convenient to read in values from an already fitted data set as starting values. As good starting values can greatly reduce convergence time, might this be a useful feature to include in the
next version?
And/Or extend the Monte Carlo command to be able to generate from one model class and estimate from a completely different one?
Kind regards and thanks for your help
Stefanie Köhler posted on Sunday, July 24, 2011 - 4:18 am
Hi, I'm running a mediation analysis which is moderated as well. I've got 2 groups and 3 measurement points. At point 3 I've got 66 people left in one group and just 9 in the other. Now the following warning shows up:
GROUP 2:
WARNING: THE SAMPLE CORRELATION OF PDAUER_3 AND E_2003 IS -1.000
GROUP 2:
WARNING: THE SAMPLE CORRELATION OF PDAUER_3 AND K_VORH03 IS -0.986
And no bootstrap draws are completed anymore (I've requested 5000).
I guess the warning shows up and the bootstrap draws are missing because of the unequal group size?!
do you think I should at that point stop splitting up my sample into groups and just use all of the people (without grouping)?
or even more don't use groups right from the beginning?
Or could I do something else?
Hopefully you'll help me.
Best wishes.
Linda K. Muthen posted on Sunday, July 24, 2011 - 11:58 am
The error messages you show are definitely the result of small sample sizes. I would think this is the cause of your other problems as well.
Back to top
|
{"url":"http://www.statmodel.com/discussion/messages/23/3914.html?1311533926","timestamp":"2014-04-16T14:16:26Z","content_type":null,"content_length":"28433","record_id":"<urn:uuid:3da9c0f9-0b76-416a-be22-d58fec504618>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Geometry and the imagination
You are currently browsing the category archive for the ‘Dynamics’ category.
Mapping class groups (also called modular groups) are of central importance in many fields of geometry. If $S$ is an oriented surface (i.e. a $2$-manifold), the group $\text{Homeo}^+(S)$ of
orientation-preserving self-homeomorphisms of $S$ is a topological group with the compact-open topology. The mapping class group of $S$, denoted $\text{MCG}(S)$ (or $\text{Mod}(S)$ by some people) is
the group of path-components of $\text{Homeo}^+(S)$, i.e. $\pi_0(\text{Homeo}^+(S))$, or equivalently $\text{Homeo}^+(S)/\text{Homeo}_0(S)$ where $\text{Homeo}_0(S)$ is the subgroup of homeomorphisms
isotopic to the identity.
When $S$ is a surface of finite type (i.e. a closed surface minus finitely many points), the group $\text{MCG}(S)$ is finitely presented, and one knows a great deal about the algebra and geometry of
this group. Less well-studied are groups of the form $\text{MCG}(S)$ when $S$ is of infinite type. However, such groups do arise naturally in dynamics.
Example: Let $G$ be a group of (orientation-preserving) homeomorphisms of the plane, and suppose that $G$ has a bounded orbit (i.e. there is some point $p$ for which the orbit $Gp$ is contained in a
compact subset of the plane). The closure of such an orbit $Gp$ is compact and $G$-invariant. Let $K$ be the union of the closure of $Gp$ with the set of bounded open complementary regions. Then $K$
is compact, $G$-invariant, and has connected complement. Define an equivalence relation $\sim$ on the plane whose equivalence classes are the points in the complement of $K$, and the connected
components of $K$. The quotient of the plane by this equivalence relation is again homeomorphic to the plane (by a theorem of R. L. Moore), and the image of $K$ is a totally disconnected set $k$. The
original group $G$ admits a natural homomorphism to the mapping class group of $\mathbb{R}^2 - k$. After passing to a $G$-invariant closed subset of $k$ if necessary, we may assume that $k$ is
minimal (i.e. every orbit is dense). Since $k$ is compact, it is either a finite discrete set, or it is a Cantor set.
The mapping class group of $\mathbb{R}^2 - \text{finite set}$ contains a subgroup of finite index fixing the end of $\mathbb{R}^2$; this subgroup is the quotient of a braid group by its center. There
are many tools that show that certain groups $G$ cannot have a big image in such a mapping class group.
Much less studied is the case that $k$ is a Cantor set. In the remainder of this post, we will abbreviate $\text{MCG}(\mathbb{R}^2 - \text{Cantor set})$ by $\Gamma$. Notice that any homeomorphism of
$\mathbb{R}^2 - \text{Cantor set}$ extends in a unique way to a homeomorphism of $S^2$, fixing the point at infinity, and permuting the points of the Cantor set (this can be seen by thinking of the
“missing points” intrinsically as the space of ends of the surface). Let $\Gamma'$ denote the mapping class group of $S^2 - \text{Cantor set}$. Then there is a natural surjection $\Gamma \to \Gamma'$
whose kernel is $\pi_1(S^2 - \text{Cantor set})$ (this is just the familiar Birman exact sequence).
The following is proved in the first section of my paper “Circular groups, planar groups and the Euler class”. This is the first step to showing that any group $G$ of orientation-preserving
diffeomorphisms of the plane with a bounded orbit is circularly orderable:
Proposition: There is an injective homomorphism $\Gamma \to \text{Homeo}^+(S^1)$.
Sketch of Proof: Choose a complete hyperbolic structure on $S^2 - \text{Cantor set}$. The Birman exact sequence exhibits $\Gamma$ as a group of (equivalence classes) of homeomorphisms of the
universal cover of this hyperbolic surface which commute with the deck group. Each such homeomorphism extends in a unique way to a homeomorphism of the circle at infinity. This extension does not
depend on the choice of a representative in an equivalence class, and one can check that the extension of a nontrivial mapping class is nontrivial at infinity. qed.
This property of the mapping class group $\Gamma$ does not distinguish it from mapping class groups of surfaces of finite type (with punctures); in fact, the argument is barely sensitive to the
topology of the surface at all. By contrast, the next theorem demonstrates a significant difference between mapping class groups of surfaces of finite type, and $\Gamma$. Recall that for a surface
$S$ of finite type, the group $\text{MCG}(S)$ acts simplicially on the complex of curves $\mathcal{C}(S)$, a simplicial complex whose simplices are the sets of isotopy classes of essential simple
closed curves in $S$ that can be realized mutually disjointly. A fundamental theorem of Masur-Minsky says that $\mathcal{C}(S)$ (with its natural simplicial path metric) is $\delta$-hyperbolic
(though it is not locally finite). Bestvina-Fujiwara show that any reasonably big subgroup of $\text{MCG}(S)$ contains lots of elements that act on $\mathcal{C}(S)$ weakly properly, and therefore
such groups admit many nontrivial quasimorphisms. This has many important consequences, and shows that for many interesting classes of groups, every homomorphism to a mapping class group (of finite
type) factors through a finite group. In view of the potential applications to dynamics as above, one would like to be able to construct quasimorphisms on mapping class groups of infinite type.
Unfortunately, this does not seem so easy.
Proposition: The group $\Gamma'$ is uniformly perfect.
Proof: Remember that $\Gamma'$ denotes the mapping class group of $S^2 - \text{Cantor set}$. We denote the Cantor set in the sequel by $C$.
A closed disk $D$ is a dividing disk if its boundary is disjoint from $C$, and separates $C$ into two components (both necessarily Cantor sets). An element $g \in \Gamma$ is said to be local if it
has a representative whose support is contained in a dividing disk. Note that the closure of the complement of a dividing disk is also a dividing disk. Given any dividing disk $D$, there is a
homeomorphism of the sphere $\varphi$ permuting $C$, that takes $D$ off itself, and so that the family of disks $\varphi^n(D)$ are pairwise disjoint, and converge to a limiting point $x \in C$.
Define $h$ to be the infinite product $h = \prod_i \varphi^i g \varphi^{-i}$. Notice that $h$ is a well-defined homeomorphism of the plane permuting $C$. Moreover, there is an identity $[h^{-1},\varphi] = g$, thereby exhibiting $g$ as a commutator. The theorem will therefore be proved if we can exhibit any element of $\Gamma'$ as a bounded product of local elements.
Now, let $g$ be an arbitrary homeomorphism of the sphere permuting $C$. Pick an arbitrary $p \in C$. If $g(p)=p$ then let $h$ be a local homeomorphism taking $p$ to a disjoint point $q$, and define
$g' = hg$. So without loss of generality, we can find $g' = hg$ where $h$ is local (possibly trivial), and $g'(p) = q \ne p$. Let ${}E$ be a sufficiently small dividing disk containing $p$ so that $g'
(E)$ is disjoint from ${}E$, and their union does not contain every point of $C$. Join ${}E$ to $g'(E)$ by a path in the complement of $C$, and let $D$ be a regular neighborhood, which by
construction is a dividing disk. Let $f$ be a local homeomorphism, supported in $D$, that interchanges ${}E$ and $g'(E)$, and so that $f g'$ is the identity on $D$. Then $fg'$ is itself local,
because the complement of the interior of a dividing disk is also a dividing disk, and we have expressed $g$ as a product of at most three local homeomorphisms. This shows that the commutator length
of $g$ is at most $3$, and since $g$ was arbitrary, we are done. qed.
The same argument just barely fails to work with $\Gamma$ in place of $\Gamma'$. One can also define dividing disks and local homeomorphisms in $\Gamma$, with the following important difference. One
can show by the same argument that local homeomorphisms in $\Gamma$ are commutators, and that for an arbitrary element $g \in \Gamma$ there are local elements $h,f$ so that $fhg$ is the identity on a
dividing disk; i.e. this composition is anti-local. However, the complement of the interior of a dividing disk in the plane is not a dividing disk; the difference can be measured by keeping track of
the point at infinity. This is a restatement of the Birman exact sequence; at the level of quasimorphisms, one has the following exact sequence: $Q(\Gamma') \to Q(\Gamma) \to Q(\pi_1(S^2 - C))^{\Gamma'}$.
The so-called “point-pushing” subgroup $\pi_1(S^2 - C)$ can be understood geometrically by tracking the image of a proper ray from $C$ to infinity. We are therefore motivated to consider the
following object:
Definition: The ray graph $R$ is the graph whose vertex set is the set of isotopy classes of proper rays $r$, with interior in the complement of $C$, from a point in $C$ to infinity, and whose edges
are the pairs of such rays that can be realized disjointly.
One can verify that the graph $R$ is connected, and that the group $\Gamma$ acts simplicially on $R$ by automorphisms, and transitively on vertices.
Lemma: Let $g \in \Gamma$ and suppose there is a vertex $v \in R$ such that $v,g(v)$ share an edge. Then $g$ is a product of at most two local homeomorphisms.
Sketch of proof: After adjusting $g$ by an isotopy, assume that $r$ and $g(r)$ are actually disjoint. Let $E,g(E)$ be sufficiently small disjoint disks about the endpoint of $r$ and $g(r)$, and $\alpha$ an arc from ${}E$ to $g(E)$ disjoint from $r$ and $g(r)$, so that the union $r \cup E \cup \alpha \cup g(E) \cup g(r)$ does not separate the part of $C$ outside $E \cup g(E)$. Then this union
can be engulfed in a punctured disk $D'$ containing infinity, whose complement contains some of $C$. There is a local $h$ supported in a neighborhood of $E \cup \alpha \cup g(E)$ such that $hg$ is
supported (after isotopy) in the complement of $D'$ (i.e. it is also local). qed.
It follows that if $g \in \Gamma$ has a bounded orbit in $R$, then the commutator lengths of the powers of $g$ are bounded, and therefore $\text{scl}(g)$ vanishes. If this is true for every $g \in \Gamma$, then Bavard duality implies that $\Gamma$ admits no nontrivial homogeneous quasimorphisms. This motivates the following questions:
Question: Is the diameter of $R$ infinite? (Exercise: show $\text{diam}(R)\ge 3$)
Question: Does any element of $\Gamma$ act on $R$ with positive translation length?
Question: Can one use this action to construct nontrivial quasimorphisms on $\Gamma$?
|
{"url":"http://lamington.wordpress.com/category/dynamics/page/2/","timestamp":"2014-04-18T20:43:45Z","content_type":null,"content_length":"67114","record_id":"<urn:uuid:d8558132-b0d8-43aa-8bb6-59a9276211c3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
|
American Mathematical Monthly -March 2010
March 2010
Yueh-Gin Gung and Dr. Charles Y. Hu Award for 2010 to Kenneth A. Ross for Distinguished Service to Mathematics
By: Barbara Faires
Old and New Results in the Foundations of Elementary Plane Euclidean and Non-Euclidean Geometries
By: Marvin J. Greenberg
This survey highlights some foundational history and some interesting recent discoveries in elementary geometry that deserve to be better known, such as the hierarchies of axiom systems, Aristotle’s
axiom as a "missing link," Bolyai’s discovery— proved and generalized by William Jagy— of the relationship of "circle-squaring" in a hyperbolic plane to Fermat primes, the undecidability,
incompleteness, and consistency of elementary Euclidean geometry, and much more. A main theme is what Hilbert called "the purity of methods of proof," exemplified in his and his early twentieth
century successors’ works on foundations of geometry.
Pólya's Theorem on Random Walks via Pólya's Urn
By: David A. Levin and Yuval Peres
dlevin@uoregon.edu, peres@microsoft.com
We give a proof of Pólya's 1921 theorem on the transience of the simple random walk on Z^3 using the Pólya urn process (Eggenberger and Pólya, 1923; Pólya, 1931). We give a self-contained exposition
of the method of flows, which provides a necessary and sufficient condition for transience of a simple random walk on an infinite graph. The key ingredient to our proof of transience of Z^3 is the
construction of a flow on Z^3 using the Pólya urn process. While the transience result is classical and can be proved in many ways, it is particularly satisfying to derive it from Pólya's urn, a
connection which surely was not realized by Pólya himself.
A Congruence Problem for Polyhedra
By: A. Borisov, M. Dickinson, and S. Hastings
borisov@pitt.edu, dickinsm@gmail.com, sph@math.pitt.edu
It is well known that to determine a triangle up to congruence requires 3 measurements: three sides, two sides and an angle, or one side and two angles. We consider various generalizations of this
fact to two and three dimensions. In particular we consider the following question: given a convex polyhedron P, how many measurements are required to determine P up to congruence? We show that, in
most cases, the number of measurements required to determine the polyhedron locally is equal to the number of edges of the polyhedron. However, for some polyhedra fewer measurements suffice; in the
case of the cube we show that nine carefully chosen measurements are enough. We also prove a number of analogous results for planar polygons. In particular we describe a variety of quadrilaterals,
including all rhombi and all rectangles, that can be determined up to congruence with only four measurements, and we prove the existence of n-gons requiring only n measurements. Finally, we show that
one cannot do better: for any ordered set of n distinct points in the plane one needs at least n measurements to determine this set up to congruence.
Euclid Meets Bézout: Intersecting Algebraic Plane Curves with the Euclidean Algorithm
By: Jan Hilmar and Chris Smyth
trafficjan82@gmail.com, c.smyth@ed.ac.uk
Finding the intersection point of two lines in the plane is easy. But doing the same for a pair of algebraic plane curves is more difficult, not least because each point on both curves may be a
multiple intersection point. However, we show that, with the help of the Euclidean algorithm for polynomials, this general problem can be reduced to the case of intersecting lines, giving an
algorithm for finding these intersection points, with multiplicities. It also yields a simple proof of Bézout's theorem, giving the total number of such points.
A Realization of Measurable Sets as Limit Points
By: Jun Tanaka and Peter F. McLoughlin
juntanaka@math.ucr.edu, pmcloughlin@aol.com
Starting with a sigma-finite measure on an algebra, we define a pseudometric and show how measurable sets from the Caratheodory Extension Theorem can be thought of as limit points of Cauchy sequences
in the algebra.
A Parity Theorem for Drawings of Complete and Complete Bipartite Graphs
By: Dan McQuillan and R. Bruce Richter
dmcquill@norwich.edu, brichter@math.uwaterloo.ca
Forty years ago, Kleitman considered the numbers of crossings in good planar drawings of the complete bipartite graph K_{m,n}. Among other things, he proved that, for m and n both odd, the parities of
these numbers of crossings are all the same. His proof was sufficiently controversial that he provided another proof a few years later. In this work, we provide a complete, simple proof based on
counting, elementary graph theory, and the Jordan Curve Theorem.
Three Proofs of the Inequality e < (1+1/n)^{n+0.5}
By: Sanjay K. Khattri
The inequality e>(1+1/n)^n is well known. In this work, we give three proofs of the inequality e<(1+1/n)^{n+0.5}. For deriving the inequality, we use the Taylor series expansion and the Hermite-Hadamard
inequality. In the third proof, we define a strictly increasing function which is bounded from above by 0.5.
Alfred Tarski: Life and Logic
By: Anita Burdman Feferman and Solomon Feferman
Reviewed by: Anil Nerode
|
{"url":"http://www.maa.org/publications/periodicals/american-mathematical-monthly/american-mathematical-monthly-march-2010?device=mobile","timestamp":"2014-04-19T14:58:44Z","content_type":null,"content_length":"26546","record_id":"<urn:uuid:6624374d-a8b2-4bec-b770-a7e3085f1d7d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vallejo Statistics Tutor
Find a Vallejo Statistics Tutor
...I can help your student ace the following standardized math tests: SAT, ACT, GED, SSAT, PSAT, ASVAB, TEAS, and more. I am an expert on math standardized testing, as stated in my reviews from
previous students. I have worked on thousands of these types of problems and can show your student how to do every single one, which will dramatically increase their test scores!
59 Subjects: including statistics, chemistry, reading, physics
...I can help any student improve their writing skills by teaching them to how organize their thoughts in a coherent paper or presentation in any subject! Excellent study skills are essential to
academic success! As a teaching assistant (applied economics and statistics) and a personal tutor for g...
16 Subjects: including statistics, reading, English, algebra 1
...I have extensive knowledge of pretty much all of math through the end of college, and of statistics well beyond that. I'm good at zeroing in on precisely what's giving you trouble, and will
break each problem into pieces you can understand and learn. While a grad student at UC Berkeley, I recei...
14 Subjects: including statistics, geometry, ASVAB, algebra 1
...He is a very patient and effective tutor, and builds confidence in his student. Algebra 2 introduces independent and dependent variables and how their solution can be determined by for linear
relationships for two or three variables. Algebra 2 also gives an overview of more complex mathematica...
41 Subjects: including statistics, calculus, geometry, algebra 1
...I teach all subjects with the same fundamental process. I first evaluate the student's capability. Thereafter, I present problems that are slightly more difficult than their current level of
understanding to help them elevate their potential gradually.
37 Subjects: including statistics, chemistry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/vallejo_ca_statistics_tutors.php","timestamp":"2014-04-19T11:59:11Z","content_type":null,"content_length":"23998","record_id":"<urn:uuid:39760e57-4043-4ace-9f2d-53781bfbb2c0>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Designing an EMC-compliant UHF oscillator
This straightforward design technique for two-port SAW oscillators minimizes stability problems and time-consuming experimentation.
Before jumping right in and discussing oscillator design it would be prudent to review the fundamentals of two-port surface acoustic wave (SAW) devices.
The two ports being referred to are the input port and an output port. A typical two-port SAW is a three-pin device. There is a pin for the input signal and a pin for the output signal. The third pin
serves as the common node for the input and output ports and is tied to the case of the device. The physics of SAW devices is based on the phenomenon of piezoelectricity. Piezoelectric materials
exhibit a coupling between acoustic and electrical properties. In other words, an electric field causes a mechanical strain and vice versa. In the same manner, applying an AC electric signal to a
piezoelectric material causes acoustic waves or vibrations. Various types of crystal exhibit piezoelectricity; however, quartz is typically used because of its stability and ease of manufacturing.
Quartz crystals, commonly used through the VHF frequency region of the spectrum, are based on thin disks of quartz with electrodes placed at the top and bottom. This geometry sets up an acoustic wave
in the disk, with the resonant frequency being inversely proportional to the thickness of the disk. Such crystal devices become impractical at high frequencies because the disks required become too thin.
SAW devices are similar to standard quartz crystals; however, they rely on surface acoustic waves. SAWs are similar to ocean waves in that the wave energy travels along the surface of the material.
For two-port SAW devices, a pair of interdigitated (IDT) metal fingers are placed on the quartz surface and used to excite a surface acoustic wave. The resonant frequency of a SAW device is inversely
proportional to the spacing of the interdigitated fingers. Since the electrode fingers can be deposited using state of the art IC deposition technology, small spacings and high frequencies (into the
GHz range) can be achieved. SAW resonators also have high quality factors (Q's). The unloaded Q of two-port SAWs typically falls in the range of 5,000 to 20,000.
A two-port SAW resonator can be modeled with an RLC circuit, as shown in Figure 2. The resistor, R_m, represents the energy loss in the resonator. C_o represents the inter-electrode capacitance. C_m and L_m simulate the resonant characteristics of the device. The transformer is an ideal one-to-one transformer, with the secondary winding
inverted so as to model a constant 180 degrees of phase shift. It is important to know that this equivalent circuit is only valid in the region around resonance. An actual SAW resonator has a more
complicated response that includes sidelobes and harmonics. Figures 3 and 4 compare the transmission characteristics of the model to an actual RP1239 device used as the physical reference for this
technique. Figure 3 shows that, in the neighborhood of resonance (±0.2 MHz), the lumped element model is accurate. Figure 4 shows that, for a broader frequency span (±3.0 MHz), the lumped element
model is inadequate.
Oscillator fundamentals There exist two common methods for oscillator analysis and design, the feedback/loop method and the negative resistance method [2][3]. Although either method can
be used to analyze any circuit, oscillators using two-port devices such as SAW resonators and SAW delay lines are most amenable to the loop method. Following the loop method, the two-port SAW is
placed as an element in the feedback loop of an amplifier, as shown in Figure 5. The figure shows the oscillator in open loop configuration; i.e. the feedback loop is cut to provide input and output ports.
When the loop is closed, the circuit will oscillate if the following two conditions are met:
1) The first condition for oscillation is that the net gain through the loop must be one or higher. In decibels, this translates to a gain of at least 0 dB.
2) The second condition is that the phase shift in the loop must total 0°.
When these two conditions are met, positive feedback will occur at the resonant frequency, resulting in oscillation. The resonant frequency will be in the neighborhood of the resonant frequency of
the SAW device. A popular misconception about oscillators is that the oscillation frequency is determined by the resonant peak. Actually, the exact frequency of resonance is determined by the point
of 0° total phase shift. Figure 6 illustrates this concept. This figure shows the transfer function of a hypothetical open loop 315 MHz oscillator circuit with the topology of Figure 5. While the
peak of the gain occurs at 315 MHz, the zero phase location occurs at about 314.98 MHz. The gain at 314.98 MHz is approximately 5.75 dB. Both criteria for oscillation are met at this frequency.
Therefore, this circuit will oscillate at 314.98 MHz.
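For readers who want to automate this check, the two criteria can be applied numerically to a swept open-loop measurement. The short R sketch below uses made-up gain and phase vectors standing in for network-analyzer data; the numbers are illustrative only and are not taken from the article.
# freq (Hz), phase_deg and gain_db are placeholders for a measured open-loop sweep
freq      <- seq(314.90e6, 315.10e6, by = 2e3)
phase_deg <- 40 - (freq - 314.90e6) / 2.5e3        # fake, monotonically falling phase
gain_db   <- 6 - ((freq - 315.00e6) / 0.05e6)^2    # fake gain peaking at 315 MHz
# Oscillation frequency: where the loop phase crosses 0 degrees
i  <- which(diff(sign(phase_deg)) != 0)[1]          # index just before the sign change
f0 <- approx(phase_deg[i:(i + 1)], freq[i:(i + 1)], xout = 0)$y
# First criterion: the loop gain at that frequency must be at least 0 dB
g0 <- approx(freq, gain_db, xout = f0)$y
f0 / 1e6    # predicted oscillation frequency in MHz
g0          # loop gain there in dB; it must be >= 0 dB for oscillation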
The zero phase shift frequency defines the resonant frequency; however, any real-world oscillator circuit will have noise, or variation, about the resonant frequency. The variance of the noise about
the center frequency is related to the quality factor, or Q, of the oscillator, with higher Q's producing smaller frequency noise. Noise in the loop, summed with the energy from power-on transients,
provides the energy to start oscillation when the circuit is powered. Since a higher Q results in smaller noise, a circuit with a high Q will take longer to start up than a similar circuit with a
lower Q. When there exists more than one frequency where the oscillation conditions are met, oscillation becomes difficult to predict. The circuit may oscillate at one of the frequencies or it may
hop between several frequencies. Circuits with more than one such frequency must be avoided.
The power supply voltage, the net gain through the loop, and the compression characteristics of the amplifier together determine the amplitude of oscillation. At the onset of oscillation, the signal
is at a low voltage; with the amplifier operating in the linear, or constant gain, region. Positive feedback causes the amplitude of oscillation to increase until the amplifier output starts to
saturate. At saturation, the amplifier gain decreases and tends to zero as the power supply voltage is approached. The quiescent point of the amplitude of oscillation is the point in the compression
region where the amplifier gain equals the losses through the feedback loop. Consequently, unless the linear gain of the amplifier equals the losses through the loop, the amplifier will be operating
in the non-linear region for portions of the oscillation cycle. This non-linearity of gain distorts the waveform, producing harmonics of the fundamental frequency. Thus a side effect of a high gain
margin is a signal with high harmonic energy content. The harmonics can be reduced by reducing the loop gain or by filtering the oscillator output.
Design method Typically, oscillator design is as much an art as it is a science. However, by designing the oscillator in stages using empirical and analytical methods where appropriate, much of the
trial and error typically encountered can be avoided. In our design process, a combination of physical circuit measurements and simulation techniques was used, utilizing simulation for filter
synthesis and lab measurements for determining loop gain/phase.
The basic topology of the SAW oscillator design is shown in Figure 7. It is a feedback network with the loop consisting of an attenuation network, an RF amplifier, a SAW device, a frequency selective
filter, and a phase shift filter. The signal is output through a coupling network. Of the five blocks in the feedback loop, only the RF amp and SAW device are absolutely required. The frequency
selective filter is only necessary if there are spurious resonances; i.e. unintended frequencies where the oscillation criteria are met. The attenuation network is only needed if there is too much
loop gain and a more linear output response is needed. The phase shift filter is most likely needed, since it is used to set the location of resonance. In addition, a two-port SAW provides only 180
degrees of phase shift. So there is a 50/50 chance that the circuit will not oscillate without the proper phase shifting.
RFIC amplifier selection At the core of any oscillator is an amplifier to provide gain. Two common choices for amplification are RF transistors and RF integrated circuits (RFICs). An RFIC was chosen
because of the relative design simplicity. Other attractive features common to RFIC amplifiers are a broad frequency bandwidth and a nominal 50 Ω input and output impedance. Furthermore, with the
present state of IC packaging, an RFIC will typically have a layout footprint comparable in size to that of a transistor.
Gain is the most important criterion for selecting the amplifier. The amplifier must have enough gain at the desired oscillation frequency, and this gain must be large enough to compensate for any
losses that occur in the feedback loop. To ensure reliable and rapid oscillator startup, a gain margin above 0 dB is included. A typical value used for gain margin is 6 dB.
Maximum input power is another criterion important for oscillator design, especially when using amplifiers with high gain. Some RFIC amplifiers can only tolerate low input power without damage
occurring to the amplifier. For oscillator design, where the output is fed through a feedback network and then into the input, the following condition can be followed to avoid device damage: P_MAX,input ≥ P_MAX,output - P_loop, where P_MAX,input is the maximum input power, P_MAX,output is the maximum output power, and P_loop is the loss through the feedback network, each expressed in dB.
Gain/phase vs. frequency characteristics The next step in the design process is to build an open loop circuit with the amplifier in series with a SAW device so that the gain versus frequency behavior
can be determined. To avoid oscillation at spurious frequencies, a low-pass or band-pass filter may need to be placed in line with the SAW device. The gain and phase of the circuit as a function of
frequency is measured using a network analyzer to determine the location of any spurious resonant frequencies. From this, the requirements of the frequency selective filter can be determined.
Phase compensation After a frequency selective filter is designed, the entire open loop circuit shown in Figure 8 is implemented. The gain and phase behaviors, as a function of frequency, are
measured again. It is important to measure the phase of the loop with the amplifier in saturation because an amplifier's phase curve is different in saturation from that of linear operation.
S-parameters provided by the manufacturer should not be relied upon because this data is measured in the linear regime. The phase can be measured using a network analyzer, or determined via RF
simulation. When conducting measurements, the phase shift of the probes or connectors must be calibrated out to achieve accurate results.
For the oscillator to function as intended, the loop phase must be adjusted so that it equals 0° at the desired resonant frequency. For two-port resonator SAW devices, the phase changes by about 180° near resonance. Because of the phase shifts caused by device parasitics, transmission lines between components, the amplifier, and any other components in the loop, a point of 0° phase may not occur in the neighborhood of resonance. The resulting circuit may not oscillate reliably unless the loop phase is properly adjusted. Even if the circuit does oscillate, it may not be at the exact desired frequency.
Once the optimal frequency is determined, the phase, φ, through the loop at this frequency is measured with a network analyzer or through simulation. The desired phase for the phase shifting filter is simply θ = -φ, so that the total phase sums to 0° at the optimal frequency.
An alternate method can be employed to determine the phase shift. The loop can be closed with a variable-length transmission line. The length of the transmission line is varied until the circuit
oscillates at the desired frequency. The equivalent phase shift is determined from this optimal length using the relation θ = 360° × (L / λ_eff), where L is the length of the transmission line and λ_eff is the effective wavelength of resonance on the given transmission line. Two parallel 50 Ω terminated microstrip lines can be used
to create a variable length transmission line. A copper short can be placed at any point across these parallel transmission lines to give arbitrary lengths.
The next step is to design a filter to produce the desired phase shift. The common design practice for designing such a filter is to use an empirical trial and error tuning method. However, a purely
analytical method can be applied. Utilizing Butterworth filter coefficients, a Matlab program was created that calculates the capacitor and inductor values based on the input phase shift, operating
frequency, and the desired filter topology, pi or tee. The calculated, normalized values are then converted to a characteristic impedance of Z = 50 Ω, using the formulas given in [1].
This method of phase shifting is based on the fact that Butterworth filters possess a linear phase versus frequency relationship in the pass band. The slope of this linear relationship is negative
for a low-pass filter and positive for a high-pass filter:
where n is the number of poles, f_c is the corner frequency, and θ is the phase, given in degrees. Using the above relations, a filter of any desired phase shift at a given frequency can be
designed. The first step of the algorithm is to calculate the number of poles required by determining the next highest integer of the function:
The next step is to determine whether a high-pass or low-pass filter will be used, based on the sign of the angle θ, and to calculate the filter corner frequency f_c:
where f is the desired frequency of oscillation. The last step is to use Butterworth filter tables to determine the components (inductor and capacitors) of the filter. The resulting filter will
produce a phase shift of θ at the desired frequency f.
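As an independent sanity check of a candidate design (this is not the authors' Matlab routine), the exact phase of an n-pole analog Butterworth low-pass at the oscillation frequency can be computed directly from its pole positions; the n and f_c values below are examples only.
# Phase of an n-pole analog Butterworth low-pass at frequency f (a verification sketch)
butter_lp_phase <- function(n, fc, f) {
  wc    <- 2 * pi * fc                                      # corner frequency in rad/s
  k     <- 1:n
  poles <- wc * exp(1i * pi * (2 * k + n - 1) / (2 * n))    # left-half-plane poles
  w     <- 2 * pi * f
  # H(jw) = prod(-poles) / prod(jw - poles); prod(-poles) is real and positive,
  # so the phase comes entirely from the denominator terms
  phase_rad <- -sum(Arg(1i * w - poles))
  phase_rad * 180 / pi                                      # degrees, negative for a low-pass
}
# e.g. a 2-pole low-pass with an example corner frequency of 500 MHz, evaluated at 315 MHz
butter_lp_phase(n = 2, fc = 500e6, f = 315e6)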
Complete oscillator circuit To complete the oscillator circuit, the phase shift filter is added in series with the feedback loop. When the open loop response is measured, the phase should be 0° at
the desired oscillation frequency. When the loop is closed, the circuit will oscillate at the desired frequency. The output of the oscillator is coupled to the loop using an RF power divider,
directional coupler, or using a discrete capacitor or inductor. Though capacitors are commonly used to couple AC signals, in this case an inductor is often preferable because it will limit the power
of the harmonics. A capacitor, on the other hand, will enhance the harmonic power over the fundamental power since the impedance of a capacitor is inversely proportional to frequency. Another
consideration for output coupling is the placement of the coupling network. By placing the coupling network after the SAW and filters, much of the harmonic content from the saturated amplifier will
be suppressed.
Impedance matching Referring to the circuit model of Figure 2, it is readily seen that a SAW will not have a 50 Ω impedance at its ports. When connecting a SAW to a 50 Ω transmission line, the consequent impedance mismatch causes insertion loss. Using the model of a 315 MHz SAW, we designed two matching networks and evaluated the results. The first network, shown in Figure 9a, consists of a shunt inductor and resistor at each port. The inductor, L_1, acts to create a parallel resonance with the SAW capacitance, C_o, effectively canceling out the effects of C_o at 315 MHz. Furthermore, the SAW inductance and capacitance form a series-resonant circuit. The remaining impedance is the series resistance, R_m. The matching R_1 resistors form a pi network with R_m, and provide a 50 Ω input at each port. The matched circuit eliminates reflections at the expense of power lost in the added resistors. This circuit actually has about 5.5 dB more loss when compared to the unmatched SAW. The one benefit of the circuit is that it creates a slightly higher Q (the in-circuit Q was about 6% higher than the
unmatched SAW).
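For a rough sense of scale (the C_o value here is assumed for illustration, not a figure from the article), the shunt inductance L_1 that parallel-resonates the static capacitance at 315 MHz follows from L_1 = 1 / ((2πf)² · C_o):
# Assumed static capacitance of a few picofarads, purely illustrative
f_o <- 315e6        # operating frequency in Hz
C_o <- 3e-12        # assumed SAW inter-electrode capacitance in F
L_1 <- 1 / ((2 * pi * f_o)^2 * C_o)
L_1 * 1e9           # shunt inductance in nH that resonates C_o at 315 MHz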
The second matching network is shown in Figure 9b. It consists of an LC network placed at each port, which provides the 50 Ω matching. This circuit reduces the loss by about 3 dB, but reduces the
in-circuit Q by about 50%. Because of the dramatic decrease in Q, this matching circuit is not desirable for an oscillator application.
In summary, because impedance matching is primarily a technique to reduce insertion loss, the application of impedance matching to SAW oscillators either provides little benefit or actually
decreases the oscillator performance.
Results Using the methods described in this paper, two different oscillators were designed. The oscillators exhibited stable frequency response (±50 ppb typical noise over a five-minute span).
Measurements were performed using an HP4396B Network/Spectrum Analyzer. The complete oscillator circuits for a 915 MHz oscillator and a 315 MHz oscillator are shown in Figures 10 and 11. For both oscillators,
the NEC UPC2713 amplifier was chosen because it has considerable gain in the UHF band. RF Monolithics two-port, 180° SAW devices were selected. All components were surface mount devices, except for
the SAWs.
Circuit board layout considerations The first consideration when performing the board layout is to determine the characteristic impedance of the printed circuit board (PCB) traces. To reduce
reflections, all traces carrying RF signals were designed to a characteristic impedance of 50 Ω. We used 30 mil (0.76 mm) thick PCBs with FR4 dielectric. The PCBs were a double layer with signal traces on top and a solid ground plane on the bottom. Through theoretical analysis and lab measurements, 50 Ω characteristic impedance was determined to correspond to 55 mil (1.40 mm) width traces.
Therefore, 55 mil traces were used for all of the RF interconnections. Vias were placed wherever possible for sufficient grounding. The technique of using multiple vias helps reduce effects of
interference and noise by providing low-impedance connections to the ground plane. De-coupling capacitors were added to the power supply voltage to prevent RF signals from coupling to the supply.
Since the amplifier acts as a variable load at the frequency of oscillation, values of de-coupling capacitance were chosen so that the frequency of interest was shunted to ground on the supplies.
These capacitors as well as the coupling capacitors on the amplifier input/output were chosen such that they had an impedance of about 1 Ω at the oscillation frequency.
One of the most important things to remember in RF layout is to minimize the diameter of the loops that are formed by each signal trace and its return path. It is important to realize that currents
travel in a much different manner at RF frequencies. RF signals follow the path of least impedance, implying that inductance should be limited in all ground paths. The path of least inductance for RF
return current is to travel directly below the signal trace, forming the smallest loop diameter for the total current path. At RF frequencies, all the return current on the ground plane will be
concentrated directly below the trace and follow underneath the trace wherever it proceeds. Forcing the return current to flow otherwise creates inductance and stray fields. Keeping the overall
current loops small in diameter reduces stray fields, limiting interference and noise susceptibility. Figure 12 shows the layout for the 915 MHz oscillator whose schematic is shown in Figure 10.
It is evident from Figure 12 that the signal paths were kept short and grounding vias were added wherever possible. The SAW resonator (the circle near the top of the figure) is soldered on the
backside of the board while all of the electronics are placed on the topside. The device labeled "CIJ" is the RFIC amplifier, with pin 1 at the lower left and pin 6 at the lower right. Notice that
the input (pin 1) and output (pin 4) signals to the amplifier are closely accompanied by ground vias so that the return current for the signal can leave the ground pins (pins 2, 3, and 5) of the IC
and immediately travel to the ground underneath the signal trace. The same practice was utilized for the inputs and outputs of the SAW resonator.
Conclusion A methodology for designing two-port SAW oscillators was described, including a general overview of oscillators. Using this methodology, two oscillators were designed, at frequencies of
315 MHz and 915 MHz. By utilizing a combination of open loop measurement techniques and analytical filter design, most of the trial and error process that typically accompanies oscillator design was
avoided. The resulting oscillators were robust and stable.
References: [1] A. B. Williams, F. J. Taylor, Electronic Filter Design Handbook, third edition, McGraw Hill, 1995
[2] I.M. Gottlieb, Practical Oscillator Handbook, Butterworth-Heinemann, 1997.
[3] R. W. Rhea, Oscillator Design and Computer Simulation, second edition, Noble, 1995.
[4] A. R. Northam, SAW Resonator Oscillator Design Using Linear RF Simulation, RFM Product Data Book, RF Monolithics, 1997.
[5] R. Schmitt, J. Allen, R. Wright, Rapid Design of SAW Oscillator Electronics for Sensor Applications, submitted July 2000, Sensors and Actuators B.
|
{"url":"http://mobiledevdesign.com/news/designing-emc-compliant-uhf-oscillator","timestamp":"2014-04-20T20:55:19Z","content_type":null,"content_length":"79664","record_id":"<urn:uuid:64934351-5e39-4310-ad46-16fd25fad75f>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Benchmarks Online
RSS Matters
Multinomial Logistic Regression
Link to the last RSS article here: An alternative modeling strategy: Partial Least Squares -- Ed.
Multinomial logistic regression is used to predict categorical placement in or the probability of category membership on a dependent variable based on multiple independent variables. The independent
variables can be either dichotomous (i.e., binary) or continuous (i.e., interval or ratio in scale). Multinomial logistic regression is a simple extension of binary logistic regression that allows
for more than two categories of the dependent or outcome variable. Like binary logistic regression, multinomial logistic regression uses maximum likelihood estimation to evaluate the probability of
categorical membership.
Multinomial logistic regression does necessitate careful consideration of the sample size and examination for outlying cases. Like other data analysis procedures, initial data analysis should be
thorough and include careful univariate, bivariate, and multivariate assessment. Specifically, multicollinearity should be evaluated with simple correlations among the independent variables. Also,
multivariate diagnostics (i.e. standard multiple regression) can be used to assess for multivariate outliers and for the exclusion of outliers or influential cases. Sample size guidelines for
multinomial logistic regression indicate a minimum of 10 cases per independent variable (Schwab, 2002).
Multinomial logistic regression is often considered an attractive analysis because; it does not assume normality, linearity, or homoscedasticity. A more powerful alternative to multinomial logistic
regression is discriminant function analysis which requires these assumptions are met. Indeed, multinomial logistic regression is used more frequently than discriminant function analysis because the
analysis does not have such assumptions. Multinomial logistic regression does have assumptions, such as the assumption of independence among the dependent variable choices. This assumption states
that the choice of or membership in one category is not related to the choice or membership of another category (i.e., the dependent variable). The assumption of independence can be tested with the
Hausman-McFadden test. Furthermore, multinomial logistic regression also assumes non-perfect separation. If the groups of the outcome variable are perfectly separated by the predictor(s), then
unrealistic coefficients will be estimated and effect sizes will be greatly exaggerated.
There are different parameter estimation techniques based on the inferential goals of multinomial logistic regression analysis. One might think of these as ways of applying multinomial logistic
regression when strata or clusters are apparent in the data.
Unconditional logistic regression (Breslow & Day, 1980) refers to the modeling of strata with the use of dummy variables (to express the strata) in a traditional logistic model. Here, one model is
applied to all the cases and the stata are included in the model in the form of separate dummy variables, each reflecting the membership of cases to a particular stata.
Conditional logistic regression (Breslow & Day, 1980; Vittinghoff, Shiboski, Glidden, & McCulloch, 2005) refers to applying the logistic model to each of the stata individually. The coefficients of
the predictors (of the logistic model) are conditionally modeled based on the membership of cases to a particular stata.
Marginal logistic modeling (Vittinghoff, Shiboski, Glidden, & McCulloch, 2005) refers to an aggregation of the stata so that the coefficients reflect the population values averaged across the stata.
As a rudimentary example, consider averaging each of the conditional logistic coefficients, from the previous paragraph, to arrive at set marginal coefficients for all members of the population –
regardless of stata membership.
Variable selection or model specification methods for multinomial logistic regression are similar to those used with standard multiple regression; for example, sequential or nested logistic
regression analysis. These methods are used when one dependent variable is used as criteria for placement or choice on subsequent dependent variables (i.e., a decision or flow-chart). For example,
many studies indicate the decision to use drugs follows a sequential pattern, with alcohol at an initial stage followed by the use of marijuana, cocaine, and other illicit drugs.
For the following example a fictitious data set will be used. The data includes a single categorical dependent variable with three categories. The data also includes three continuous predictors. The
data contained enough cases (N = 600) to satisfy the cases to variables assumption mentioned earlier. First, import the data using the ‘foreign’ package and get a summary.
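Something along the following lines will do it (the file name and the variable names y, x1, x2, and x3 used throughout these snippets are placeholders, not values taken from the article):
library(foreign)
mydata <- read.spss("mlr_example.sav", to.data.frame = TRUE)   # or read.dta() for a Stata file
summary(mydata)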
Next, we need to identify the outcome variable as a factor (i.e. categorical).
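For example:
mydata$y <- factor(mydata$y)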
Next, we need to load the ‘mlogit’ package (Croissant, 2011), which contains the functions for conducting the multinomial logistic regression. Note, the ‘mlogit’ package requires six other packages.
Next, we need to modify the data so that the multinomial logistic regression function can process it. To do this, we need to expand the outcome variable (y) much like we would for dummy coding a
categorical variable for inclusion in standard multiple regression.
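A sketch of that reshaping step with the mlogit.data function (argument names follow recent versions of the package and may differ slightly in older ones):
mldata <- mlogit.data(mydata, choice = "y", shape = "wide")
head(mldata)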
Now we can proceed with the multinomial logistic regression analysis using the ‘mlogit’ function and the ubiquitous ‘summary’ function of the results. Note that the reference category is specified as 1, the first value of the outcome variable.
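For instance (the model object name is illustrative, and the predictors are the placeholder names introduced above):
model.1 <- mlogit(y ~ 1 | x1 + x2 + x3, data = mldata, reflevel = "1")
summary(model.1)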
The results show the logistic coefficient (B) for each predictor variable for each alternative category of the outcome variable; alternative category meaning, not the reference category. The logistic
coefficient is the expected amount of change in the logit for each one unit change in the predictor. The logit is what is being predicted; it is the log odds of membership in the category of the outcome
variable which has been specified (here the first value: 1 was specified, rather than the alternative values 2 or 3). The closer a logistic coefficient is to zero, the less influence the predictor
has in predicting the logit. The table also displays the standard error, t statistic, and the p-value. The t test for each coefficient is used to determine if the coefficient is significantly
different from zero. The Pseudo R-Square (McFadden R^2) is treated as a measure of effect size, similar to how R² is treated in standard multiple regression. However, these types of metrics do not
represent the amount of variance in the outcome variable accounted for by the predictor variables. Higher values indicate better fit, but they should be interpreted with caution. The Likelihood Ratio
chi-square test is an alternative test of goodness-of-fit. As with most chi-square based tests, however, it is prone to inflation as sample size increases. Here, we see model fit is significant, χ² =
1291.40, p < .001, which indicates our full model predicts significantly better, or more accurately, than the null model. To be clear, you want the p-value to be less than your established cutoff
(generally 0.05) to indicate good fit. To get the expected B values, we can use the ‘exp’ function applied to the coefficients.
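For example:
exp(coef(model.1))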
The Exp(B) is the odds ratio associated with each predictor. We expect predictors which increase the logit to display Exp(B) greater than 1.0, those predictors which do not have an effect on the
logit will display an Exp(B) of 1.0, and predictors which decrease the logit will have Exp(B) values less than 1.0. Keep in mind, the first two listed (alt2, alt3) are for the intercepts.
Further reading on multinomial logistic regression is limited. Several authors (Garson, 2006; Mertler & Vannatta, 2002; Pedhazur, 1997) provide discussions of binary logistic regression in the
context of graduate level textbooks, which provides insight into multinomial because it is a direct extension. Clearly those authors believe that if one is inclined to understand binary logistic,
then one is also likely to understand multinomial logistic. There is merit in this position because one is an extension of the other and both use maximum likelihood (an ogive function). However;
other authors provide either direct examples of multinomial logistic regression (Schwab, 2002; Tabachnick & Fidell, 2001) or a full discussion of multinomial logistic regression (Aldrich & Nelson,
1984; Fox, 1984; Hosmer & Lemeshow, 1989; Menard, 1995).
Until next time, you can tell everybody this is your song…
References & Resources
Aldrich, J. H., & Nelson, F. D. (1984). Linear probability, logit, and probit models. Thousand Oaks, CA: Sage.
Breslow, N. E., & Day, N. E. (1980). Statistical Methods in Cancer Research. Lyon, UK: International Agency for Research on Cancer.
Croissant, Y. (2011). Package ‘mlogit’. http://cran.r-project.org/web/packages/mlogit/index.html
Fox, J. (1984). Linear statistical models and related methods: With applications to social research. New York: Wiley.
Garson, G. D. (2011). “Logistic Regression”, from Statnotes: Topics in Multivariate Analysis. http://faculty.chass.ncsu.edu/garson/pa765/statnote.htm.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate Data Analysis (5th ed.). Upper Saddle River, NJ: Prentice Hall.
Hoffmann, J. (2003). Generalized linear models: An applied approach. Boston, MA: Allyn & Bacon.
Hosmer, D. W., & Lemeshow, S. (1989). Applied logistic regression. New York: Wiley.
Menard, S. (1995). Applied logistic regression analysis. Thousand Oaks, CA: Sage.
Mertler, C. & Vannatta, R. (2002). Advanced and multivariate statistical methods (2nd ed.). Los Angeles, CA: Pyrczak Publishing.
Pedhazur, E. J. (1997). Multiple regression in behavioral research: Explanation and prediction (3rd ed.). New York: Harcourt Brace.
Schwab, J. A. (2002). Multinomial logistic regression: Basic relationships and complete problems. http://www.utexas.edu/courses/schwab/sw388r7/SolvingProblems/
Tabachnick, B. G. & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Needleham Heights, MA: Allyn and Bacon.
Vittinghoff, E., Shiboski, S. C., Glidden, D. V., & McCulloch, C. E. (2005). Regression Methods in Biostatistics: Linear, Logistic, Survival, and Repeated Measures Models. New York: Springer
Science+Business Media, Inc.
|
{"url":"http://it.unt.edu/benchmarks/issues/2011/08/rss-matters","timestamp":"2014-04-20T21:09:04Z","content_type":null,"content_length":"27102","record_id":"<urn:uuid:85b62876-cbdc-4dec-b3b4-0bbb62ccf493>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
It seems like lots of people around here have problems with random numbers. I thought this might help understand the beast. It helped me.
I used this simple program to do a little study.
#include <iostream>
#include <stdlib.h>
#include <time.h>
#include <fstream>
using namespace std;
int main(int argc, char* argv[])
{
    time_t now = time(&now);
    srand((unsigned int)now);       // seed the generator with the current time
    ofstream randout(argv[1]);      // output file named on the command line
    int randnum;
    for(int x = 20; x > 0; --x)
    {
        randnum = rand() % 6 + 1;   // value from 1 to 6, like a die
        randout << randnum << endl; // write each value to the file
    }
    return 0;
}
You give the program a text file name on the command line, and it stores the 20 random values in the text file, overwriting whatever was there. The numbers are from 1 to 6 (like a die).
I executed this ten times and came out with data like this.
You would expect, if the numbers are truly random, that as you get more and more results you see roughly the same count for each of the possible values, which would give you an average value of 3.5 ((1+2+3+4+5+6)/6 = 3.5). These averages range from 3.05 to 3.75. As you can see, the overall average is 3.44, which I suppose is close enough to 3.5.
That's it, this method works very well for generating random values.
The delay in between rand() is irrelevant. In almost every single PRNG, the next random number is generated by multiplication of the previous random with a constant, followed by addition.
You see rand() often gives the same numbers in a row because the lowest bits aren't usually as random as the upper bits.
Not sure I understand your explanation, but you seem to know more about it than I do. I went back through and tried with and without the pause, and I believe you're right. I was certain I
remembered a big difference with and without. Oh well.
Sample results produced without the pause:
Sample results produced with the pause:
At any rate, I'll be editing my original post so not to confuse anybody.
cool. thanks.
Zach L.
As Cat noted, linear-congruential PRNGs (which most rand() implementations are) have 'bad' low-order bits. To get around this, and generate numbers in the range [low, high], the following is
often much better (more 'random') because the reliance is on high-order bits:
int randnum = low + int(double(high - low) * rand() / (RAND_MAX + 1.0));
A linear-congruential PRNG works as follows:
x[i+1] = (a*x[i]+c) % m
for constants a, c, m. (Note, the above equation is often not a good way to implement it due to potential overflow errors.)
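To make this concrete, here is a minimal sketch of a linear-congruential generator together with the high-order-bit range mapping; the constants a = 1664525, c = 1013904223, m = 2^32 are just one well-known published choice, not what any particular rand() implementation uses.

#include <iostream>

// Minimal linear-congruential generator: x[i+1] = (a*x[i] + c) % m.
class SimpleLCG
{
    unsigned long state;
public:
    explicit SimpleLCG(unsigned long seed) : state(seed) {}

    unsigned long next()
    {
        // m = 2^32 is applied by masking to the low 32 bits
        state = (1664525UL * state + 1013904223UL) & 0xFFFFFFFFUL;
        return state;
    }

    // Map to [low, high] using the high-order bits, as suggested above.
    // (high - low + 1) with u in [0, 1) gives an inclusive range.
    int range(int low, int high)
    {
        double u = next() / 4294967296.0;          // uniform in [0, 1)
        return low + int(double(high - low + 1) * u);
    }
};

int main()
{
    SimpleLCG gen(12345);
    for (int i = 0; i < 10; ++i)
        std::cout << gen.range(1, 6) << ' ';       // simulated die rolls
    std::cout << '\n';
    return 0;
}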
>int randnum = low + int(double(high - low) * rand() / (RAND_MAX + 1.0));
int randnum = low + int(double(high - low) * rand() / RAND_MAX);
Leave off that plus one at the end though. Right Zach?
Zach L.
Thats what I get for copy-n-pasting without paying attention. :rolleyes:
Thanks :)
Excess-3 Code in Digital Electronics
Excess-3 code is an example of an unweighted code. The excess-3 equivalent of a decimal number is obtained by adding 3 to it and then converting the result to binary. For instance, to find the excess-3 representation of the decimal number 4, first 3 is added to 4 to get 7, and then the binary equivalent of 7, i.e. 0111, forms the excess-3 code.
Below is a table showing the excess-3 equivalents of the decimal numbers 0-9:
│Decimal Number │Excess-3 Equivalent │
│ 0 │ 0011 │
│ 1 │ 0100 │
│ 2 │ 0101 │
│ 3 │ 0110 │
│ 4 │ 0111 │
│ 5 │ 1000 │
│ 6 │ 1001 │
│ 7 │ 1010 │
│ 8 │ 1011 │
│ 9 │ 1100 │
Excess-3 code is also known as a self-complementing (or reflective) code: inverting every bit of the excess-3 code of a digit (0-9) gives the excess-3 code of its 9's complement, so the result is still one of these 10 codes. For example, the 1's complement of the code for 9 (1100) is 0011, which is the code for 0.
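To make the conversion concrete, here is a small illustrative sketch (the function name is my own, not from the article); it reproduces the table above and checks the self-complementing property:

#include <bitset>
#include <iostream>
#include <string>

// Excess-3 code of a single decimal digit (0-9): add 3, then take the
// 4-bit binary representation (e.g. excess3(4) == "0111").
std::string excess3(int digit)
{
    return std::bitset<4>(digit + 3).to_string();
}

int main()
{
    for (int d = 0; d <= 9; ++d) {
        std::string code = excess3(d);
        std::string inverted = code;
        for (char& c : inverted)                 // 1's complement: flip each bit
            c = (c == '0') ? '1' : '0';
        // Self-complementing property: inverted equals excess3(9 - d).
        std::cout << d << " -> " << code
                  << "   inverted: " << inverted
                  << " (the code for " << 9 - d << ")\n";
    }
    return 0;
}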
Addition of two numbers in Excess-3 Code
Let's understand it by taking a few examples:
Example 1:
  0101   (excess-3 for 2)
+ 1000   (excess-3 for 5)
  1101   (raw binary sum)
The result 1101 is in excess-6. To obtain the excess-3 equivalent, binary 3 needs to be subtracted from the result as below:
  1101
- 0011   (binary 3)
  1010   (excess-3 for 7)
Example 2:
  0101 1100   (excess-3 for 29)
+ 0110 1100   (excess-3 for 39)
  1100 1000   (raw binary sum, not yet a valid excess-3 result)
Consider the 4 leftmost bits as column 1 and the 4 rightmost bits as column 2. If a column generates a carry during the addition, the excess-3 equivalent is obtained by adding the binary equivalent of 3 to that column; the binary equivalent of 3 is subtracted from any column that does not generate a carry.
In this example, a carry is generated by column 2 (1100 + 1100) and no carry is generated by column 1. Thus, the excess-3 equivalent is calculated as follows:
  1100 1000
- 0011 +0011
  1001 1011   (excess-3 for 68)
For excess-3 addition of decimal numbers, first convert each decimal number, digit by digit, into its excess-3 code and then perform the addition as explained above.
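As a rough sketch of the digit-by-digit procedure just described (the helper name and data layout are my own choices, not from the article):

#include <bitset>
#include <cstddef>
#include <iostream>
#include <vector>

// Illustrative excess-3 addition, one decimal digit per element, least
// significant digit first; each element holds a 4-bit excess-3 code.
// (A carry out of the most significant digit is not handled in this sketch.)
std::vector<int> addExcess3(const std::vector<int>& a, const std::vector<int>& b)
{
    std::vector<int> result;
    int carry = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        int sum = a[i] + b[i] + carry;   // plain binary addition of the column
        carry = (sum > 15) ? 1 : 0;      // did this column generate a carry?
        sum &= 0xF;                      // keep the low 4 bits
        sum += carry ? 3 : -3;           // correction rule: +3 with carry, -3 without
        result.push_back(sum & 0xF);
    }
    return result;
}

int main()
{
    // Example 2 above: 29 + 39, codes stored least significant digit first.
    std::vector<int> a = {0b1100, 0b0101};   // 29 -> 0101 1100
    std::vector<int> b = {0b1100, 0b0110};   // 39 -> 0110 1100
    std::vector<int> r = addExcess3(a, b);
    for (int i = (int)r.size() - 1; i >= 0; --i)
        std::cout << std::bitset<4>(r[i]) << ' ';   // prints 1001 1011, i.e. 68
    std::cout << '\n';
    return 0;
}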
Hope you find the information presented here useful. Feel free to leave your footprints in the comments section below for any queries or suggestions.
1. To the condition and written well, tyvm to the info.
2. thanks 4 helping in such a simple and perfect way.
□ Thanks Shan for such a nice feedback. We really appreciate it..!!
3. I have one query i.e why we go for Excess 3 code ? Is binary code not sufficient? plz reply me
□ Hi Girish,
Excess-3 code is binary only. Binary means any number can be represented using 0 and 1. As defined by Wiki, Excess-3 binary-coded decimal code is also called as biased representation. It was
used on some older computers with a pre-specified number 3 as a biasing value. It is a way to represent values with a balanced number of positive and negative numbers.
The advantage of excess-3 code over BCD code is that the 9's complement of a digit, which is needed for subtraction, is easy to find: just invert the bits of its excess-3 code.
Let us know if you have further queries on this. Thank you for stopping by.
4. ….i don’t understand the meaning of excess 3 code.can i get daily update on that through this email:dynamicrich4u@yahoo.com
□ Hi Okezie,
Do let us know if above post help you to understand excess-3 code to some extent?
5. Hi, please explain this to me. My understanding is that I can add a 3 to any number and convert to binary. What kind of numbers am I allowed to add a 3 to?
□ Hi Mots,
Your understanding is correct. Just to be precise, you can add 3 to any “decimal” number and convert it to binary. So in your words, you are allowed to add a 3 to decimal number to get it’s
excess-3 equivalent.
Hope that helps. Let us know.
6. Why do we add only 3…??? I mean why not any other number??
7. Hi Rishwa,
For an Excess-N number, N is the excess amount and N is added to the decimal number. Similarly, for an Excess-3 number, 3 is the excess amount and hence 3 is added to the decimal number to get an
excess-3 equivalent.
Hope the answer clarifies your doubt.
8. Its simple…!! thanks for that……!
But why to add 3 to the column generating carry and subtract 3 to the column with no carry generated ????
help me please..!!!
9. please explain for me the the substractionn of excess 3 codes.
10. It is difficult to write the binary equivalent of a number like 45326 directly, so we use BCD (binary coded decimal): each decimal digit (0-9) is represented in binary, so 45326 is coded digit by digit as 4 5 3 2 6. If we know the binary equivalents of 0 to 9, we can represent any number.
11. Hi,
why are we writting excess-3 code only upto 9 and not more than that?
12. Hi,
why are we writing excess-3 code only upto 9 and not more than that?
13. Hi Lavaynya,
We have provided only a snapshot of excess-3 codes here. You surely can write excess-3 codes for numbers greater than 9.
Let us know for any further queries.
□ everyone has gud experience
true said ………everyone.
14. thanks!!! for simplifying it all…. loving this electronics stuff now
□ Thanks Jill
15. What is the use of this code in parity checking?
16. i have a question. why excess 3 code is used and not excess 4?
17. Hi Amit,
We have explained above to Rishwa:
For an Excess-N number, N is the excess amount and N is added to the decimal number. Similarly, for an Excess-3 number, 3 is the excess amount and hence 3 is added to the decimal number to get an
excess-3 equivalent.
Hope the answer clarifies your doubt.
18. hii,
plz exaplain what is excess-3 code
explain with many example
19. what is excees -3 code
explain with some many example
20. thats so simple lamguage easly understand .thanks
21. in the example 2 why did the answer turn to 92
Pls explain
22. Whats the main use of excess 3 code in digital elecronics? Is there a similarity between excess 3 code and parity bits?
23. Tell me convesion of number into excess 3 code?
24. Hi Rohit,
We have explained in the post above. Please let us know of any specific queries you have.
25. thank you. very nice and simple way.
□ Thanks M.J.
26. If we try to find the excess-3 code of 108 by adding 333 to it, we get the answer 0100 0100 0001; but the other way, if I first convert 108 digit by digit and then add the binary equivalent of 3 to each digit, the answer will be 0100 0011 1011. My question is why such a difference arises, i.e. one method must be wrong, so please tell me which method is right.
27. Hi Rajni,
You should first find excess-3 equivalent of 108 and 333. Now add excess-3 equivalents of 108 and 333 and adjust the sum as shown in example 1 and 2 above.
28. Won't inverting the bits of Excess-3 give the 9's complement of a number (and not the 1's complement)?
29. how to add 16 and 29 in excess 3 addition ????
Post a reply
hi bobbym and BeamReacher,
Firstly, I agree with MrWhy. There are too many variables to correctly determine the nature of the curve. But let's assume it is part of a circle.
In my diagram, I've converted the distances to feet.
The following equations apply (a is the half-angle at the centre in radians, R is the radius and d is the rise, all in feet):

(R - d)^2 + (R sin a)^2 = R^2
R sin a = 2640
R a = 2640.5

From these last two

sin(a)/a = 2640/2640.5

I used trial and improvement to find a = 0.03370775.... radians.

This gives the radius as

R = 2640.5/a = 78335 feet (approximately).

From the top two equations

(R - d)^2 + 2640^2 = R^2, i.e. d^2 - 2Rd + 2640^2 = 0.

Using the value of R and the quadratic formula

d = 44.498509.....
(My earlier answer of 63 was because I had used 2641 rather than 2640.5 for the arc length.)
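A quick numerical check of that trial-and-improvement step (a small sketch of my own, not part of the original post):

#include <cmath>
#include <iostream>

int main()
{
    const double halfChord = 2640.0;   // feet
    const double halfArc   = 2640.5;   // feet

    // Solve sin(a)/a = halfChord/halfArc by bisection.
    double lo = 1e-6, hi = 0.5;
    for (int i = 0; i < 60; ++i) {
        double a = 0.5 * (lo + hi);
        // sin(a)/a decreases as a grows, so move the bracket accordingly.
        if (std::sin(a) / a > halfChord / halfArc)
            lo = a;
        else
            hi = a;
    }
    double a = 0.5 * (lo + hi);
    double R = halfArc / a;
    double d = R - std::sqrt(R * R - halfChord * halfChord);

    std::cout << "a = " << a << " rad, R = " << R
              << " ft, d = " << d << " ft\n";   // d comes out near 44.5 ft
    return 0;
}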
Summary: MATHEMATICS OF COMPUTATION
Volume 71, Number 239, Pages 909-922
S 0025-5718(02)01439-4
Article electronically published on March 22, 2002
DOUGLAS N. ARNOLD, DANIELE BOFFI, AND RICHARD S. FALK
Abstract. We consider the approximation properties of finite element spaces on quadrilateral meshes. The finite element spaces are constructed starting with a given finite dimensional space of functions on a square reference element, which is then transformed to a space of functions on each convex quadrilateral element via a bilinear isomorphism of the square onto the element. It is known that for affine isomorphisms, a necessary and sufficient condition for approximation of order r+1 in L^p and order r in W^1_p is that the given space of functions on the reference element contain all polynomial functions of total degree at most r. In the case of bilinear isomorphisms, it is known that the same estimates hold if the function space contains all polynomial functions of separate degree r. We show, by means of a counterexample, that this latter condition is also necessary. As applications, we demonstrate degradation of the convergence order on quadrilateral meshes as compared to rectangular meshes
Redan Calculus Tutor
Find a Redan Calculus Tutor
...This was an experience I enjoyed so much that I eventually became a classroom teacher. 3. After 1 year as an HVAC Design Engineer, I started teaching math (Algebra, Geometry, Algebra II), then AP Physics and AP Calculus for the last 15 years. 4. My AP exam pass rate was 100% for the first 3 years. Every year since, almost all my AP Calculus students (math and physics) have passed the AP exam.
2 Subjects: including calculus, physics
...I majored in electrical engineering and currently work in the power industry. My love for math has grown since grade school which prompted me to take all of the math courses that I could in
college. Before I transferred to Clemson, I attended Newberry College where I maintained a GPA above 3.0 and majored in Math and Computer Science.
14 Subjects: including calculus, geometry, algebra 1, algebra 2
...My daytime job is tutoring college students in math and electronics. I don't just give out the answers but I help the student with the methods to find the answers. I can help with algebra I &
II, geometry, trigonometry, pre-calculus, math I, II and III.
22 Subjects: including calculus, geometry, GRE, ASVAB
...I also have all kinds of diagnostic tests that help me to gauge a student's true standard, weaknesses, and strengths. Results from these tests save time and, for that matter, the cost of tutoring on the student's part. I have a BSc.
30 Subjects: including calculus, chemistry, physics, geometry
...In addition, I have worked for two years in a prosthetic laboratory designing new tools for persons with limb loss. Math and science have opened many doors for me and they can do the same for you! Differential Equations is an intimidating and potentially frustrating course. The course is usually taken by engineering students and taught by mathematics professors.
15 Subjects: including calculus, physics, algebra 1, trigonometry
TLNise: TLNise stands for Two-Level Normal independent sampling estimation, and enables inference about the parameters of a 2-level Normal hierarchical model based on independent draws from their
exact posterior distributions. This is in contrast to MCMC methods, which produces correlated draws from a Markov chain that converges to the exact distribution.
rcwish: The SPlus function rcwish generates random draws from the Wishart and related distributions.
To use TLNise in R, go to http://cran.r-project.org/web/packages/tlnise/index.html and download the tlnise R package. This does not require a Fortran compiler to use.
norm.hm: The S-Plus function norm.hm performs empirical Bayes inference for a two-level normal hierarchical model with univariate outcomes. The file normhm.txt contains the S-Plus source code, which
must be read into S-Plus. The file normhm.pdf is a README file.
st: Saveold degrades variables' formats
From "Sergiy Radyakin" <serjradyakin@gmail.com>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject st: Saveold degrades variables' formats
Date Sun, 17 Aug 2008 21:28:22 -0400
Dear All
Last week on Aug 13, there was a question regarding reading Stata 10
files in Stata 9 in the thread "st: Stata 10 files in Stata 9".
Several answers were given to this question , i.e.:
1) use saveold command in Stata 10 to save a file in Stata 9 format
2) use Stat\Transfer 9 or another conversion program
3) my answer to convert the file by stripping-off extra formatting
bytes and substituting the version signature
I don't know if the person who asked the question has resolved the
issue, but this weekend I had some time to implement just what I have
said. Two things became apparent:
1. this is doable and works just fine.
2. Stata does not convert the dataset correctly with the command
saveold in at least one case as can be illustrated below:
------ Step I (to be performed in Stata 10) --------
input my_date
format my_date %dM-D-Y
saveold R:\saveold
-------- Step II (to be performed in Stata 9) ----------
use r:\saveold
format my_date %dM-D-Y
To those who do not have both versions of Stata or don't want to spend
time actually running the above example, the following happens:
saveold changes the format of the variable my_date from %dM-D-Y to the
simpler format %d for no apparent reason. Note that this format is
valid in both Stata 10 and Stata 9, and (in my understanding) must be preserved.
Stata 10 for Windows, Aug 11, 2008
Stata 9.2 for Windows, July 20, 2007
I assume my program use10.ado can be somewhat more useful if it
converted the data preserving all formats, rather then simply
degrading format to a default. So I would be happy to know:
1) is there any motivation to degrade date format as -saveold- does it?
2) what is the exact strategy of -saveold- while converting formats
from Stata10 to Stata9?
3) is it possible to determine in a comprehensive manner if a given
string is a valid format in Stata 9? (e.g. "the first symbol must be
...., if the first symbol is ... then the second must be ...." etc)
If not, what might be a good rule for substituting formats?
Thank you, Sergiy Radyakin
North Easton Calculus Tutor
Find a North Easton Calculus Tutor
Experienced, dedicated, expert tutor specializing in math, any section of test prep, and high-level critical thinking, reading, and writing. I'm friendly, patient, eager to help you succeed - and
I guarantee you won't find anyone more knowledgeable or helpful, anywhere. (If you don't agree, I won't...
47 Subjects: including calculus, English, reading, chemistry
...I have formal education in Differential Equations at both undergraduate and graduate levels. The courses I've taught and tutored required differential equations, so I have experience working
with them in a teaching context. In addition to undergraduate level linear algebra, I studied linear algebra extensively in the context of quantum mechanics in graduate school.
16 Subjects: including calculus, physics, geometry, biology
...Typically this involves having students work on problems relevant to the material they are studying. I make sure that students do as much as possible on their own, and I take on for myself the
role as a guide rather than simply an instructor. I use many examples and problems, starting with easy ones and working up to harder ones.
9 Subjects: including calculus, physics, geometry, algebra 1
...Statistics offers many new concepts which, depending on how it's taught, can be overwhelming at times. I have experience taking topics in statistics which students find challenging or intimidating and placing them in an easier-to-understand context. I have taught math for an SAT prep company.
24 Subjects: including calculus, chemistry, physics, statistics
I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor
elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math. I am a chemistry major at Boston College.
13 Subjects: including calculus, chemistry, geometry, biology
[R] Weights in binomial glm
Jan van der Laan djvanderlaan at gmail.com
Fri Apr 16 14:11:00 CEST 2010
I have some questions about the use of weights in binomial glm as I am
not getting the results I would expect. In my case the weights I have
can be seen as 'replicate weights'; one respondent i in my dataset
corresponds to w[i] persons in the population. From the documentation
of the glm method, I understand that the weights can indeed be used
for this: "For a binomial GLM prior weights are used to give the
number of trials when the response is the proportion of successes."
From "Modern applied statistics with S-Plus 3rd ed." I understand the
However, I am getting some strange results. I generated an example:
Generate some data which is similar to my dataset
> Z <- rbinom(1000, 1, 0.1)
> W <- round(rnorm(1000, 100, 40))
> W[W < 1] <- 1
Probability of success can either be estimated using:
> sum(Z*W)/sum(W)
[1] 0.09642109
Or using glm:
> model <- glm(Z ~ 1, weights=W, family=binomial())
Warning message:
In glm.fit(x = X, y = Y, weights = weights, start = start, etastart =
etastart, :
fitted probabilities numerically 0 or 1 occurred
> predict(model, type="response")[1]
These two results are obviously not the same. The strange thing is
that when I scale the weights, such that the total equals one, the
probability is correctly estimated:
> model <- glm(Z ~ 1, weights=W/sum(W), family=binomial())
Warning message:
In eval(expr, envir, enclos) : non-integer #successes in a binomial glm!
> predict(model, type="response")[1]
However scaling of the weights should, as far as I am aware, not have
an effect on the estimated parameters. I also tried some other
scalings. And, for example scaling the weights by 20 also gives me the
correct result.
> model <- glm(Z ~ 1, weights=W/20, family=binomial())
Warning message:
In eval(expr, envir, enclos) : non-integer #successes in a binomial glm!
> predict(model, type="response")[1]
Am I misinterpreting the weights? Could this be a numerical problem?
Patent US5909656 - Protective relay with improved DFT function
Publication number US5909656 A
Publication type Grant
Application number US 08/811,646
Publication date Jun 1, 1999
Filing date Mar 5, 1997
Priority date Mar 5, 1997
Fee status Lapsed
Publication number 08811646, 811646, US 5909656 A, US 5909656A, US-A-5909656, US5909656 A, US5909656A
Inventors Lifeng Yang
Original Assignee Abb Power T&D Company Inc.
Protective relay with improved DFT function
US 5909656 A
A method for applying a modified Discrete Fourier Transform (DFT) in a protective relaying system involves five basic steps: The first step, S1, comprises measuring voltage and current time-domain
samples v(k), i(k). The next step, S2, involves the computation of the DFT of the DC component(s) (V.sub.DC (k), I.sub.DC (k)) of the voltage and current samples. The next step, S3, comprises the
computation of the regular DFT (V(k), I(k)). Next, in step S4, the modified DFTs are computed as, V.sub.m (k)=V(k)-V.sub.DC (k), and I.sub.m (k)=I(k)-I.sub.DC (k). In step S5 the modified DFT values
representing the desired phasors are employed to carry out various protective relaying functions.
I claim:
1. A method for deriving a phasor representation of a current or voltage waveform on a transmission line, wherein said waveform includes a decaying DC component, comprising the steps of:
(a) measuring time-domain samples (v(k) or i(k)) of said waveform, where k is an index referring to the sample number;
(b) computing, on the basis of said samples a, Discrete Fourier Transform (DFT) of the decaying DC component (V.sub.DC (k) or I.sub.DC (k)) of the waveform;
(c) computing a DFT (V(k) or I(k)) of said waveform, said DFT (V(k) or I(k)) being computed in accordance with the equation(s): ##EQU7## where K and N are predefined constants and where n is an index
referring to the sample number;
(d) computing a modified DFT (V.sub.m (k) or I.sub.m (k)) as a function of said DFT (V(k) or I(k)) and said DFT of said DC component (V.sub.DC (k) or I.sub.DC (k)), where m is an index referring to a
modified sample number, wherein said modified DFT yields said phasor and wherein K=N/2 and said DFT of the decaying DC component (I.sub.DC (k)) is computed in accordance with the equation: ##EQU8##
where MF1 and MF2 are modification factors and are derived as follows: ##EQU9## wherein N is the number of samples per cycle, T1 is a time constant defining a rate at which the DC component decays,
and T.sub.s is a sample interval; and
(e) performing a prescribed power system or protective relaying function using said phasor.
2. A method for deriving a phasor representation of a current or voltage waveform on a transmission line, wherein said waveform includes a decaying DC component comprising the steps of:
(a) measuring time-domain samples (v(k) or i(k)) of said waveform, where k is an index referring to the sample number;
(b) computing, on the basis of said samples, a Discrete Fourier Transform (DFT) of the decaying DC component (V.sub.DC (k) or I.sub.DC (k)) of the waveform;
(c) computing a DFT V(k) or I(k)) of said waveform, said DFT (V(k) or I(k)) being computed in accordance with the equation(s): ##EQU10## where K and N are predefined constants and where n is an index
referring to the sample number;
(d) computing a modified DFT (V.sub.m (k)) or I.sub.m (k)) as a function of said DFT (V(k) or I(k)) and said DFT of said DC component (V.sub.DC (k) or I.sub.DC (k)), where m is an index referring to
a modified sample number, wherein said modified DFT yields said phasor; and wherein K=N and said DFT of the decaying DC component (I.sub.DC (k)) is computed in accordance with the equation: ##EQU11##
where MF1 and MF2 are modification factors and are derived as follows: ##EQU12## wherein N is the number of samples per cycle, T1 is a time constant defining a rate at which the DC component decays,
and T.sub.s is a sample interval; and
(e) performing a prescribed power system or protective relaying function using said phasor.
3. A method for deriving a phasor representation of a current or voltage waveform on a transmission line, wherein said waveform includes a decaying DC component, comprising the steps of:
(a) measuring time-domain samples (v(k) or i(k)) of said waveform, where k is an index referring to the sample number;
(b) computing, on the basis of said samples, a Discrete Fourier Transform (DFT) of the decaying DC component (V.sub.DC (k) or I.sub.DC (k)) of the waveform;
(c) computing a DFT (V(k) or I(k)) of said waveform;
(d) computing a modified DFT (V.sub.m (k) or I.sub.m (k)) as a function of said DFT (V(k) or I(k)) and said DFT of said DC component (V.sub.DC (k) or I.sub.DC (k)), where m is an index referring to a
modified sample number, wherein said modified DFT yields said phasor, and wherein
(1) said modified DFT is computed in accordance with the equations,
V.sub.m (k)=V(k)-V.sub.DC (k) or
I.sub.m (k)=I(k)-I.sub.DC (k);
(2) said DFT (V(k) or I(k)) is computed in accordance with the equation: ##EQU13## wherein K and N are predefined constants; (3) said DFT of the DC component is computed in accordance with one of the
following equations:
(i) for K=N/2, ##EQU14## and,
(ii) for K=N, ##EQU15## where MF1 and MF2 are modification factors and are derived as follows; ##EQU16## wherein N is the number of samples per cycle, T1 is a time constant defining a rate at which the DC component decays, and T.sub.s is a sample interval; and
(e) performing a prescribed power system or protective relaying function using said phasor.
4. A system for deriving a phasor representation of a current or voltage waveform on a transmission line, wherein said waveform includes a decaying DC component, comprising:
(a) means for measuring time-domain samples (v(k) or i(k)) of said waveform where k is an index referring to the sample number;
(b) means for computing, on the basis of said samples, a Discrete Fourier Transform (DFT) of the decaying DC component (V.sub.DC (k) or I.sub.DC (k)) of the waveform;
(c) means for computing a DFT (V(k) or I(k)) of said waveform, said DFT (V(k) or I(k)) being computed in accordance with the equation(s): ##EQU17## wherein K and N are predefined constants and
wherein n is an index referring to the sample number;
(d) means for computing a modified DFT (V.sub.m (k)) or I.sub.m (k)) as a function of said DFT (V(k) or I(k)) and said DFT of said DC component (V.sub.DC (k) or I.sub.DC (k)), where m is an index
referring to a modified sample number, wherein said modified DFT yields said phasor and wherein K=N/2 and said DFT of the decaying DC component (I.sub.DC (k)) is computed in accordance with the
equation: ##EQU18## where MF1 and MF2 are modification factors and are derived as follows: ##EQU19## wherein N is the number of samples per cycle, T1 is a time constant defining a rate at which the
DC component decays, and T.sub.s is a sample interval; and
(e) means for performing a prescribed protective relaying function using said phasor.
5. A system for deriving a phasor representation of a current or voltage waveform on a transmission line, wherein said waveform includes a decaying DC component, comprising:
(a) means for measuring time-domain samples (v(k) or i(k)) of said waveform where k is an index referring to the sample number;
(b) means for computing, on the basis of said samples, a Discrete Fourier Transform (DFT) of the decaying DC component (V.sub.DC (k) or I.sub.DC (k)) of the waveform;
(c) means for computing a DFT (V(k) or I(k)) of said waveform said DFT V(k) or I(k) is computed in accordance with the equation(s): ##EQU20## wherein K and N are predefined constants and wherein n is
an index referring to the sample number;
(d) means for computing a modified DFT (V.sub.m (k) or I.sub.m (k)) as a function of said DFT (V(k) or I(k)) and said DFT of said DC component (V.sub.DC (k) or I.sub.DC (k)), where m is an index
referring to a modified sample number, wherein said modified DFT yields said phasor and wherein K=N and said DFT of the decaying DC component (V.sub.DC (k)) is computed in accordance with the
equation: ##EQU21## where MF1 and MF2 are modification factors and are derived as follows: ##EQU22## wherein N is the number of samples per cycle, T1 is a time constant defining a rate at which the
DC component decays, and T.sub.s is a sample interval.
6. A system for deriving a phasor representation of a current or voltage waveform on a transmission line, wherein said waveform includes a decaying DC component, comprising:
(a) means for measuring time-domain samples (v(k) or i(k)) of said waveform where k is an index referring to the sample number;
(b) means for computing, on the basis of said samples, a Discrete Fourier Transform (DFT) of the decaying DC component (V.sub.DC (k) or I.sub.DC (k)) of the waveform;
(c) means for computing a DFT (V(k) or I(k)) of said waveform;
(d) means for computing a modified DFT (V.sub.m (k) or I.sub.m (k)) as a function of said DFT (V(k)) or I(k)) and said DFT of said DC component (V.sub.DC (k) or I.sub.DC (k)), wherein m is an index
referring to a modified sample number, wherein said modified DFT yields said phasor; and wherein:
(1) said modified DFT is computed in accordance with the equations,
V.sub.m (k)=V(k)-V.sub.DC (k) or
I.sub.m (k)=I(k)-I.sub.DC (k);
(2) said DFT (V(k), I(k)) is computed in accordance with the equation: ##EQU23## wherein K and N are predefined constants; (3) said DFT of the DC component is computed in accordance with one of the
following equations:
(i) for K=N/2, ##EQU24##
(ii) for K=N, ##EQU25## where MF1 and MF2 are modification factors and are derived as follows: ##EQU26## wherein N is the number of samples per cycle, T1 is a time constant defining a rate at which the DC component decays, and T.sub.s is a sample interval; and
(e) means for performing a prescribed protective relaying function using said phasor.
The present invention relates generally to protective relaying, and more particularly to a microprocessor- or DSP-based protective relay with an improved Discrete Fourier Transform (DFT) function.
Electrical transmission lines and power generation equipment must be protected against faults and consequent short circuits, which could cause a collapse of the power system, equipment damage, and
personal injury. It is the function of the protective relays, which monitor AC voltages and currents, to locate line faults and initiate isolation by the tripping of circuit breakers. Protective
relays generally perform one or more of the following functions: (a) monitoring the system to ascertain whether it is in a normal or abnormal state; (b) metering, which involves measuring certain
electrical quantities for operational control; (c) protection, which typically involves tripping a circuit breaker in response to the detection of a short-circuit condition; and (d) alarming, which
provides a warning of some impending problem. Fault location, e.g., is associated with the protection function and involves measuring critical system parameters and, when a fault occurs, quickly
making a rough estimate of the fault location and of certain characteristics of the fault so that the power source can be isolated from the faulted line; thereafter, the system makes a comprehensive
evaluation of the nature of the fault.
Modern protective relays employ microprocessors and/or digital signal processors (DSPs) to process the voltage and current waveforms measured on the protected transmission line (the term
"transmission line" as employed herein is intended to cover any type of electrical conductor, including high power conductors, feeders, and transformer windings). Such processing may include the
computation of a DFT. For example, U.S. Pat. No. 5,592,393, Jan. 7, 1997, titled "Method and System for Providing Protective Relay Functions," describes a system that uses the DFT function to compute
instantaneous values of fundamental, second harmonic and fifth harmonic components. U.S. Pat. No. 5,172,329, Dec. 15, 1992, "Microprocessor Digital Protective Relay for Power Transformers," describes
a system that uses the DFT function to compute voltage and current phasors.
The conventional DFT exhibits poor performance if the input signal contains a decaying DC component having a continuous frequency spectrum. Therefore, the DC signal component, or offset, is typically
filtered out of the input signal before the DFT function is carried out. There are a number of methods to deal with such DC offset, including the use of: (1) a digital mimic circuit, (2) half-cycle and full-cycle compensation, (3) a parallel filter, and (4) a cosine filter. However, certain problems are associated with each of these methods. The digital mimic circuit is very sensitive to noise and degrades the response of the DFT in the presence of noise. The half-cycle and full-cycle compensation techniques are similar, and both cause computational problems when the decaying DC component is very small. The disadvantages of the parallel filter method are that the line time constant is needed for an integration filter and the computational burden is high. The cosine
filter exhibits poor performance in attenuating harmonics, and also may involve a quarter-cycle delay in obtaining an orthogonal part of the DFT. The latter may be a significant disadvantage in
applications in which speed is crucial.
Accordingly, a primary object of the present invention is to provide an improved DFT process and protective relay utilizing the improved DFT. The invention is especially intended for protective
relaying applications in which accurate voltage and current phasors must be derived.
A method or system for deriving a phasor representation of a current or voltage waveform in accordance with the present invention comprises the steps of, or means for, measuring time-domain samples
(v(k), i(k)) of the waveform; computing, on the basis of the samples, a DFT (V.sub.DC (k), I.sub.DC (k)) of the decaying DC component of the waveform; computing a DFT (V(k), I(k)) of the waveform;
computing a modified DFT (V.sub.m (k), I.sub.m (k)) as a function of the DFT (V(k), I(k)) and the DFT, V.sub.DC (k) and/or I.sub.DC (k), of the DC component, wherein the modified DFT yields the
desired phasor; and performing a prescribed protective relaying function using the phasor. The prescribed relaying function may include, e.g., fault typing and/or fault location, although many other
applications for phasors are known. Moreover, in the presently preferred embodiments of the invention, the modified DFT is computed in accordance with the equations,
V.sub.m (k)=V(k)-V.sub.DC (k) or
I.sub.m (k)=I(k)-I.sub.DC (k)
where V(k) and I(k) represent the regular DFTs of the voltage and current waveforms, respectively, and V.sub.DC (k) and I.sub.DC (k) represent the DFTs of the DC components of the voltage and current
waveforms. These DFTs are computed in accordance with the algorithms described below in connection with a detailed description of the presently preferred embodiments. The DFT algorithms involve
certain parameters in addition to the sample data to be transformed by the DFT. Such parameters include the number of samples per cycle, denoted "N", the time constant defining the rate at which the
DC component decays, denoted "T1", and the sample interval, denoted "T.sub.s ". For example, if analog voltage and current signals are sampled at a rate of 24 samples per power system cycle, then N
will be 24 and T.sub.s, the time period between successive samples, will be 1/fN. Other features of the invention are disclosed below.
FIG. 1 schematically depicts a protective relay in accordance with the present invention.
FIG. 2 is a flowchart of a DFT process in accordance with the present invention.
FIG. 1 depicts one presently preferred embodiment of a microprocessor-based protective relay in accordance with the present invention. As shown, the relay comprises current and voltage transducers
10, filters 12, and a multiplexor 14, the latter outputting an interleaved stream of analog phase current and voltage signal samples, as well as neutral current samples. The analog multiplex output
by the multiplexor 14 is digitized by an analog-to-digital converter 16. The output of the analog-to-digital converter 16 is fed to a DSP 18. The DSP 18 employs a DFT, described below, to produce
phasor data for each of the sampled channels. The phasor data is stored in a memory 20. The phasor data in the memory 20 is fed via a data bus to a central processing unit (CPU) board 22. The CPU
board 22 includes a microprocessor 22-1, random access memory 22-2, and read only memory (ROM) 22-3. The ROM 22-3 contains program code controlling the microprocessor 22-1 in performing fault typing,
fault location, reporting, and other protective relaying functions. The random access memory 22-2 may include a pre-fault segment of memory and a post-fault segment of memory, which may be employed
(as described, e.g., in U.S. Pat. No. 5,428,549, Jun. 27, 1995, "Transmission Line Fault Location System") in performing the various protective relaying functions. The CPU board 22 may output fault
data to a protection/alarming block 24 that performs protection and alarming functions such as tripping a circuit breaker or sounding an alarm as appropriate.
FIG. 2 is a flowchart of a modified DFT process or method in accordance with the present invention. The modified DFT process will first be explained generally with reference to FIG. 2, and then a
detailed mathematical explanation will be provided.
The presently preferred application of the modified DFT involves a protective relay of the kind depicted in FIG. 1. The inventive process for applying the modified DFT includes five basic steps,
denoted S1 through S5 in FIG. 2. The first step, S1, comprises measuring voltage and current time-domain samples v(k), i(k), where k is an index referring to the sample number. The next step, S2,
involves the computation of the DFT of the DC component(s) (V.sub.DC (k), I.sub.DC (k)) of the voltage and/or current samples (whether voltage or current samples, or both, are used to compute the DFT
will depend upon the particular relaying application(s) involved). The next step, S3, comprises the computation of the regular DFT (V(k), I(k)) of the voltage and/or current component(s). Next, in
step S4, the modified DFTs are computed as,
V.sub.m (k)=V(k)-V.sub.DC (k), and
I.sub.m (k)=I(k)-I.sub.DC (k).
(In some applications it will only be necessary to compute the modified voltage or current DFT.) In step S5 the modified DFT values, which represent the desired phasor(s), are used to perform the
various well known protective relaying or similar power system functions.
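For orientation only, the sketch below illustrates steps S3 and S4 for a single current channel using one common full-cycle DFT formulation. It is not the patent's exact algorithm: the claimed DC-component DFT is built from one extra sample and the modification factors MF1 and MF2, whose closed forms appear in equations not reproduced in this text, so the dcEstimate argument here simply stands in for whatever step S2 produces.

#include <complex>
#include <vector>
#include <cmath>

// Step S3: regular full-cycle DFT phasor of the fundamental, computed from
// the N most recent samples ending at index k (assumes k >= N-1).  One
// common formulation: X(k) = (2/N) * sum_{n=0}^{N-1} x(k-n) * exp(-j*2*pi*n/N).
std::complex<double> fullCycleDFT(const std::vector<double>& x, int k, int N)
{
    const double PI = 3.14159265358979323846;
    std::complex<double> sum(0.0, 0.0);
    for (int n = 0; n < N; ++n) {
        double angle = 2.0 * PI * n / N;
        sum += x[k - n] * std::complex<double>(std::cos(angle), -std::sin(angle));
    }
    return (2.0 / N) * sum;
}

// Step S4: I_m(k) = I(k) - I_DC(k).  Here dcEstimate stands in for the
// step-S2 result; in the patent it is derived from one extra sample and the
// modification factors MF1/MF2.
std::complex<double> modifiedDFT(const std::vector<double>& i, int k, int N,
                                 std::complex<double> dcEstimate)
{
    return fullCycleDFT(i, k, N) - dcEstimate;
}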
Procedure for Computing Modified DFT
A procedure for computing a modified current DFT I.sub.m (k) in accordance with the present invention will now be described in detail. In the following description, the transformed signal is assumed
to be a current waveform i(t) measured on one of the phase conductors of a transmission line. It will be apparent to those skilled in the art that the same algorithm could be used to derive the
voltage transform V.sub.m (k).
The inventive DFT procedure provides an efficient way to obtain accurate phasor representations of the current and voltage waveforms on a transmission line. As discussed above, the effects of a
decaying DC component are minimized. The invention requires only one time-domain sample in addition to a regular DFT data window. The procedure may be carried out by first computing the desired phasor using the regular DFT for a certain length of data window, and then using one sample taken one cycle earlier to make the correction. The computational burden involved in the correction is small and the time delay is just one sample.
The regular DFT can be stated as: ##EQU1## If the signal contains only a decaying DC component, ##EQU2## (where T1 is the decay time constant and B is a constant) then the DFT output becomes, ##EQU3## Similarly, for K=N, ##EQU4## Now let us examine what would happen when the signal contains the fundamental (sin(ωt)) and all harmonics (sin(nωt)) in addition to the decaying DC (e.sup.-t/T1).
Assume that the time domain signal has the following form: ##EQU5##
It is seen that for the half-cycle correction (i(k-N/2)+i(k)), the odd harmonics (sin(nωt)) are canceled but the even harmonics (sin(2nωt)) remain and are doubled in magnitude. Therefore, the even
harmonics contribute an error in the correction. In contrast, the full cycle correction (i(k-N)-i(k)) eliminates all harmonics. It should be noted that we assume the decaying DC component starts
after the fault's inception; therefore the minimum data window will be (N/2+1) for the half-cycle correction and (N+1) for the full-cycle correction.
We have now developed the formulas for estimating the regular DFT error due to a decaying DC component for both half-cycle and full-cycle DFT algorithms. The correction of the regular DFT output may
now be stated as follows: Given the output from the regular DFT, ##EQU6## then the modified DFT output (I.sub.m (k)) will be,
I.sub.m (k)=I(k)-I.sub.dc (k)
The method described above can be used to derive voltage and current phasors when a DC component is present. The effectiveness of this method depends upon the line time constant T1. In other words,
the invention will be most effective when the estimated line time constant in the correction is close to the actual time constant. For power system relaying applications, different line time
constants may be used for voltage and current signals. For example, a voltage signal v(t) from a potential transformer typically contains a very small DC component, and so it may not be necessary to
make any corrections to the DFT phasor. However, if the voltage is taken from a capacitor coupled voltage transformer (CCVT), it may contain a significant, long lasting DC component for a low voltage
fault. In this case, DC correction may be desirable, and a larger time constant T1 may be used. For a current signal, the time constant may be obtained from the zone-one reach impedance. The time
constant T1 may be defined as the ratio of the line inductance to the line resistance. It is a measure of the rate at which the DC component decays. Thus, for example, if the line impedance is Z=
R+jX, T1=X/2πfR, where f is the power system frequency.
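As a small numerical illustration of that relation (the impedance values below are made up for the example):

#include <iostream>

// Decay time constant T1 = X / (2*pi*f*R) from the line impedance Z = R + jX.
double timeConstant(double R_ohms, double X_ohms, double f_hz)
{
    const double PI = 3.14159265358979323846;
    return X_ohms / (2.0 * PI * f_hz * R_ohms);
}

int main()
{
    // e.g. a line with R = 2 ohms, X = 20 ohms on a 60 Hz system
    std::cout << timeConstant(2.0, 20.0, 60.0) << " s\n";   // about 0.0265 seconds
    return 0;
}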
The DC correction method is simple and efficient, and requires only one extra sample to obtain the modification factors (MF1 and MF2). The modification factors can be computed off-line or on-line
during relay initialization. Since these factors are common to the half-cycle DFT and full-cycle DFTs, it is very convenient to perform adaptive DFT calculations. Moreover, real-time computations are
practical since the computational burden is very small, and thus the invention is suitable for use in high-speed relaying applications.
Those skilled in the protective relaying art will recognize that there are a variety of uses for phasors of the kind yielded by the improved DFT provided by the present invention. For example,
phasors are used in power system protection (e.g., level detection (threshold units), direction discrimination, fault distance estimation, out of step detection, and fault location). Phasors are also
used in the fields of power measurement (voltage, current and power metering), power flow analysis, state estimation, and power system control. Voltage and current phasors, e.g., are essential to
carrying out many different calculation and decision making processes in the frequency domain. Since errors in the phasor calculations can result in erroneous decisions, it is important that the
phasors used in the decision making process be accurate. The present invention provides such accurate phasors.
The above description of preferred embodiments of the invention is not intended to limit the scope of protection of the following claims. Thus, for example, except where they are expressly so
limited, the claims are not limited to applications involving three-phase power systems or power systems employing a 60 Hz or 50 Hz fundamental frequency. Moreover, the claims are not limited to
systems associated with any particular part (i.e., transformer, feeder, high power transmission line, etc.) of a power distribution system.
Cited Patent Filing date Publication date Applicant Title
US4587626 * Feb 14, 1985 May 6, 1986 Trw Inc. Sum and difference conjugate discrete Fourier transform
US5172329 * Jun 14, 1990 Dec 15, 1992 Rahman Azizur M Microprocessor-based digital protective relay for power transformers
US5406495 * Feb 1, 1993 Apr 11, 1995 Systems Analysis And Integration, Inc. Substation load distribution monitor system
US5453903 * Aug 18, 1993 Sep 26, 1995 Abb Power T&D Company, Inc. Sub-cycle digital distance relay
US5592393 * Mar 16, 1995 Jan 7, 1997 Beckwith Electric Co. Method and system for providing protective relay functions
US5671112 * May 13, 1996 Sep 23, 1997 Abb Power T&D Company, Inc. Digital integrator V/Hz relay for generator and transformer over-excitation protection
1 Lian, C.Z., "Direct Current Error Compensation of Fourier Method," 4th Protective Relaying and Automation Meeting of CIEE (Chinese Institution of Electrical Engineers), Oct. 1986. Chinese
language copy and copy of English translation enclosed.
2 * Lian, C.Z., Direct Current Error Compensation of Fourier Method, 4th Protective Relaying and Automation Meeting of CIEE (Chinese Institution of Electrical Engineers), Oct. 1986. Chinese language
copy and copy of English translation enclosed.
Citing Patent Filing date Publication date Applicant Title
US6154687 * Apr 15, 1998 Nov 28, 2000 Abb Power T&D Company Inc. Modified cosine filters
US6173216 * Apr 15, 1998 Jan 9, 2001 Abb Power T&D Company Inc. Protective relay with improved, sub-window cosine filter
US6483680 Jul 21, 2000 Nov 19, 2002 General Electric Co. Magnetizing inrush restraint method and relay for protection of power transformers
US6714881 * Aug 14, 2001 Mar 30, 2004 Square D Company Time reference compensation for improved metering accuracy
US6911827 * Oct 21, 2002 Jun 28, 2005 Hewlett-Packard Development Company, L.P. System and method of measuring low impedances
US7554214 Apr 26, 2007 Jun 30, 2009 Cummins Power Generation Ip, Inc. Large transient detection for electric power generation
US7557544 Apr 23, 2007 Jul 7, 2009 Cummins Power Generation Ip, Inc. Zero crossing detection for an electric power generation system
US7598623 Apr 23, 2007 Oct 6, 2009 Cummins Power Generation Ip, Inc. Distinguishing between different transient conditions for an electric power generation system
US7687929 Jun 1, 2007 Mar 30, 2010 Cummins Power Generation Ip, Inc. Electric power generation system with multiple inverters
US7855466 Jun 1, 2007 Dec 21, 2010 Cummins Power Generation Ip, Inc. Electric power generation system with current-controlled power boost
US7880331 Jun 1, 2007 Feb 1, 2011 Cummins Power Generation Ip, Inc. Management of an electric power generation and storage system
US7888601 Jun 1, 2007 Feb 15, 2011 Cummins Power Generations IP, Inc. Bus bar interconnection techniques
US7956584 Jun 1, 2007 Jun 7, 2011 Cummins Power Generation Ip, Inc. Electric power generation system with multiple alternators driven by a common prime mover
US7982331 Dec 28, 2007 Jul 19, 2011 Cummins Power Generation Ip, Inc. Transfer switch assembly
US8085002 Dec 28, 2007 Dec 27, 2011 Cummins Power Generation Ip, Inc. Shore power transfer switch
US8513925 Dec 27, 2011 Aug 20, 2013 Cummins Power Generation Ip, Inc. Shore power transfer switch
US8525492 Jan 4, 2011 Sep 3, 2013 Cummins Power Generation Ip, Inc. Electric power generation system with multiple alternators driven by a common prime mover
CN101277012B Mar 31, 2008 Nov 6, 2013 通用电气公司 Fast impedance protection technique immune to dynamic errors of capacitive voltage transformers
U.S. Classification 702/77, 702/66, 702/64, 700/292, 324/522, 708/405, 361/86, 361/160, 700/293, 324/76.21, 361/87
International Classification G06F17/14
Cooperative Classification G06F17/14
European Classification G06F17/14
Date Code Event Description
Jul 19, 2011 FP Expired due to failure to pay maintenance fee Effective date: 20110601
Jun 1, 2011 LAPS Lapse for failure to pay maintenance fees
Jan 3, 2011 REMI Maintenance fee reminder mailed
Nov 16, 2006 FPAY Fee payment Year of fee payment: 8
Owner name: ABB INC., NORTH CAROLINA
Free format text: CHANGE OF NAME;ASSIGNOR:ASEA BROWN BOVERI INC.;REEL/FRAME:016641/0598
Effective date: 20010627
Oct 17, 2005 AS Assignment
Owner name: ASEA BROWN BOVERI INC., NORTH CAROLINA
Free format text: CHANGE OF NAME;ASSIGNOR:ABB POWER T&D COMPANY INC.;REEL/FRAME:016641/0594
Effective date: 20010622
Owner name: CARNELIAN CORDLESS LLC, NEVADA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ABB RESEARCH LTD.;REEL/FRAME:016489/0518
Sep 6, 2005 AS Assignment
Effective date: 20050517
Owner name: CARNELIAN CORDLESS LLC,NEVADA
Dec 18, 2002 REMI Maintenance fee reminder mailed
Nov 21, 2002 FPAY Fee payment Year of fee payment: 4
Owner name: ABB POWER T&D COMPANY INC., NORTH CAROLINA
May 19, 1997 AS Assignment Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YANG, LIFENG;REEL/FRAME:008510/0391
Effective date: 19970227
The simplex method
August 4th 2009, 08:26 AM
The simplex method
The question goes...
Use the simplex method to solve the following linear programming problem. Maximise
$f(x_1, x_2, x_3)=3x_1+2x_2+4x_3$
subject to the constraints
$3x_1+x_2+4x_3\leq60,\qquad x_1+2x_2+3x_3\leq30,\qquad 2x_1+2x_2+3x_3\leq600,$
$x_1, x_2, x_3\geq0$
August 4th 2009, 09:24 AM
This looks like a totally standard exercise in using the simplex method, so where are you having trouble with it?
Just to get you started, you should introduce "slack variables" u,v,w, to convert the inequalities into equations, namely
\begin{aligned}3x_1+x_2+4x_3+u\qquad\qquad &= 60,\\ x_1+2x_2+3x_3 \qquad+ v\qquad&= 30,\\ 2x_1+2x_2+3x_3\qquad\qquad+w &= 600.\end{aligned}
The objective equation is $M = 3x_1+2x_2+4x_3$, which you write as $-3x_1-2x_2-4x_3+M=0$.
Then you write down the simplex tableau, which is just the matrix of coefficients in these equations, and apply the simplex algorithm.
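For these particular equations, the initial tableau (columns $x_1,\ x_2,\ x_3,\ u,\ v,\ w,\ M$, then the right-hand side) would be

$\begin{pmatrix}3&1&4&1&0&0&0&60\\ 1&2&3&0&1&0&0&30\\ 2&2&3&0&0&1&0&600\\ -3&-2&-4&0&0&0&1&0\end{pmatrix}$

The most negative entry in the bottom row, $-4$ in the $x_3$ column, picks out the first pivot column.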
If it's any help to you, I wrote down a systematic description of the simplex algorithm a few years ago when teaching this stuff. You can find a copy of it here (pdf file).
August 4th 2009, 09:52 AM
My problem is that I don't ever remember being taught this, and it's not in my notes anywhere, so I don't know what the simplex algorithm is or anything. I'll have a look at that .pdf, see if
that helps.
August 4th 2009, 07:21 PM
mr fantastic
Unit 2, Set B - Sample Problems
ChemPhys 173/273
Unit 2: Refraction and Lenses
Problem Set B
The following selection of problems are sample problems. Individual student problem sets will vary since numerical information is randomly-generated.
For the following problems:
• Compute the unknown quantity and enter the answer in the blank.
• Do not round any computed numbers until the last calculation.
• Unless told otherwise, enter your answers accurate to the second decimal place.
• Unless told otherwise, enter positive numbers for your answers.
• Unless otherwise mentioned, use index of refraction values from your textbook.
Problem 1:
Problem 2:
Problem 3:
Problem 4:
Problem 5:
Problem 6:
Problem 7:
Problem 8:
Problem 9:
A moving escalator at a department store is 58 meters long. It makes an angle of 20.5 degrees with the horizontal. What is the vertical rise (in meters) of passengers who ride the escalator?
Problem 10:
A student stands 93.8 meters from the base of a tall building. Using a protractor, she determines that she must sight along a 34.8 degree angle in order to view the top of the building. How tall (in
meters) is the building?
Problem 11:
A ray of light in air is incident on the surface of a block of clear ice at an angle of 63.8 degrees with the normal. Part of the light is reflected and part of the light is refracted. Find the angle
between the reflected and refracted light rays. Since there are two possible answers (the angle can be measured either clockwise or counter-clockwise from the reflected ray), enter the answer that is
less than 180 degrees.
Problem 12:
A thick plate of glass (n = 1.69) rests on top of a thick plate of transparent acrylic (n = 1.48). A beam of light in air is incident on the top surface of the glass at an angle theta-i. The beam
passes through both the glass and the acrylic and emerges from the acrylic at an angle of 36.4 degrees with respect to the normal. Calculate the value of theta-i (in degrees). A sketch of the light
path through the the two plates of refracting material would be helpful. (Assume that both plates are flat and form parallel layers.)
Problem 13:
Problem 14:
(Referring to the previous problem.) The light ray passes through the equiangular prism and emerges from one of the other faces. Determine the angle of refraction (in degrees) of the light ray as it
emerges from the glass prism. (HINT: Use geometric principles to determine the angle of incidence and Snell's law to determine the angle of refraction.)
Problem 15:
Problem 16:
(Referring to the previous problem.) Determine the angle theta-' (in degrees) in the diagram.
Problem 17:
A submarine is 325 m horizontally out from the shore and 112 m beneath the surface of the water. A laser beam is sent from the submarine such that it strikes the surface of the water at a point 229 m
horizontally out from the shore. If the beam just strikes the top of a building standing directly at the water's edge, find the height (in meters) of the building.
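The sketch below (Python, not an official solution) shows one way to set this up: get the angle from the vertical inside the water from the geometry, refract it at the surface with Snell's law (n for water is assumed here to be about 1.33), and then project the refracted ray over the remaining 229 m to the shore.

# Sketch for Problem 17 (a check, not an official solution); assumes n_water ~ 1.33.
import math

n_water, n_air = 1.33, 1.00
depth = 112.0                        # submarine depth below the surface (m)
x_sub, x_hit = 325.0, 229.0          # horizontal distances from the shore (m)

theta_w = math.atan((x_sub - x_hit) / depth)                # angle from the normal, in water
theta_a = math.asin(n_water * math.sin(theta_w) / n_air)    # refracted angle in air
height = x_hit / math.tan(theta_a)                          # rise over the 229 m to the shore
print(round(height, 2))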
Problem 18:
When Crocodile Dundee was young and ignorant of the physics of refraction, villagers would continually laugh at him as he would miss on his attempts at spearing fish. On one such unsuccessful
attempt, he was hunting the rare Fishus Targetus. He tried to spear the fish by throwing the spear into the water a horizontal distance of 2.412 meters away from him. The fish was actually under
water a horizontal distance of 1.329 meters from the point that the spear entered the water. Dundee aimed for the center of the fish's target (scientists believe that this has contributed to the fact
that it is rare). He threw the spear along his line of sight from a height of 1.697 meters. The spear passed over the fish as Dundee aimed directly along his line of sight. Determine the vertical distance by which he missed; that is, find how far the actual fish was below the image of the fish. Enter your answer accurate to the third decimal place.
Problem 19:
Problem 20:
[water]). The diameter (D) of the cup is 14.64 cm. A student looks downward just over the left rim of the cup at an angle of 40.47 degrees with the water's surface (theta). At this angle, the
refraction of light at the water's surface just barely allows her to see the bottom-right corner of the cup. A sketch (not drawn to scale) of the path of light is shown at the right. Determine the
height of the cup (H[cup]) in centimeters.
Feedback: Lunar Recession
[The July 11 AiG web] article mentioned the moon moving away from the earth at approximately 1.5 inches a year. It said that would be 250 meters in 6,000 years which I can get on my computer’s
calculator program. It also mentioned the moon as being 400,000km away and that 1.5 billion years ago the earth and moon would be touching at that pace. However I get 9.6 billion years at the 1.5
inches a year pace and am not so sure that the moon moving away from the Earth would be constant. I would think that the speed at which the moon moves away from the Earth would be lower at lower
distances due to planetary gravity? Is the author using logic somewhere to change the math that I am unaware of? At a closer distance from earth at a certain point the two would accelerate
towards each other and collide I’m sure.. There has to be a minimum distance the moon would have to start away from Earth for this to not happen right? So how far is that distance and is that the
factor I’m missing?
We have had many people contact us concerning the recession rate of the moon. Hopefully this short article will help to clarify the details of lunar recession and why it supports a young age for our
solar system.
The recession of the moon is not constant over time. It would have been faster in the past. So, it is incorrect to assume that the rate has always been 4 cm/year.
Gravity is the force that keeps our moon in orbit around the earth. In Figures 1 and 2 this is represented by line “B.” If not for the gravity of the earth and moon, the moon would simply float away
from the earth into space.
A major point to remember about lunar recession is that it is not constant over long periods of time. The further the moon moves away from the earth the more constant its recession seems to become.
In short, lunar recession is caused by tidal forces. Tidal forces are not the same thing as the gravity that keeps the moon orbiting around the earth. (However, they are caused by the moon’s gravity
as will be shown.) The moon does more than just cause the rising and receding of tides along shorelines. When combined with the rotation of the earth and its gravity, these tidal forces are what cause the
moon to recede away from the earth.
As we know, the moon causes tides; these are due to the fact that the moon’s gravitational force is stronger the closer you are to it. So, the moon’s gravity pulls more strongly on the side of Earth
closest to the moon, and pulls less on the opposite side. This effectively “stretches” the Earth and produces two tidal bulges. The figure illustrates how the moon is actually pulling the oceans away
from the earth toward itself (point 1) and causes the earth to bulge. At the same time there is a bulge produced on the opposite side of the earth (point 3) where the earth is being pulled away from
the oceans.
Since the earth rotates faster than the moon orbits, the tidal bulge stays slightly ahead of the moon. With the earth bulging, the moon is “pulled” by the point of gravity (point 1), produced by the
bulge, since it is closer to it (line A) than the point of gravity (point 3) at the opposite side of the earth (line C). Since the moon is constantly being pulled it is constantly accelerating. Even
though the earth’s gravity (point 2) is acting as a centripetal force (line B) to keep the moon in an orbital path (dark arrow), the acceleration of the moon caused by the tidal bulge at point 1 is
increasing its angular momentum, therefore moving it outward (gray arrow).
Figure 1 shows what the past (theoretical) recession rate would have looked like. Being much closer in a more-distant past, the moon would have caused larger tidal bulges, creating a greater
“pulling” force (point 1, line A), increasing the angular momentum; thus the moon receded at a much greater speed (as shown by the red arrows).
With the earth where it is today (Figure 2) tidal bulges are much smaller (than the theoretical past), making the “pulling” force of point 1 smaller; thus the angular momentum is much less, resulting
in the present and seemingly more-constant recession rate of 4 cm per year. The moon could never have been closer than 18,400 km (11,500 miles), known as the Roche Limit, because Earth’s tidal forces
(i.e., the result of different gravitational forces on different parts of the moon) would have shattered it. This is explained in more detail in Dr. Lisle's book Taking Back Astronomy.
The equations (also taken from Dr. Lisle’s book) involved in the recession rate of the moon are thus:
k = r^6 (dr/dt) = (384,401 km)^6 x (0.000038 km/year) = 1.2 x 10^29 km^7/year
∫_0^T dt = ∫_0^R (r^6/k) dr
T = R^7/(7k)
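Evaluating this expression with the figures quoted above gives the upper limit discussed in the article; a quick numerical check is sketched below in Python, using only those same numbers.

# Sketch: evaluate T = R^7/(7k) with the values quoted above.
R = 384401.0             # present Earth-Moon distance (km)
drdt = 0.000038          # present recession rate (km/year, about 3.8 cm/yr)
k = R**6 * drdt          # about 1.2e29 km^7/year

T = R**7 / (7 * k)       # algebraically the same as R/(7*drdt)
print("T = %.2e years" % T)   # on the order of 1.4-1.5 billion years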
In His name and for His glory,
David Wright, AiG–USA
Synthesis of Finite State Algorithms in a Galois Field GF[p^n]
March 1981 (vol. 30 no. 3)
pp. 225-229
This correspondence describes a method for achieving synthesis of finite state algorithms by the use of a set of logic elements that execute field operations from the Galois field GF[p^n]. The method
begins with a definition of the algorithm to be synthesized in a completely specified finite state flow table form. A polynomial expansion of this flow table function is derived. A canonical
sequential circuit corresponding to this polynomial expansion is defined. Subsequently, the given algorithm is synthesized using the canonical circuit by specification of a number of arbitrary
constants in the canonical circuit. A mechanical method for deriving constants used in the canonical circuits is given. Finally, some estimates on complexity of the given circuit structure are stated
assuming the most fundamental logic element structures.
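The abstract's central idea, writing a completely specified flow table as a polynomial over a finite field, can be illustrated for the prime case GF(p) with ordinary Lagrange interpolation. The Python sketch below is only illustrative: the modulus, the made-up flow table and all function names are assumptions for the example, not constructions taken from the paper.

# Illustrative sketch (not the paper's construction): a completely specified
# next-state table over a prime field GF(p), written as a polynomial via
# Lagrange interpolation.  p and the table below are made up for the example.
p = 5
table = {0: 2, 1: 4, 2: 1, 3: 3, 4: 0}   # hypothetical flow table: state -> next state

def poly_mul_linear(poly, root, p):
    # Multiply poly(x) (coefficients, lowest degree first) by (x - root) over GF(p).
    out = [0] * (len(poly) + 1)
    for k, a in enumerate(poly):
        out[k] = (out[k] - root * a) % p
        out[k + 1] = (out[k + 1] + a) % p
    return out

def lagrange_coeffs(tbl, p):
    # Coefficients c[0..p-1] such that next(s) = sum_k c[k]*s^k (mod p).
    coeffs = [0] * p
    for xi, yi in tbl.items():
        basis, denom = [1], 1
        for xj in tbl:
            if xj != xi:
                basis = poly_mul_linear(basis, xj, p)
                denom = (denom * (xi - xj)) % p
        scale = (yi * pow(denom, p - 2, p)) % p   # divide by denom (Fermat inverse)
        for k, a in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * a) % p
    return coeffs

c = lagrange_coeffs(table, p)
print(c)
print(all(sum(ck * pow(s, k, p) for k, ck in enumerate(c)) % p == table[s] for s in table))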
Index Terms:
sequential networks, Finite-state algorithms, finite-state machines, Galois field arithmetic
W.R. English, "Synthesis of Finite State Algorithms in a Galois Field GF[p^n]," IEEE Transactions on Computers, vol. 30, no. 3, pp. 225-229, March 1981, doi:10.1109/TC.1981.1675759
Another Log Graph question
January 8th 2009, 10:57 AM #1
Junior Member
Jan 2008
Another Log Graph question
Hey guys,
I have another question here...
Explain why any equation of the form y = -x + b is its own inverse. Use both algebraic and graphics arguments.
What do they mean by 'is its own inverse'? Its its own inverse without switching the y and x, and just the - sign?
Last edited by Slipery; January 8th 2009 at 01:30 PM.
f(x) = -x + b is its own inverse because f(f(x)) = x
f(f(x)) = -f(x)+b = -(-x+b)+b = x
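A quick symbolic check of both parts is sketched below with SymPy; the graphical argument is simply that a line of slope -1 is carried onto itself by reflection in the line y = x, which is exactly what swapping x and y does.

# Sketch: f(x) = -x + b satisfies f(f(x)) = x, so f is its own inverse.
import sympy as sp

x, y, b = sp.symbols('x y b')
f = lambda t: -t + b

print(sp.simplify(f(f(x))))               # -> x
# Swapping x and y in  y = -x + b  and solving for y gives the same line back:
print(sp.solve(sp.Eq(x, -y + b), y)[0])   # -> b - x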
West Warwick Algebra 1 Tutor
...I am also passionate about sports and am a baseball coach and long-time athlete and would love to help out anyone who is looking to learn how to play sports such as baseball or football. I can
also help out some other academic areas as well. Being a teacher, I am pretty flexible in the summer.I have been watching football my entire life.
16 Subjects: including algebra 1, Spanish, reading, elementary math
...Precalculus seems like the link to truly higher order math. It combines algebra and geometry. My approach to helping students understand and truly learn precalculus involves introduction to
analytic techniques/processes that are easy to master and become second nature.
11 Subjects: including algebra 1, calculus, physics, geometry
...I am presently elementary and special education certified through RIDE and teach in a public school. I have years of experience working with children with ADD/ADHD and have taught several years
in the ed/bd setting (self-contained & inclusion). I am presently RIDE certified in elementary and spe...
33 Subjects: including algebra 1, reading, English, writing
...I am well traveled and an enthusiastic lifelong learner. This past summer I traveled to Iceland and learned about its unique geology as well as to Peru to study its birds and history. In 2010,
I joined a Norwegian trekking group and hiked through two Norwegian National Parks.
16 Subjects: including algebra 1, chemistry, physics, biology
...Last year, I tutored a freshman student in geometry and her grade improved significantly. I received an 800 on the math section of the SAT and a 750 on the SAT II for Math 2. I received an 800
in SAT Math.
20 Subjects: including algebra 1, physics, calculus, French
Re: st: RE: creating combined correlation of dummy (ordered multilevel)
Re: st: RE: creating combined correlation of dummy (ordered multilevel)
From Stefan Nijssen <stefannijssen@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: creating combined correlation of dummy (ordered multilevel)
Date Sat, 11 Jun 2011 14:24:53 +0200
Maarten, thanks for your reply. As I am currently reading from your website, the sheaf coefficient seems to be very relevant in my model.
Indeed I am talking about independent variables. The dependent is continuous, a risk factor represented by an interest rate. I am trying to estimate the effect of both a rating scale and a set of accounting ratios to the risk factor. I am hypothesizing that the effect of the rating on the interest rate, compared to the effect of the accounting ratios on this interest rate has diminished in recent years, using data from three points in time during the last decade. Hence the independent variables can be seen as two blocks, the ratings (which are in the form of an ordered scale, AAA (1) AA (2) A (3) BBB (4) BB (5) B (6)). I have split this variable into 6 dummies since the distance between either point might not be the same. Reading from your website, I think I should create this so called sheaf coefficient for both the rating scale and the accounting ratios, and in the ideal case I will be able to visualize the influence of either variable block. Doing this for three points in time, I would be able to see, possibly, a pattern. With the rating scale and the accounting ratios being a possible tradeoff (theoretically), I would be able to see their correlation using the model this way.
Does this sound logical?
Thanks for any help and suggestions,
Stefan Nijssen
On Jun 10, 2011, at 16:48 , Maarten Buis wrote:
> On Fri, Jun 10, 2011 at 3:58 PM, Stefan Nijssen wrote:
>> This is true. The variable AAA AA etc.. is a rating scale, ordered
>> from AAA to B (in original format this was 1 to represent AAA, 2 to
>> represent AA etc. until 6 to represent B, from which through "xi
>> i.Rating, noomit" I created the dummies). I am using the dummies since
>> for instance the distance between AA and A (2 and 3) might not be the
>> same as the distance A and BBB (3 and 4). Therefore I don't think the
>> original variable (with numbers 1 to 6) is the one to read the
>> correlation from, although this would be easiest.
>> The reason for my interest in the correlation is that theoretically
>> the variable Rating is created using variables I use in the regression
>> in the first place. Any idea how to be able to interpret the
>> multicorrelation?
> This depends on whether you want to use your variable as a
> dependent/explained/y or independent/explanatory/x variable. In the
> latter case I would look at sheaf coefficients to simultaneously
> estimate the distances between the levels and a single effect of your
> variable, see -ssc d sheafcoef- and
> <http://www.maartenbuis.nl/wp/prop.html>. In the former case I would
> look at ordered regression models like -ologit- or -ssc d gologit2-.
> Hope this helps,
> Maarten
> --------------------------
> Maarten L. Buis
> Institut fuer Soziologie
> Universitaet Tuebingen
> Wilhelmstrasse 36
> 72074 Tuebingen
> Germany
> http://www.maartenbuis.nl
> --------------------------
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
Math Help
March 29th 2011, 03:55 PM #1
Feb 2010
I wan't to show that $C_{c}(\mathbb{R})$ is a subspace of $L^{p}(\mathbb{R})$.
Can I use the usual three conditions to show that $C_{c}(\mathbb{R})$ is a subspace. But how would that show that $C_{c}(\mathbb{R})$ is a subspace of specifically $L^{p}(\mathbb{R})$?
Is $C_c\left(\mathbb{R}\right)$ continuous functions with compact support? If so, then what particularly are you having trouble with, you know that the sum of two continuous functions is
continuous, as is the product of a continuous function by a scalar; thus it suffices to prove that the same is true for functions with compact support. But it's clear that $\text{supp}(cf)=\text{supp}(f)$ and $\text{supp}(f+g)\subseteq \text{supp}(f)\cup\text{supp}(g)$, so that $\overline{\text{supp}(f+g)}$ is a closed subset of $\overline{\text{supp}(f)}\cup\overline{\text{supp}(g)}$, and since this superset is compact it follows that $\overline{\text{supp}(f+g)}$ is compact.
And $C_{c}(\mathbb{R})$ is a subset of $L^{p}(\mathbb{R})$? Then comes the explanation you gave above. Right?
Last edited by surjective; March 30th 2011 at 05:47 AM.
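For the inclusion $C_{c}(\mathbb{R})\subseteq L^{p}(\mathbb{R})$ itself, one standard estimate (a sketch, in case this is the missing piece) is

$\int_{\mathbb{R}}|f|^{p}\,d\lambda=\int_{\text{supp}(f)}|f|^{p}\,d\lambda\le \|f\|_{\infty}^{p}\,\lambda\big(\text{supp}(f)\big)<\infty,$

since a continuous function is bounded on a compact set and a compact set has finite Lebesgue measure. Combined with the closure under sums and scalar multiples above, the usual three subspace conditions then hold inside $L^{p}(\mathbb{R})$.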
North Providence ACT Tutor
Find a North Providence ACT Tutor
Born in Cuba, I arrived in the United States in 1995. Since then I remember playing teacher with my school friends at the time. Of course, I was the teacher and they were the students!
63 Subjects: including ACT Math, Spanish, English, reading
...It addresses several types of equations such as first Order Differential Equations such as Linear Equations, Separable Equations, Bernoulli Equations, Homogeneous Equations, Exact and Non-Exact
Equations, Integrating Factor technique,Radioactive Decay, Population Dynamics, Existence and Uniquenes...
38 Subjects: including ACT Math, reading, writing, English
...Since then I have been a nanny and a tutor and a cheerleading coach, while also starting a family. As far as my tutoring background, I started in high school when I spent my study halls
tutoring student peers that needed the extra help. Then spent after school volunteering at an elementary school to help out children that were falling behind class.
17 Subjects: including ACT Math, calculus, actuarial science, linear algebra
...I have a B.A. (History; Math), two M.A.'s (European History; History of Science, Medicine, & Technology), and a wide range of teaching: taught many different college history courses; high
school math; junior high math and science; ESL in Taiwan; chess. Also have good standardized test sc...
45 Subjects: including ACT Math, chemistry, Spanish, English
...Successful teaching and tutoring experiences in college and public school setting for over 5 years...very flexible and reliable!! Committed to helping students succeed in math and carry these
essential skills outside the classroom!! Plan for a variety of learning styles and focus on conceptual un...
13 Subjects: including ACT Math, calculus, geometry, algebra 1
C++ Notes: Algorithms: Linear Search
Look at every element
This is a very straightforward loop comparing every element in the array with the key. As soon as an equal value is found, it returns. If the loop finishes without finding a match, the search failed
and -1 is returned.
For small arrays, linear search is a good solution because it's so straightforward. In an array of a million elements linear search on average will take 500,000 comparisons to find the key. For a
much faster search, take a look at binary search.
int linearSearch(int a[], int first, int last, int key) {
    // function:
    //   Searches a[first]..a[last] for key.
    // parameters:
    //   a           in  array of (possibly unsorted) values.
    //   first, last in  lower and upper subscript bounds.
    //   key         in  value to search for.
    // returns:
    //   index of the matching element if it finds key, otherwise -1.

    for (int i = first; i <= last; i++) {
        if (key == a[i]) {
            return i;   // found a match
        }
    }
    return -1;          // failed to find key
}
Related Pages
Binary Search
Difficult Normal Distribution question
September 5th 2011, 03:31 AM #1
Jul 2011
X is normally distributed with mean $\mu$ where the mean is greater than zero
If $Pr(a>X)=0.025$
what is the closest decimal approximation for the value
e:-0.025 (obviously wrong choice, just provided for question completeness)
Re: Difficult Normal Distribution question
I would rewrite the probability:
Re: Difficult Normal Distribution question
This looks like a good approach, except how do I find Pr(x<-a)? Pr(X< mu) is obviously 0.5, but this alone doesn't allow me to solve the question...?
Thanks in advance.
Re: Difficult Normal Distribution question
Notice $P(X<-a)=P(X>a)=1-P(X<a)$
Re: Difficult Normal Distribution question
This approach gives the answer as c above, but how does P(x<-a)=P(x>a) hold true even when mean is not zero?
Re: Difficult Normal Distribution question
Because the graph of normal distribution is symmetric.
Re: Difficult Normal Distribution question
Symmetric about the mean, hence P(X < mu - b) would be equal to P(X > mu + b) for any b, but if mu is not zero then P(X < -b) would not equal P(X > b) for any b. Or is there a flaw in this logic? Hmmm...
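Indeed, the symmetry is about the mean rather than about zero. A quick numerical check is sketched below with SciPy (the values of mu, sigma and d are arbitrary):

# Sketch: the normal density is symmetric about its mean mu, not about 0.
from scipy.stats import norm

mu, sigma, d = 3.0, 2.0, 1.5
X = norm(mu, sigma)

print(X.cdf(mu - d), X.sf(mu + d))   # equal: P(X < mu - d) = P(X > mu + d)
print(X.cdf(-d), X.sf(d))            # not equal unless mu = 0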
Results 1 - 10 of 21
, 1995
"... We discuss external and internal graphical and linguistic representational systems. We argue that a cognitive theory of peoples' reasoning performance must account for (a) the logical
equivalence of inferences expressed in graphical and linguistic form; and (b) the implementational differences th ..."
Cited by 106 (11 self)
We discuss external and internal graphical and linguistic representational systems. We argue that a cognitive theory of peoples' reasoning performance must account for (a) the logical equivalence of
inferences expressed in graphical and linguistic form; and (b) the implementational differences that affect facility of inference. Our theory proposes that graphical representations limit abstraction
and thereby aid processibility. We discuss the ideas of specificity and abstraction, and their cognitive relevance. Empirical support comes from tasks involving (i) the manipulation of external
graphics; and (ii) no external graphics. For (i), we take Euler's Circles, provide a novel computational reconstruction, show how it captures abstractions, and contrast it with earlier construals,
and with Mental Models' representations. We demonstrate equivalence of the graphical Euler system, and the non-graphical Mental Models system. For (ii), we discuss text comprehension, and the mental
- In Proc. of the International Symposium on Logic Programming , 1993
"... Recently, Gelfond and Lifschitz presented a formal language for representing incomplete knowledge on actions and states, and a sound translation from this language to extended logic programming.
We present an alternative translation to abductive logic programming with integrity constraints and prove ..."
Cited by 58 (10 self)
Recently, Gelfond and Lifschitz presented a formal language for representing incomplete knowledge on actions and states, and a sound translation from this language to extended logic programming. We
present an alternative translation to abductive logic programming with integrity constraints and prove the soundness and completeness. In addition, we show how an abductive procedure can be used, not
only for explanation, but also for deduction and proving satisfiability under uncertainty. From a more general perspective, this work can be viewed as a-successfulexperiment in the declarative
representation of and automated reasoning on incomplete knowledge using abductive logic programming. 1
, 1997
"... We present SLDNFA, an extension of SLDNF-resolution for abductive reasoning on abductive logic programs. SLDNFA solves the floundering abduction problem: non-ground abductive atoms can be
selected. SLDNFA provides also a partial solution for the floundering negation problem. Different abductive a ..."
Cited by 55 (13 self)
We present SLDNFA, an extension of SLDNF-resolution for abductive reasoning on abductive logic programs. SLDNFA solves the floundering abduction problem: non-ground abductive atoms can be selected.
SLDNFA provides also a partial solution for the floundering negation problem. Different abductive answers can be derived from an SLDNFA-refutation; these answers provide different compromises between
generality and comprehensibility. Two extensions of SLDNFA are proposed which satisfy stronger completeness results. The soundness of SLDNFA and its extensions is proven. Their completeness for
minimal solutions with respect to implication, cardinality and set inclusion is investigated. The formalisation of SLDNFA presented here is an update of an older version presented in [13] and does
not rely on skolemisation of abductive atoms. 1
- Intelligent Agents: Proceedings of 1994 Workshop on Agent Theories, Architectures, and Languages, number 890 in Lecture Notes in Computer Science , 1994
"... As discussed in previous papers, belief contexts are a powerful and appropriate formalism for the representation and implementation of propositional attitudes in a multiagent environment. In
this paper we show that a formalization using belief contexts is also elaboration tolerant. That is, it is a ..."
Cited by 51 (6 self)
As discussed in previous papers, belief contexts are a powerful and appropriate formalism for the representation and implementation of propositional attitudes in a multiagent environment. In this
paper we show that a formalization using belief contexts is also elaboration tolerant. That is, it is able to cope with minor changes to input problems without major revisions. Elaboration tolerance
is a vital property for building situated agents: it allows for adapting and re-using a previous problem representation in different (but related) situations, rather than building a new
representation from scratch. We substantiate our claims by discussing a number of variations to a paradigmatic case study, the Three Wise Men problem. Introduction Belief contexts (Giunchiglia 1993;
Giunchiglia & Serafini 1994; Giunchiglia et al. 1993) are a formalism for the representation of propositional attitudes. Their basic feature is modularity: knowledge can be distributed into different
and separated mod...
- Cognitive Science , 2005
"... This article presents a formal theory of robot perception as a form of abduction. The theory pins down the process whereby low-level sensor data is transformed into a symbolic representation of
the external world, drawing together aspects such as incompleteness, top-down information flow, active per ..."
Cited by 38 (1 self)
This article presents a formal theory of robot perception as a form of abduction. The theory pins down the process whereby low-level sensor data is transformed into a symbolic representation of the
external world, drawing together aspects such as incompleteness, top-down information flow, active perception, attention, and sensor fusion in a unifying framework. In addition, a number of themes
are identified that are common to both the engineer concerned with developing a rigorous theory of perception, such as the one on offer here, and the philosopher of mind who is exercised by questions
relating to mental representation and intentionality.
, 1993
"... This paper reports on an investigation into a formal language for specifying kads models of expertise. After arguing the need for and the use of such formal representations, we discuss each of
the layers of a kads model of expertise in the subsequent sections, and define the formal constructions tha ..."
Cited by 35 (9 self)
This paper reports on an investigation into a formal language for specifying kads models of expertise. After arguing the need for and the use of such formal representations, we discuss each of the
layers of a kads model of expertise in the subsequent sections, and define the formal constructions that we use to represent the kads entities at every layer: order-sorted logic at the domain layer,
meta-logic at the inference layer, and dynamic-logic at the task layer. All these constructions together make up (ml) 2 , the language that we use to represent models of expertise. We illustrate the
use of (ml) 2 in a small example model. We conclude by describing our experience to date with constructing such formal models in (ml) 2 , and by discussing some open problems that remain for future
work. 1 Introduction One of the central concerns of "knowledge engineering" is the construction of a model of some problem solving behaviour. This model should eventually lead to the construction of
- AI Magazine , 1998
"... The \Naive Physics Manifesto " of Pat Hayes (1978) proposes a large-scale project of developing a formal theory encompassing the entire knowledge of physics of naive reasoners, expressed in a
declarative symbolic form. The theory is organized in clusters of closely interconnected concepts and a ..."
Cited by 25 (6 self)
The "Naive Physics Manifesto" of Pat Hayes (1978) proposes a large-scale project of developing a formal theory encompassing the entire knowledge of physics of naive reasoners, expressed in a
declarative symbolic form. The theory is organized in clusters of closely interconnected concepts and axioms. More recent work in the representation of commonsense physical knowledge has followed a
somewhat different methodology. The goal has been to develop a competence theory powerful enough to justify commonsense physical inferences, and the research is organized in microworlds, each
microworld covering a small range of physical phenomena. In this paper we compare the advantages and disadvantages of the two approaches. Three Scenarios Consider the following scenario: Common sense
is a wild thing, savage, and beyond rules.
- Artificial Intelligence , 1994
"... In traditional formal approaches to knowledge representation, agents are assumed to believe all the logical consequences of their knowledge bases. As a result, reasoning in the first-order case
becomes undecidable. Since real agents are constrained by resource limitations, it seems appropriate to lo ..."
Cited by 21 (2 self)
In traditional formal approaches to knowledge representation, agents are assumed to believe all the logical consequences of their knowledge bases. As a result, reasoning in the first-order case
becomes undecidable. Since real agents are constrained by resource limitations, it seems appropriate to look for weaker forms of reasoning with better computational properties. One way to approach
the problem is by modeling belief. Reasoning can then be understood as the question whether a belief follows from believing the sentences in the knowledge base. This paper proposes...
"... Mathematical logicians had developed the art of formalizing declarative knowledge long before the advent of the computer age. But they were interested primarily in formalizing mathematics.
Because of the important role of nonmathematical knowledge in AI, their emphasis was too narrow from the perspe ..."
Cited by 10 (4 self)
Mathematical logicians had developed the art of formalizing declarative knowledge long before the advent of the computer age. But they were interested primarily in formalizing mathematics. Because of
the important role of nonmathematical knowledge in AI, their emphasis was too narrow from the perspective of knowledge representation, their formal languages were not sufficiently expressive. On the
other hand, most logicians were not concerned about the possibility of automated reasoning; from the perspective of knowledge representation, they were often too generous in the choice of syntactic
constructs. In spite of these differences, classical mathematical logic has exerted significant influence on knowledge representation research, and it is appropriate to begin this handbook with a
discussion of the relationship between these fields. The language of classical logic that is most widely used in the theory of knowledge representation is the language of first-order (predicate)
formulas. These are the formulas that John McCarthy proposed to use for representing declarative knowledge in his advice taker paper [176], and Alan Robinson proposed to prove automatically using
resolution [236]. Propositional logic is, of course, the most important subset of first-order logic; recent
increasing and decreasing functions
November 5th 2009, 12:31 AM #1
Junior Member
Apr 2009
increasing and decreasing functions
I am to determine the interval on which f(x)=12x-x^3 is decreasing. I am coming up with x(12-x^2). x plus or minus =2square root 3. Solution being (-infinity, -2 square root 3). I have no other
way to determine if I am doing this right. Can someone please set me straight?
You have to find when f '(x) is negative. You don't seem to have differentiated yet??
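Once the derivative is taken, a quick symbolic cross-check (a sketch using SymPy) looks like this:

# Sketch: f(x) = 12x - x^3 decreases where f'(x) = 12 - 3x^2 < 0.
import sympy as sp

x = sp.symbols('x', real=True)
f = 12*x - x**3

print(sp.diff(f, x))                                          # 12 - 3*x**2
print(sp.solve_univariate_inequality(sp.diff(f, x) < 0, x))   # x < -2  or  x > 2
# So f is decreasing on (-infinity, -2) and on (2, infinity).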
how to find the probability of Type II error
February 21st 2009, 06:46 AM #1
Senior Member
Feb 2008
how to find the probability of Type II error
$f(x : \theta)=\frac{\theta}{(x+\theta)^{2}} , \ \ \ H_{0}:\theta=1, \ \ \ H_{1}:\theta=2$
Mr F edit: Additional information given to me via pm:
x>0, theta>0, is an unknown parameter.
find the likelihood ratio test .
and show the probability of TYPE II error is
Last edited by mr fantastic; February 22nd 2009 at 04:43 AM.
More information is needed here, I think, to calculate $\beta$. A value of $\alpha$, perhaps ....?
Once we have alpha we can get Beta.
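In case it helps, here is a sketch of how the calculation can go for a single observation $X$ once a size $\alpha$ is fixed (the expression the exercise asks for was cut off above, so this is only one plausible route): the likelihood ratio is

$\Lambda(x)=\frac{f(x;1)}{f(x;2)}=\frac{(x+2)^{2}}{2(x+1)^{2}},$

which is decreasing in $x$ for $x>0$, so the test rejects $H_{0}$ when $x\ge k$. Then

$\alpha=P_{\theta=1}(X\ge k)=\int_{k}^{\infty}\frac{dx}{(x+1)^{2}}=\frac{1}{k+1}, \qquad \beta=P_{\theta=2}(X<k)=1-\frac{2}{k+2}=\frac{k}{k+2},$

and eliminating $k=\frac{1-\alpha}{\alpha}$ gives $\beta=\frac{1-\alpha}{1+\alpha}$.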
Department of Mathematics and Statistics
Dohyoung Ryang, Assistant Professor
Office: Petty 110
Email address: d_ryang@uncg.edu
Starting year at UNCG: 2010
Office hours: No Office Hours; On Junior Research Leave for Spring 2014
Ed.D. in Mathematics Education, University of Alabama (2010), Ph.D. in Mathematics, University of Alabama, Tuscaloosa (2005)
Fall, 2013
• MAT 191-01 LEC (Calculus I), TR 9:30-10:45, Eberhart Building 161
• MAT 191-01E LEC (Calculus I), TR 9:30-10:45, Eberhart Building 161
• MAT 304-01 LEC (Introduction to the Foundations of Geometry), TR 2:00-3:15, Petty Building 223
Winter, 2014
• MAT 115-81D WEB (College Algebra)
Summer Session 1, 2014
• MAT 150-01 LEC (Precalculus I), MTWR 12:20-2:20, Petty Building 150
Research Interests
Selected Recent Publications
• Ryang, D. (2013, August). Development of the Mathematics Teaching Efficacy Beliefs Instrument Korean version for elementary preservice teachers, JKSME Series A: The Mathematical Education 52(3),
• Ryang, D. (2013, May). Developing the Mathematics Teaching Efficacy Beliefs Instrument for secondary prospective mathematics teachers, JKSME series A: The Mathematical Education 52(2), 231-245.
• Ryang, D. (2012). Groups acting on median graphs and median complexes, Pure and Applied Mathematics, 9(4), 349-361.
• Ryang, D. (2012). Exploratory analysis of Korean elementary preservice teachers’ mathematics teaching efficacy beliefs. International Electronic Journal of Mathematics Education, 7(2), 45-61.
• Ryang, D., Thompson, T., & Shwery, C. (2011). Analysis of Korean mathematics teacher educators' response to the Mathematics Teaching Efficacy Beliefs Instrument. Research in Mathematical
Education, 15(3), 229-250.
Brief Bio
Dr. Ryang earned a Ph.D. in 2005 and a Ed.D. in 2010 from the University of Alabama, Tuscaloosa. In 2010 he joined the faculty at UNCG. His research studies mathematics education and geometric group
Critical Shear Rate - the Instability Reason for the Creation of
Dissipative Structures in Polymers
B. Wessling:
Critical Shear Rate - the Instability Reason for the Creation of Dissipative Structures in Polymers
(Z. Phys. Chem., 191, (1995), S. 119-135)
Only very few of the countless different polymer systems we meet in our daily life are straight single-phase systems. At least most real polymers consist of two phases.
For most of them, many surprising and non-linear phenomena are well known and have been the subject of continuous work for decades. Especially the resulting properties, such as impact modification,
viscosity and conductivity, display a non-linear dependence on a given parameter. The property/ parameter relationship can be described by an S-shaped curve. Theories which are in principle based on
considerations of equilibrium thermodynamics have been developed and are currently being used to explain these phenomena. These theories include the "Flory-Huggins-Theory", the "Percolation Theory",
the "Nearest-Neighbour-Model", but also the constitutive and related equations for the description of rheological phenomena.
In contrast to this, based on experimental results, we [1] have recently developed a new theory which is applicable to all heterogeneous polymer systems [6, 7]. Our main principle is to define the
nonequilibrium character of colloidal dispersions in polymeric matrices and to interpret the experimental findings (phase separation, dispersion-flocculation phase transition, [3]) as "dissipative
structures". This term was introduced by PRIGOGINE [2] for self-organising structures in non-linear systems far from equilibrium. In the case of dispersions in polymers, properties like conductivity,
impact strength, etc., are measured as a result of the "frozen dissipative structure", whereby rheological phenomena (viscosity etc.) are to be considered as dynamic nonequilibrium phenomena.
Dissipative structures can generally only survive under continuous entropy export; this is the case under dispersion conditions, in which a huge amount of high value energy is pumped through the
system; it continues to be the case during melt flow in the molten stage, but not after the multiphase system has been quenched; then the high-viscosity energy barrier prevents the generated
dissipative structures from falling apart.
Since colloidal dispersions in polymeric matrices are processes of isothermal nature, differences of temperature are neglected in the following; this means it is always assumed that ΔT = 0.
In the sections below, the following abbreviations will be used:
G : Gibb's free enthalpy of mixing; G= U + pV - TS
H : dispersion or mixing enthalpy
S : entropy
Q : heat
d : distance
t : time
P : probability
Ji : flux, flow
Xi : force
: interaction parameter
: viscous strain
k : Boltzmann constant
: surface tension
T : temperature
: chemical or dispersion state potential
W : work, energy
A : area
: viscosity
a : rheological interaction parameter
n : number of particles
: volume fraction
1. Thermodynamic considerations concerning the present theory
The new "nonequilibrium thermodynamic theory of heterogeneous polymer systems" [1] is aimed at providing a basis for an integrated description for the dynamics of dispersion and blending processes,
structure formation, phase transition and critical phenomena.
According to PRIGOGINE [2], dissipative structures are to be expected if in open systems the distance from thermodynamic equilibrium exceeds some critical value, di > di,crit. In that region, the
relations between flows (fluxes) and forces are nonlinear, and the standard Prigogine principle, which is valid only in the linear regime, is to be replaced by the Glansdorff-Prigogine evolution criterion.
It is well known that none of the multiphase systems is spontaneously formed. The formation processes are all endergonic.
Therefore the entropy change of the (irreversible) dispersion process can be calculated according to irreversible thermodynamics of nonlinear processes. In principle, diffusion and dispersion can be
treated in a similar way, they apparently lead to formally similar structures. Both processes are irreversible and nonlinear.
According to irreversible thermodynamics in the neighbourhood of equilibrium, the internal entropy change (production) would be the product of all fluxes and forces:
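In generic form, this bilinear product is

dS_i/dt = Σ_i J_i X_i

where the J_i are the fluxes and the X_i the conjugate (generalized) forces; in the linear regime near equilibrium this internal production is non-negative.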
The entropy change (production) is then:
This is the difference between an "excess interfacial energy", 12eA12 , caused by the work needed for dispersing the particles, Wdisp , and the surface energy of component 2 before dispersion, (2A12
(the term (1A1 can be neglected), divided by the average dispersion path length.
This can be estimated empirically with a good theoretical basis by stating that (12eA12 is the energy which is necessary to provide 1 m2 of matrix polymer and force it to wet 1 m2 of carbon black or
of an ICP, during which the interface A12 is formed. This is the minimum amount of work W , necessary to force the volume Vsyst , of the developing polymer dispersion with the viscosity at the shear
rate sr , to flow:
whereby (see ref. [1], footnote [4d] therein):
(12 is the equilibrium surface tension (eq.(7) in [1]).
In reality, W is obviously only a minimum (ideal) value for 12eA12. A better experimental basis is the real dispersion work, Wdisp , which can be experimentally determined.
According to our experience, the total enthalpy needed for preparing such systems is about 2 MJ/kg, a value large enough to suggest that these systems are being driven far away from equilibrium.
A rough estimation can show that one of the conditions for order under nonequilibrium, a significant amount of negative entropy change, is fulfilled [1]. The next steps in analysing the
nonequilibrium properties are to prove the nonlinearity of the process and to find out whether the distance to equilibrium is supercritical. Therefore we consider similarities to and differences from
the irreversible diffusion process, which shows a positive entropy production.
To arrive at a solution for the dispersion law, entropy development during the diffusion process must be considered. This entropy develops over time as an e-function approaching a saturation value.
This behaviour is characteristic of an irreversible process in the direction of thermodynamic equilibrium, and we are still in the thermodynamic branch.
The solution of a "dispersion law" is a function like
where ru , no = radius and number of the undispersed particles, respectively.
The "dispersion law" might therefore have a special solution like
Two new factors have been introduced. rd describes the particle radius after dispersion and c is a constant, describing dispersibility. It is used with the dimension [c] = [m].
The expression c/rd has the dimension 1, where [c] = [m] = [(m2*N)/ (N*m)] = [surface tension/pressure]. This correctly reflects the fact that dispersion is the result of a stress (N/m2), a
"dispersion pressure" induced through the polymer by Wdisp, being applied against a surface tension (N/m) of the material to be dispersed, which works as a counter pressure induced by interfacial
forces between the particles and the matrix and is directed against dispersion. The "dispersion stress" is transferred to the agglomerates to be dispersed via the shear force applied to the polymer
There is some evidence from the arguments given above that dS/dt <0, and also that dxP < 0 for the dispersion process itself. The entropy function is more complicated (visualised in the detail
diagram in
fig. 1.2 )), because in every dispersion, the extruder shear (leading to dispersion) is not continuous, but intermittent, followed by flow relaxation, with phase separation: a further small negative
entropy change step (dS/dt <0).
So we see that dispersion provides enough negative entropy flow (entropy export) to force the system far away from equilibrium and allow it to build up "dissipative structures". The distance from
equilibrium is very large, i.e. we are beyond the thermodynamic branch.
2. Critical shear rate at bifurcation point
This problem is related to the question of whether there is a minimum work input required before dispersion begins to take place. Or, in nonequilibrium terms: What is the critical parameter at which
bifurcation (fig. 1.3) occurs, and what is the value needed to make the system leave the thermodynamic branch?
The "dispersion law" and its possible solution, eq.(1.8), describing the dynamics of dispersion, could enable us to find the instability. With the values
c = 0.1 m, rd = 10^-7 m, t = 100 s
we analysed the evolution of the particle numbers with changing sr (shear rate):
tab. 1.1. It can be seen that an appreciable initial degree of dispersion after a residence time of 100 s will only be found if sr > 1300 s^-1 (dispersion degree > 0.1%). Therefore it can be concluded
that the shear rate is the critical parameter, and its critical value above which dispersion takes place or dissipative dispersion structures are created is around 1000 s^-1.
It seems helpful to reformulate the exponent in eq.(1.8), starting with the considerations about the dimension of c given there:
Empirically we know that there is no dispersion to be detected under pure pump extrusion conditions (pure melting and conveying screw design). It is known that a minimum shear stress has to be
applied to obtain a significant degree of dispersion, e.g. of pigments (cf. fig. 1.4).
Fig. 1.4 shows the development of the colour intensity (or: colour strength) of any kind of pigment in a polymer, which can be measured according to DIN 53234. With increasing dispersion degree, the colour strength (represented by an increasing colour strength value [%]) also increases. This can be achieved either by increasing dispersion time (at a given supercritical shear rate) or by increasing shear rate (for a given residence time). Also certain types of carbon black are used as pigments.
A recently published comparison [5] of such a carbon black dispersed in three low viscosity media (water, squalene, polydimethylsiloxane) showed that dispersion only takes place above a critical
shear rate, and that the lower the viscosity, the higher the necessary critical shear rate. Moreover, [5] is the only report available with a quantitative description of this qualitatively known
dependence. But even [5] does not supply an answer to the question: "What is this critical shear rate in physical terms?"
Introducing the experimentally observed critical boundary for first occurrence of dispersion
, it follows that
This means that there is no mathematical information about n (in eq.(1.8)) below the critical shear rate, and the equation (1.8) with the exponent as shown in (1.10) is not applicable. This is in accordance with our own experiments in low shear extrusion, and with the results published in [5]: below the critical shear rate, no dispersion occurs.
Another approach to describing the observed behaviour is to introduce the above-mentioned definition of 12e: The value of 12eA12 is at least as large as the work
, necessary to overcome the viscous strain of the polymer before it is able to wet the dispersed phase (see above, eq.(1.4)). Introducing this in eq.(1.9), it follows, ( Vdisp = volume of the
dispersed phase):
( Vsyst is the total volume of the matrix polymer/dispersed phase system).
At the present stage, in which we are now just entering a nonequilibrium thermodynamic description of multiphase polymer systems, an "ab initio" theoretical derivation of eq.(1.7) and (1.8)ff is
still lacking. The foregoing thoughts and reformulations may at least lead to some important conclusions:
1) The critical shear rate above which eq.(1.8) results in a first physically appreciable degree of dispersion (> 0.1 %) is in the neighbourhood of what is known to cause "melt fracture" (sr ~ 1000 s^-1, shear stress ~ 10^5 N/m^2). This leads to the hypothesis that dispersion can only occur and be observed under conditions of melt fracture, a widely known rheological instability [4].
("Melt fracture" can be observed at the die of an extruder or melt rheometer as a sudden change in the surface aspect of the emerging melt beyond a critical point in the vicinity of ~ 10^5 N/m^2 or sr ~ 1000 s^-1. Under given extrusion conditions it suddenly appears at a certain critical point during continuously increasing output. It can be viewed as the analogy of turbulent flow in low-viscosity
media. Unlike there, "melt fracture" structures can be frozen by simply cooling the melt strand or the produced film. It exhibits irregular wave and/or fish-scale patterns. These patterns will again
suddenly change to new patterns after a second critical point in response to a further increase in output.)
2) The non-linear behaviour of the melt is then well reflected in the exponent (eq.(1.10)), which leads to a definition of n = f(t) only above the critical shear rate (above "melt fracture").
3) Independently of this approach, eq.(1.11) tells us about two other non-linear phenomena:
a) the non-linear dependence of Vsyst on dispersed phase concentration, cf. the density non-linearity [8];
b) the relation of 12/12e (the "structure factor"), which behaves non-linearly according to fig. 1.5:
12e is not defined for a dispersed phase concentration of zero; for low viscosity systems 12e will be identical to 12 for a certain concentration regime; for "easy-to-disperse" fillers 12e will only
differ from 12 above a certain concentration; in general, two-phase systems cannot reach the 12e level from the 12 equilibrium level without experiencing a (non-linear) jump: there is no continuity
between 12 and 12e.
These results, combining the widely known instability (and dissipative structure!) phenomenon of "melt fracture" with the new nonequilibrium description of multiphase polymer systems, will hopefully stimulate more experimental and theoretical work devoted to these (frozen) dissipative structures. It remains an open question which property of the melt is responsible for its suddenly appearing ability to disperse fillers (pigments, carbon black, etc.) or other incompatible polymers above melt-fracture conditions. We can only speculate that the creation of microvoids (= inner surfaces) and a sudden increase in gas-solubilisation capability at and above "melt fracture" allow the polymer melt to wet the surface of the material that is to be dispersed. This would mean that a polymer melt has completely different (supercritical) properties above melt fracture from those we usually observe.
This new theoretical view of dispersion in polymer systems and the important critical parameter can probably be used to describe other colloidal systems [6] in an analogous way.
In the same way we can look at microemulsion systems. The first attempts of Strey to explain microemulsion structures are still based on equilibrium thermodynamic considerations [9]. But structure generation is equivalent to an entropy decrease, which makes the (−TΔS) term large and positive. Microemulsion scientists are thus forced to assume that the enthalpy of mixing is so large and negative that the total free energy of microemulsion formation is nevertheless negative. However, it is legitimate to doubt whether this is a general phenomenon, or even whether it occurs at all. For these reasons we propose to consider microemulsions and their structure, too, as the result of a supercritical energy input and entropy export, leading to self-organized dissipative structures.
Literature / References
1. B. Wessling, Synth. Met. 45 (1991) 119-149
2. a) I. Prigogine, Angew. Chem. 90 (1978) 704
b) G. Nicolis, I. Prigogine, Self-Organization in Non-Equilibrium Systems, J. Wiley, New York (1977)
3. B. Wessling, Synth. Met. 41-43 (1991) 1057-1062
4. Encyclopedia of Polymer Science and Engineering, Vol. 13, J. Wiley & Sons (1988) 441
5. S. Rwei, S. Horwatt, I. Manas-Zloczower, D. Feke, Intern. Pol. Proc. VI (1991) 98-102
6. B. Wessling, Adv. Mat. 5 (4) (1993) 300-305
7. B. Wessling, Macromol. Symp. 78 (1994) 71-82
8. B. Wessling, Polymer Eng. Sci. 31 (16) (1991) 1200-1206
9. M. Kahlweit, R. Strey, G. Busse, J. Phys. Chem. 94 (1990) 3881
The Decimal data type in VB.NET
In VB 6, the Currency data type was designed for financial calculations. But Microsoft decided that it just didn't do the job so they dropped it and now we have the Decimal data type in VB.NET.
This article tells you all about the Decimal data type in VB.NET: What's new, what works and what doesn't. Like the rest of .NET, Decimal is far more powerful. And like the rest of .NET, there are
hidden traps. Just to get started, here's one you might not have seen before:
If you just happened to use the VB 6 Currency data type to create a record in a file using a structure like this one ...
Private Type FileRecord
Field1 As Integer
CurrencyField As Currency
Field2 As Double
End Type
Then you have a problem upgrading to VB.NET!
According to Microsoft Knowledge Base article KB 906771, .NET just won't read it correctly!
If you have this problem, the KB article referenced above gives more details, but the workaround recommended by Microsoft is to read the value as a VB.NET Int64 and then use the Decimal.FromOACurrency method to convert it.
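Here's a minimal sketch of that workaround. (The file name, the BinaryReader approach and the assumption that the three fields were written back-to-back with no padding are mine, made up for illustration; only the Decimal.FromOACurrency call itself is the piece Microsoft recommends.)

Imports System.IO

Module CurrencyUpgrade
    Sub Main()
        ' Open the file the old VB 6 program wrote.
        Using reader As New BinaryReader(File.OpenRead("legacy.dat"))
            Dim field1 As Short = reader.ReadInt16()        ' VB 6 Integer = 16 bits
            Dim rawCurrency As Long = reader.ReadInt64()    ' Currency = 8 bytes, scaled by 10,000
            Dim field2 As Double = reader.ReadDouble()

            ' FromOACurrency divides the raw Int64 by 10,000,
            ' restoring the four implied decimal places.
            Dim money As Decimal = Decimal.FromOACurrency(rawCurrency)
            Console.WriteLine(money)
        End Using
    End Sub
End Module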
To be complete, there actually was a way to declare something called "Decimal" in VB 6: you could convert a VB 6 Variant to the VB 6 Decimal subtype using the CDec function. In spite of having the same name, the VB 6 Decimal subtype isn't natively supported by .NET. Since it's really a Variant, it's simply converted into an Object in VB.NET and generates an error. It's one of those things you have to convert manually when upgrading a VB 6 program to VB.NET.
The old Currency data type in VB 6 uses 8 bytes of memory and can represent numbers with fifteen digits to the left of the decimal point and four to the right. So it was capable of a sort of "fixed
point" arithmetic with a maximum of four decimal digits of accuracy. But lots of calculations these days just need more. A lot more! So Microsoft created the new Decimal data type for .NET.
Decimal allows up to twenty-nine digits of accuracy and stores all numbers as integers with a "scaling factor" that simply tells VB.NET where to place the decimal point. This means that although the decimal point "floats" in Decimal variables, it is not the same as floating point. The Single and Double data types are floating point: they store numbers as "binary fractions", so a value has to be exactly representable in the binary number system, and some values simply aren't. For example, 1/10 has no exact binary representation, and 1/3 has no exact finite representation in either binary or decimal. A Single or Double simply gets as close as it can to the actual value. The value of a Decimal data type, by contrast, is always exact within the limits of the precision it can handle.
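If you're curious, you can actually peek at that scaling factor with the Decimal.GetBits method. (This little fragment is just an illustration; the value 123.45 is arbitrary, and the bit-shifting pulls the scale out of the element that GetBits documents as holding the sign and scale.)

Dim d As Decimal = 123.45D
' GetBits returns four Integers: three hold the 96-bit integer value (12345 here),
' and the fourth holds the sign bit plus the scale (how far to shift the decimal point).
Dim parts() As Integer = Decimal.GetBits(d)
Dim scale As Integer = (parts(3) >> 16) And &HFF
Debug.WriteLine("Scale: " & scale.ToString)   ' prints 2: the integer 12345 with the point moved two places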
What does this mean in a program? Here's an example in VB.NET to demonstrate.
The Microsoft documentation states that Double and Single data types "store an approximation of a real number." But they don't mention what a "real number" is. This was something that was taught in
your high school math class. If you divide 1 by 3, you get a fraction that repeats forever:
0.333333333333333333 ... <and so forth to infinity>
This is a real number. It's an actual value, but (at least using "base 10" arithmetic) you can't store a completely precise copy of it. This can present a problem for us, since bank auditors like precise numbers.
Let's add the value of 1/3 a hundred thousand times in both Decimal and Double and see what we get.
Dim DecimalVar As Decimal
Dim DoubleVar As Double
Dim AccumDecimal As Decimal = 0
Dim AccumDouble As Double = 0
Dim Difference As Decimal = 0
Dim i As Integer

' Note: 1 / 3 is evaluated in Double precision and then converted,
' so even DecimalVar starts out as an approximation of 1/3.
DecimalVar = 1 / 3
DoubleVar = 1 / 3

For i = 1 To 100000
    AccumDecimal += DecimalVar
    AccumDouble += DoubleVar
Next i

Difference = AccumDecimal - AccumDouble

Debug.WriteLine("AccumDecimal: " & AccumDecimal.ToString)
Debug.WriteLine("AccumDouble: " & AccumDouble.ToString)
Debug.WriteLine("Difference: " & Difference.ToString)
Here's the result:
AccumDecimal: 33333.333333333300000
AccumDouble: 33333.3333332898
Difference: 0.0000000435393303632736
Neither value is exact, but Decimal is a lot closer. Notice, however, that there are five zeros at the end of the Decimal value. That's because a tiny error (on the order of 0.0000000000000001) accumulated every time the Decimal approximation to 1/3 was added. Since the Double value is even further off (remember that it's stored as a "binary fraction"), an even larger error accumulates.
The bottom line is that you can't get ultra-precise calculations with the standard data types. This isn't just a problem with Visual Basic; it's true of all the usual programming languages, because of the way computers store values. If you need more precision than that, you can get it using specialized math software such as Maple.
skoool.ie :: Ask The expert
Q. How should I present calculations? I find these really difficult and generally try to avoid them.
A. For calculations, show the starting point (generally the relevant equation) and the various stages, so that the examiner can follow what you are doing. The great advantage of doing this is that if you make a small mathematical slip and end up with the wrong answer, you can still score almost full marks. If you do not show your approach and method and end up with the wrong answer, you score no marks. At the end of every calculation, give (i) the correct unit and (ii) the same number of significant figures as are used in the question. Do not confuse decimal places with significant figures (for example, 0.0450 has three significant figures but four decimal places). Do not write down all the digits displayed on your calculator, most of which will be meaningless. Check your answer: is it sensible? In particular, check the sign and the powers of ten. Could you have made a simple mistake? A short worked example is given below.
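As a concrete illustration (the numbers below are invented purely for this example and do not come from any past paper): suppose a question gives you 0.0250 mol of solute dissolved in 0.500 L of solution and asks for the concentration.

c = n / V = 0.0250 mol / 0.500 L = 0.0500 mol/L

The starting equation, the substitution and the result are all shown, the answer carries its unit, and it is quoted to three significant figures because the data in the question are given to three significant figures.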
Trajectory and acceleration.
I mean, what would be the formulation of its trajectory vector, if in the first segment it is 1/2at^2, whereas in the second segment it is 1/2aT^2+aTt-1/2at^2?
Is there any way to combine them into one vector?
Conceptual approach to rocket launch.
While the rocket motor is burning, the speed increases to a maximum (V). The distance covered (height gained) equals the average speed multiplied by the time interval (V/2 x t).
Once the rocket engine fails, the rocket slows. With the same numeric value of acceleration it will take the same time to stop. Its initial and final speeds will be the same as when it was accelerating up (just V initial and 0 final, rather than 0 initial and V final), so the extra distance travelled up will be an equal amount.
If the rocket gained height H with the engine burning, it will eventually reach a maximum height of 2H.
The formula s = ut + (1/2)at² shows us that when u is zero and s is doubled, t is increased by a factor of √2.
When this rocket, having reached a height of 2H, falls back to Earth, this is indeed the situation.
So the rocket will have travelled up, with engine firing, for time T.
It will have continued to drift up for an additional time T.
It will then fall back to Earth in time √2·T.
Total trip: (2 + √2)T,
or, as you found, (1 + √2)T measured from the end of the first segment of the motion.
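To spell out the √2 step with symbols (this is just the algebra behind the reasoning above, using the same simplifying assumption that the magnitude of the acceleration, call it a, is the same during the burn, the coast and the fall):

Burn: H = (1/2)aT², and the speed at cut-off is V = aT.
Coast: the rocket decelerates from V to 0 in the same time T and gains another H, so the peak height is 2H.
Fall: starting from rest, 2H = (1/2)a·t_fall². Substituting H = (1/2)aT² gives t_fall² = 2T², so t_fall = √2·T.

Total: T + T + √2·T = (2 + √2)T, as stated.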
MATH 101: Algebra (3)
Coordinate systems, functions and their graphs; linear, quadratic, general polynomial, exponential, and logarithmic functions; equations and inequalities. Not open to students with credit in MATH
104. Prerequisite: MATH 002, or two years of high school algebra and a score of 22 or higher on ACT mathematics, or a qualifying score on the mathematics placement test. LEC