content
stringlengths
86
994k
meta
stringlengths
288
619
Cantilever spreadsheet for dummies Any (ideal) [prismatic cantilever] beam of length L, bent with a deflection of delta, will have the same [S:parabolic:S] [cubic] curve. Dingomack: True, only if the type of applied loading is the same; i.e., in your case, only if there is a concentrated load (P) at the cantilever free end. Q2. At what distance from the fixed end of the beam does the Cmax occur? If my first assumption is correct, this should be a constant. My following answer applies if the cantilever deflection at the free end (y1max) does not exceed 10 % of the cantilever length (L). Cmax occurs at 42.265 % of L, measured from the fixed end; i.e., Cmax occurs at x = 0.42265*L. If y1max exceeds 10 % of L, Cmax might still occur at 42.265 % of L, but I currently do not know if it does or not, because I have not checked that case. Q3. For a given Cmax and beam length L, what is the deflection required? What do you mean by "required"? Did you instead mean, "What is the deflection"? Which deflection? Deflection where, at what point? I need to produce an excel spreadsheet where I can input a desired Cmax and a beam length L, and output the required deflection of the beam, all in millimeters. What do you mean by "required deflection"?
{"url":"http://www.physicsforums.com/showthread.php?s=02ea70d1292f00a62cbd84e0a33e2e76&p=4653803","timestamp":"2014-04-19T09:51:13Z","content_type":null,"content_length":"30191","record_id":"<urn:uuid:168ed290-5abb-4dd5-a24f-f4331409bff2>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
How do mathematicians work alongside physicists? Well there is a fine border between the two, which includes, as an example, quantum field theory. QFT is a very strong physical theory, but needs a lot of backing up in terms of rigorous mathematics. Topological quantum field theory is something I just recently read about and is more of the mathematics side of QFT, whereas as QFT is more physical. But they share ideas, concepts, and tools. They play off of one another. Physicists readily use the Feynman path integral, but to my knowledge, has not been shown to be a completely rigorous technique, especially compared to the mathematical techniques applied in quantum mechanics, which are very rigorous. So there is an example of the physicists using something that was introduced physically, but can be bolstered mathematically, giving even more credit to the physical theories it's applied to. They are different fields, but are very closely related, at least on the borders of theoretical physics. Physics is really very special in that it can attack problems in two very distinct ways. It has experimental techniques, and it has theoretical techniques. Where one falters, the other can pick up until the other catches back up. It's really a fantastic interplay. For general relativity, the mathematics came first, which was then bolstered with experimentation. For the Higgs boson, it has been predicted mathematically, but is waiting for the experimentation. Things like Heisenberg's uncertainly principle was sort of discovered on a physical basis, but now there are strict and rigorous mathematical proofs of the uncertainty principles. Faraday and others observed the interplay between electricity and magnetism experimentally and then the observations were supplemented by the mathematics developed later on which came to the discovery of Maxwell's equations. Maxwell and his peers originally had somewhere around 23 equations, which were whittled down to 4. Now, using the mathematical development of differential forms, Maxwell's 4 equations can now be stated in extremely simple terms in just 2 equations. This discussion could go on and on, as the interplay between math and physics is fundamental. Even David Hilbert, a great mathematician and theoretical physicist who independently discovered general relativity, suggested as one of his 23 problems to provide an axiomatic development of physics, similar to what was done for mathematics in the 18th and 19th centuries.
{"url":"http://www.physicsforums.com/showthread.php?p=2298306","timestamp":"2014-04-16T22:09:23Z","content_type":null,"content_length":"62857","record_id":"<urn:uuid:d4a44c27-0942-4494-8ed1-8e7842bf712e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Salem numbers and uniform distribution modulo 1 Shigeki Akiyama and Yoshio Tanigawa 1 Introduction Uniform distribution of sequences of exponential order growth is an at- tractive and mysterious subject. Koksma's Theorem assures that the se- quence ( n ) (n = 0; 1; : : :) is uniformly distributed modulo 1 for almost all > 1. See [6]. To nd an example of such has been an open problem for a long time. In [7], M. B. Levin constructed an > 1 with more strong distribution properties. His method gives us a way to approximate such step by step. (See also [4, pp.118{130].) However, no `concrete' examples of such are known to date. For instance, it is still an open problem whether (e n ) and ((3=2) n ) are dense or not in R=Z. (c.f. Beukers [2]) On the other hand, one can easily construct > 1 that ( n ) is not uniformly distributed modulo 1. A Pisot number gives us such an example. We recall the denition of Pisot and Salem numbers. A Pisot number is a real algebraic integer greater than 1 whose conjugates other than itself have modulus less than 1. A Salem number is a real algebraic integer greater than 1 whose conjugates other than itself have modulus less than or equal to 1 and at least one conjugate has modulus equal to 1. It is shown that ( n ) tends to
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/249/3796893.html","timestamp":"2014-04-19T08:41:19Z","content_type":null,"content_length":"8396","record_id":"<urn:uuid:7556fb85-b91a-402b-ad9d-70f990de210a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2002 [00045] [Date Index] [Thread Index] [Author Index] RE: Simplify • To: mathgroup at smc.vnet.net • Subject: [mg32663] RE: [mg32654] Simplify • From: "Florian Jaccard" <jaccardf at eicn.ch> • Date: Mon, 4 Feb 2002 03:23:27 -0500 (EST) • Sender: owner-wri-mathgroup at wolfram.com Hello ! Your solution is ok only if a end b are positive real numbers ! Don't forget that (a*b)^n is not a^n * b^n if n is rational and a and b not real positive ! So if you write : PowerExpand[Simplify[(a^(1/3) - b^(1/3)) * ((a^(2/3) + (a*b)^(1/3) + You will get what you want ! Meilleures salutations Florian Jaccard e-mail : jaccardf at eicn.ch -----Message d'origine----- De : Stich Sebastian [mailto:seb_stich at gmx.ch] Envoyé : ven., 1. février 2002 22:11 À : mathgroup at smc.vnet.net Objet : [mg32654] Simplify How can I simplify the following term? (a^(1/3) - b^(1/3)) * ((a^(2/3) + (ab)^(1/3) + b^(2/3))/(a-b))^(1/3) If I use the commands "Simplify", "FullSimplify", "Expand" or "PowerExpand" mathematica doesn't find the solution. The solution is (a^(1/3)-b^(1/3))^(2/3) If it's possible to find the solution in mathematica could mathematica show me the way this solution? Thanks for your answers!
{"url":"http://forums.wolfram.com/mathgroup/archive/2002/Feb/msg00045.html","timestamp":"2014-04-19T04:40:08Z","content_type":null,"content_length":"35021","record_id":"<urn:uuid:3b3f32c3-3e88-4333-ae9c-70ad66c4792b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 88 - SIAM JOURNAL ON SCIENTIFIC COMPUTING , 1998 "... Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph [Bui and Jones, Proc. ..." Cited by 797 (12 self) Add to MetaCart Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph [Bui and Jones, Proc. , 1995 "... We present a unified framework for designing polynomial time approximation schemes (PTASs) for "dense" instances of many NP-hard optimization problems, including maximum cut, graph bisection, graph separation, minimum k-way cut with and without specified terminals, and maximum 3-satisfiability. By d ..." Cited by 174 (28 self) Add to MetaCart We present a unified framework for designing polynomial time approximation schemes (PTASs) for "dense" instances of many NP-hard optimization problems, including maximum cut, graph bisection, graph separation, minimum k-way cut with and without specified terminals, and maximum 3-satisfiability. By dense graphs we mean graphs with minimum degree &Omega;(n), although our algorithms solve most of these problems so long as the average degree is &Omega;(n). Denseness for non-graph problems is defined similarly. The unified framework begins with the idea of exhaustive sampling: picking a small random set of vertices, guessing where they go on the optimum solution, and then using their placement to determine the placement of everything else. The approach then develops into a PTAS for approximating certain smooth integer programs where the objective function and the constraints are "dense" polynomials of constant degree. - Proc. ACM/IEEE Int’l Symp. Physical Design (ISPD 99), ACM , 1998 "... From 1985-1993, the MCNC regularly introduced and maintained circuit benchmarks for use by the Design Automation community. However, during the last five years, no new circuits have been introduced that can be used for developing fundamental physical design applications, such as partitioning and pla ..." Cited by 126 (1 self) Add to MetaCart From 1985-1993, the MCNC regularly introduced and maintained circuit benchmarks for use by the Design Automation community. However, during the last five years, no new circuits have been introduced that can be used for developing fundamental physical design applications, such as partitioning and placement. The largest circuit in the existing set of benchmark suites has over 100,000 modules, but the second largest has just over 25,000 modules, which is small by today’s standards. This paper introduces the ISPD98 benchmark suite which consists of 18 circuits with sizes ranging from 13,000 to 210,000 modules. Experimental results for three existing partitioners are presented so that future researchers in partitioning can more easily evaluate their heuristics. 1 , 1995 "... this paper is organized as follows: Section 2 briefly describes the various ideas and algorithms implemented in METIS. Section 3 describes the user interface to the METIS graph partitioning and sparse matrix ordering packages. Sections 4 and 5 describe the formats of the input and output files used ..." Cited by 122 (5 self) Add to MetaCart this paper is organized as follows: Section 2 briefly describes the various ideas and algorithms implemented in METIS. Section 3 describes the user interface to the METIS graph partitioning and sparse matrix ordering packages. Sections 4 and 5 describe the formats of the input and output files used by METIS. Section 6 describes the stand-alone library that implements the various algorithms implemented in METIS. Section 7 describes the system requirements for the METIS package. Appendix A describes and compares various graph partitioning algorithms that are extensively used. , 2001 "... Problems such as bisection, graph coloring, and clique are generally believed hard in the worst case. However, they can be solved if the input data is drawn randomly from a distribution over graphs containing acceptable solutions. In this paper we show that a simple spectral algorithm can solve all ..." Cited by 87 (3 self) Add to MetaCart Problems such as bisection, graph coloring, and clique are generally believed hard in the worst case. However, they can be solved if the input data is drawn randomly from a distribution over graphs containing acceptable solutions. In this paper we show that a simple spectral algorithm can solve all three problems above in the average case, as well as a more general problem of partitioning graphs based on edge density. In nearly all cases our approach meets or exceeds previous parameters, while introducing substantial generality. We apply spectral techniques, using foremost the observation that in all of these problems, the expected adjacency matrix is a low rank matrix wherein the structure of the solution is evident. , 2001 "... A bisection of a graph with n vertices is a partition of its vertices into two sets, each of size n=2. The bisection cost is the number of edges connecting the two sets. ..." Cited by 72 (7 self) Add to MetaCart A bisection of a graph with n vertices is a partition of its vertices into two sets, each of size n=2. The bisection cost is the number of edges connecting the two sets. "... We present a new top-down quadrisection-based global placer for standard-cell layout. The key contribution is a new general gain update scheme for partitioning that can exactly capture detailed placement objectives on a per-net basis. We use this gain update scheme, along with an efficient multileve ..." Cited by 50 (7 self) Add to MetaCart We present a new top-down quadrisection-based global placer for standard-cell layout. The key contribution is a new general gain update scheme for partitioning that can exactly capture detailed placement objectives on a per-net basis. We use this gain update scheme, along with an efficient multilevel partitioner, as the basis for a new quadrisection-based placer called QUAD. Even though QUAD is a global placer, it can achieve significant improvements in wirelength and congestion distribution over GOR-DIAN-L/DOMINO [SDJ91] [DJS94] (a leading quadratic placer with linear wirelength objective and detailed placement improvement). QUAD can be easily extended to capture various practical considerations; our timing-driven placement can obtain wirelength savings (as well as small cycle time improvements) versus the SPEED [RE95].
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=182434","timestamp":"2014-04-18T06:27:26Z","content_type":null,"content_length":"32949","record_id":"<urn:uuid:c116c72f-6ff8-4865-b920-05f45f3b43f4>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: Near Optimality of Chebyshev Interpolation for Elementary Function Computations June 2004 (vol. 53 no. 6) pp. 678-687 ASCII Text x Ren-Cang Li, "Near Optimality of Chebyshev Interpolation for Elementary Function Computations," IEEE Transactions on Computers, vol. 53, no. 6, pp. 678-687, June, 2004. BibTex x @article{ 10.1109/TC.2004.15, author = {Ren-Cang Li}, title = {Near Optimality of Chebyshev Interpolation for Elementary Function Computations}, journal ={IEEE Transactions on Computers}, volume = {53}, number = {6}, issn = {0018-9340}, year = {2004}, pages = {678-687}, doi = {http://doi.ieeecomputersociety.org/10.1109/TC.2004.15}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Computers TI - Near Optimality of Chebyshev Interpolation for Elementary Function Computations IS - 6 SN - 0018-9340 EPD - 678-687 A1 - Ren-Cang Li, PY - 2004 KW - Elementary function computation KW - libm KW - Chebyshev Interpolation KW - Remez KW - best polynomial KW - accuracy. VL - 53 JA - IEEE Transactions on Computers ER - Abstract—A common practice for computing an elementary transcendental function in an libm implementation nowadays has two phases: reductions of input arguments to fall into a tiny interval and polynomial approximations for the function within the interval. Typically, the interval is made tiny enough so that polynomials of very high degree aren't required for accurate approximations. Often, approximating polynomials as such are taken to be the best polynomials or any others such as the Chebyshev interpolating polynomials. The best polynomial of degree n has the property that the biggest difference between it and the function is smallest among all possible polynomials of degrees no higher than n. Thus, it is natural to choose the best polynomials over others. In this paper, it is proven that the best polynomial can only be more accurate by at most a fractional bit than the Chebyshev interpolating polynomial of the same degree in computing elementary functions or, in other words, the Chebyshev interpolating polynomials will do just as well as the best polynomials. Similar results were obtained in 1967 by Powell who, however, did not target elementary function computations in particular and placed no assumption on the function and, remarkably, whose results imply accuracy differences of no more than 2 to 3 bits in the context of this paper. [1] J. Harrison, T. Kubaska, S. Story, and P.T.P. Tang, The Computation of Transcendental Functions on the IA-64 Architecture Intel Technology J., no. Q4, pp. 1-7, Nov. 1999. [2] R.-C. Li, S. Boldo, and M. Daumas, Theorems on Efficient Argument Reductions Proc. 16th IEEE Symp. Computer Arithmetic, June 2003. [3] P. Markstein, IA-64 and Elementary Functions: Speed and Precision, 2000. [4] J.-M. Muller, Elementary Functions: Algorithms and Implementation. Boston, Basel, Berlin: Birkhåuser, 1997. [5] K.C. Ng, Argument Reduction for Huge Arguments: Good to the Last Bit SunPro, technical report, 1992, http:/www.validlab. com/. [6] M.J.D. Powell, On the Maximum Errors of Polynomial Approximations Defined by Interpolation and by Least Squares Criteria The Computer J., vol. 9, no. 4, pp. 404-407, Feb. 1967. [7] P.T.P. Tang, Table-Driven Implementation of the Exponential Function in IEEE Floating-Point Arithmetic ACM Trans. Math. Software, vol. 15, no. 2, pp. 144-157, June 1989. [8] P.T.P. Tang, Table-Driven Implementation of the Logarithm Function in IEEE Floating-Point Arithmetic ACM Trans. Math. Software, vol. 16, no. 4, pp. 378-400, Dec. 1990. [9] P.T.P. Tang, “Table-Lookup Algorithms for Elementary Functions and Their Error Analysis,” Proc. 10th Symp. Computer Arithmetic, pp. 232-236, 1991. [10] P.T.P. Tang, Table-Driven Implementation of theexpm1Function in IEEE Floating-Point Arithmetic ACM Trans. Math. Software, vol. 18, no. 2, pp. 211-222, June 1992. [11] E. Remez, Sur un ProcédéConvergent d'Approximations Successives pour Déterminer les Polynômes d'Approximation C.R. Académie des Sciences, Paris, vol. 198, 1934. [12] R.-C. Li, P. Markstein, J. Okada, and J. Thomas, The libm Library and Floating-Point Arithmetic in HP-UX for Itanium II June 2000, http://h21007.www2.hp.com/dspp/files/unprotected/ [13] E.W. Cheney, Approximation Theory, second ed. Providence, R.I.: AMS, 1981. [14] R.A. DeVore and G.G. Lorentz, Constructive Approximation. Berlin, Heidelberg, New York: Springer-Verlag, 1993. [15] G.G. Lorentz, Approximation of Functions, second ed. New York: Chelsea Publishing, 1986. [16] INTEL, Intel IA-64 Architecture Software Developer's Manual. Intel Corp., 2002, vol. 3: Instruction Set Reference, document No. 245319-002,http://developer.intel.com/design/itanium/ [17] Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, M. Abramowitz and I.A Stegun, eds., ninth ed. New York: Dover Publications, 1970. Index Terms: Elementary function computation, libm, Chebyshev Interpolation, Remez, best polynomial, accuracy. Ren-Cang Li, "Near Optimality of Chebyshev Interpolation for Elementary Function Computations," IEEE Transactions on Computers, vol. 53, no. 6, pp. 678-687, June 2004, doi:10.1109/TC.2004.15 Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/tc/2004/06/t0678-abs.html","timestamp":"2014-04-17T16:11:37Z","content_type":null,"content_length":"54693","record_id":"<urn:uuid:e2a55ddd-79b7-4636-959d-0197dc25bc77>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help October 26th 2009, 05:32 PM #1 Oct 2009 This is my first post on the forum, but I have been reading this forum for at least 2 years now. It has always been very useful to read some of the posts you guys make. However, now, it seems like I can't solve my problem simply by reading the forum...I would need a little extra help. Here is how it goes: (Note: Let bold letters be vectors) I know that u=(u1,u2,u3), v=(v1,v2,v3) and w=(w1,w2,w3) Let A= $\begin{array}{ccc}u1&u2&u3\\v1&v2&v3\\w1&w2&w3\end {array}$ I have to prove that: (Note: Let u,v and w, be vectors) det(A) $\leq$$\parallel u \parallel \parallel v \parallel \parallel w \parallel$ I tried to replace the norm of u times norm of v times norm of w by its equivalent using their values (u1,u2,u3), (v1,v2,v3), (w1,w2,w3). Hence, $\parallel u \parallel = \sqrt{u \cdot u}$ ... But then, all I got was a huge equation and the strange impression that I am not even close from doing the right thing... Anyone could help me head in the right direction ? Thanks a lot ! This is my first post on the forum, but I have been reading this forum for at least 2 years now. It has always been very useful to read some of the posts you guys make. However, now, it seems like I can't solve my problem simply by reading the forum...I would need a little extra help. Here is how it goes: (Note: Let bold letters be vectors) I know that u=(u1,u2,u3), v=(v1,v2,v3) and w=(w1,w2,w3) Let A= $\begin{array}{ccc}u1&u2&u3\\v1&v2&v3\\w1&w2&w3\end {array}$ I have to prove that: (Note: Let u,v and w, be vectors) det(A) $\leq$$\parallel u \parallel \parallel v \parallel \parallel w \parallel$ I tried to replace the norm of u times norm of v times norm of w by its equivalent using their values (u1,u2,u3), (v1,v2,v3), (w1,w2,w3). Hence, $\parallel u \parallel = \sqrt{u \cdot u}$ ... But then, all I got was a huge equation and the strange impression that I am not even close from doing the right thing... Anyone could help me head in the right direction ? Thanks a lot ! The scalar triple product should do the trick. $\vec{a} \cdot (\vec{b} \times \vec{c})=\begin{vmatrix}a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}$ Now just use the geometric defintions of the dot and cross product $\vec{b} \times \vec{c}=||b|||c||\sin(\theta) \vec{n}$ where $\vec{n}$ is a unit vector normal to the plane formed by b and c $||n||=1$ and $\vec{a} \cdot(\vec{b} \times \vec{c})=||a|||b||||c|\sin(\theta_1)\cos(\theta_2)$ and remember the sine and cosine are always less than or equal to 1. Last edited by TheEmptySet; October 27th 2009 at 02:00 PM. Reason: editied for clairity Hmmm, I'm not sure I understand the role that $\vec{n}$ has to play in this. When you cross two vectors you get a vector that is normal(perpendicular) to the plane spanned by the two vectors. The n is just giving you the direction that the resulting vector is going. The $||b|||c||\sin(\theta)$is the length(magnitude) of the resulting vector. This is needed becuase the dot product is a binary operation on two vectors, not a vector and magnitude. If we didn't have the unit normal the dot product would not make sense. I hope this helps. I have editied the above post for clarity as well. Last edited by TheEmptySet; October 27th 2009 at 02:01 PM. Reason: To make the argument clearer Ok... So $<br /> \vec{b} \times \vec{c}=||b|||c||\sin(\theta) \vec{n}<br />$ $<br /> \vec{a}=||a||\cos(\theta) \vec{n}<br />$ $<br /> \vec{a} \cdot(\vec{b} \times \vec{c})=||a|||b||||c|\sin(\theta_1)\cos(\theta_2)<br />$ Is that right? $<br /> \vec{a}=||a||\cos(\theta) \vec{n}<br />$ Is incorrect. a is just the vector a. $d\cdot e=||d||||e||cos(\theta)$ where $\theta$ is the angle beteween d and e. $a\cdot(b\times c)=||a||\Bigl(||\bigl(||b|||c||\sin(\theta) \vec{n}<br /> \bigr)||\Bigr)\cos(\theta_2)=||a||||b||||c||\sin(\ theta)\cos(\theta_2)||n||$ Now n is a unit vector so: $||a||||b||||c||\sin(\theta)\cos(\theta_2)||n||=||a ||||b||||c||\sin(\theta)\cos(\theta_2)$ Be careful, don't confuse $\theta$ and $\theta_2$ together. The one inside sin is the angle between b and c, and the one inside cos is the angle between a and the resulting vector of (bxc). So basically, $||a||||b||||c||\sin(\theta)\cos(\theta_2)<br />$ is equal to Det(A)...but how does this prove that the det(A) is smaller than $<br /> ||a|||b|||c||<br />$ Edit: Oh, is it simply that $\sin(\theta)$ and $\cos(\theta_2)$ will always give values bellow 1 (or equal to one), and therefore the value of $||a||||b||||c||$ times 2 values that are equal or smaller to 1 will always give a smaller or equal value than $||a||||b||||c||$ Yes, that is correct. Ok, I understand everything except one thing. Isn't the formula supposed to be $||\vec{b} \times \vec{c}||=||b|||c||\sin(\theta) \vec{n}$ and not : $\vec{b} \times \vec{c}=||b|||c||\sin(\theta) \vec{n}$ ??? $\vec{b} \times \vec{c}$ is a vector while $||\vec{b} \times \vec{c}||$ is a length of the vector. The length of the vector that is the result of a cross product $\vec{b} \times \vec c$ is $||\vec{b} \times \vec c||=||b|||c||\sin(\theta)$, but the vector it self is $||b|||c||\sin(\theta) \vec{n}$ Read again what TheEmptySet posted above: When you cross two vectors you get a vector that is normal(perpendicular) to the plane spanned by the two vectors. The n is just giving you the direction that the resulting vector is going. The $||b|||c||\sin(\theta)$is the length(magnitude) of the resulting vector. This is needed becuase the dot product is a binary operation on two vectors, not a vector and magnitude. If we didn't have the unit normal the dot product would not make sense. I hope this helps. I have editied the above post for clarity as well. October 26th 2009, 06:33 PM #2 October 27th 2009, 01:22 PM #3 Oct 2009 October 27th 2009, 01:40 PM #4 October 27th 2009, 03:42 PM #5 Oct 2009 October 27th 2009, 05:06 PM #6 Sep 2009 October 27th 2009, 05:32 PM #7 Oct 2009 October 28th 2009, 03:56 AM #8 Sep 2009 October 28th 2009, 12:32 PM #9 Oct 2009 October 28th 2009, 01:28 PM #10 Sep 2009
{"url":"http://mathhelpforum.com/calculus/110702-proof.html","timestamp":"2014-04-19T12:36:18Z","content_type":null,"content_length":"71004","record_id":"<urn:uuid:2020d2dc-e5af-438d-b60e-b0ee8b22e780>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum - Ask Dr. Math Archives: High School Number Theory This page: number theory Dr. Math See also the Dr. Math FAQ: 0.9999 = 1 0 to 0 power n to 0 power 0! = 1 dividing by 0 number bases Internet Library: number theory About Math basic algebra linear algebra linear equations Complex Numbers Discrete Math Fibonacci Sequence/ Golden Ratio conic sections/ coordinate plane practical geometry Negative Numbers Number Theory Square/Cube Roots Browse High School Number Theory Stars indicate particularly interesting answers or good places to begin browsing. Selected answers to common questions: Diophantine equations. Infinite number of primes? Testing for primality. What is 'mod'? Sk = 1^k+2^K+3^k+...+n^k. Find Sk as a formula. Is there a way to get the answer to a factorial without having to multiply out all the numbers? A heap of 201 stones is divided in several steps into heaps of three stones each... A question on subsets and another on greatest common divisor (GCD). Am I correct in saying that both the whole number set and the integer set have an infinite number of numbers within them, and therefore are of the same size? I'm having trouble with the idea of how to borrow when I am subtracting two numbers in a base other than base ten. Can you help? For numbers A, B, C, and D, subtract A from B, (or vice-versa; you must be left with a whole number, not a negative one). Repeat with B and C, C and D, and D and A. After about 6 steps, you will always end up with 0000. The puzzle is to get as many steps as possible. How does subtraction using the "method of complements" work? Why does it give the correct answer all of the time? How do you compute the sum of B(n)/(n(n+1)) from 1 to infinity, where B(n) denotes the sum of the binary digits of n? I asked my students to keep adding random integers from 1 to 100 until the sum exceeded 100. We then found the average number of terms added. The answer seems to be e. Why? The more we do it, the closer we get. How can I add up a series like 1*1! + 2*2! + 3*3! ... n*n! ? Express 1994 as a sum of consecutive positive integers, and show that this is the only way to do it. Is there a general formula for summing the n^k, where k is a positive integer? What is the formula for the sum of 1/sqrt(i) for i = 1 to n? Can you show me the proof by induction? Given an integer N, can N can be written as a sum of consecutive odd integers? If so, how can I identify *all* the sets of consecutive odd integers that add up to N? Can you prove that in a sequence of 39 consecutive natural numbers there exists at least one number such that the sum of its digits is divisible by 11? Can you prove that if you add the digits of any multiple of nine, then add the digits of that result, and keep going, you eventually wind up with 9? For example, 99 => 9 + 9 = 18 => 1 + 8 = 9. Why does it work? How do you show that every positive integer is a sum of distinct terms of the Fibonacci sequence? Is there a shortcut to find (1^3-1^2)+(2^3-2^2)+(3^3-3^2)... (15^3-15^ 2)? Factorial refers to the product of the first n natural numbers. Is there a name and symbol for the SUM of the first n natural numbers? How many integers are 13 times the sum of their digits? Is there a formula for calculating the summation of numbers from 1 through n? I want to derive a formula for the sum of powers of 2. How can I prove that the sum of the squares of two odd integers cannot be a perfect square? Given that a and b are two consecutive odd prime integers, prove that their sum has three or more prime divisors (not necessarily distinct). Find the smallest number that can be expressed as the sum of two cube numbers in two different ways. Can the sum of two different primes ever be a factor of the product of those primes? What is the smallest number that can be expressed in twelve different ways as the sum of two squares? Can you generate the sequence [400, 399, 393, 392, 384, 375, 360, 356, 337, 329, 311, 300]? By induction, prove that every proper fraction p/q with p less than q can be written as a finite sum of distinct reciprocals of positive integers. Why is the sum of a number with an even number of digits and that same number written in reverse always divisible by 11? How many different ways can 2000 be expressed as the sum of two or more consecutive positive integers? What numbers can be expressed as the sum of a string of consecutive positive integers? Find all sets of positive consecutive integers that sum to 100, and whose digits sum to greater than 30. In what way(s) can 1000 be expressed as the sum of consecutive numbers? Can the sum of two consecutive even integers ever equal the sum of two consecutive odd integers? Why or why not? Why are the powers of 2 the only numbers you cannot get as the sum of a series of consecutive positive integers? Given several sets of prime numbers, use each of the nine non-zero digits exactly once. What is the smallest possible sum such a set could have? How many numbers from 1-100 can be expressed as the sum of the squares of two positive integers? What numbers cannot be expressed as the sum of three squares? Page: [<<first] [<prev] 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 [next>]
{"url":"http://mathforum.org/library/drmath/sets/high_number_theory.html?start_at=761&num_to_see=40&s_keyid=39896850&f_keyid=39896851","timestamp":"2014-04-16T11:11:06Z","content_type":null,"content_length":"25678","record_id":"<urn:uuid:e33ee08f-ecaf-44d1-ac5b-212b7672198d>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
$Grp$ is the category with groups as objects and group homomorphisms as morphisms. More abstractly, we can think of $Grp$ as the full subcategory of $Cat$ with groups as objects. Since groups may be identified with one-object groupoids, it is sometimes useful to regard $Grp$ as a $2$-category, namely as the full sub-$2$-category of Grpd on one-object groupoids. In this case the $2$-morphisms between homomorphisms of groups come from “intertwiners”: inner automorphisms of the target group. On the other hand, if we regard $Grp$ as a full sub-$2$-category of $Grpd_*$, the $2$-category of pointed groups, then this is locally discrete and equivalent to the ordinary $1$-category $Grp$. This is because the only pointed intertwiner between two homomorphisms is the identity. Precisely analogous statements hold for the category Alg of algebras. Revised on June 13, 2013 17:27:08 by Urs Schreiber
{"url":"http://ncatlab.org/nlab/show/Grp","timestamp":"2014-04-20T00:39:32Z","content_type":null,"content_length":"20304","record_id":"<urn:uuid:acf94ac1-831d-4e60-9823-ae3a7212c1c2>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
June 20 M Theory Lesson 203 As usual Carl has jumped ahead with a post on mixing matrices as magic squares. For reference, let us collect here some actual figures for the matrix, given in this article from the Particle Data Group . Absolute value signs are omitted. $M_{ud} = 0.97377 \pm 0.00027$ $M_{us} = 0.2257 \pm 0.0021$ $M_{ub} = 4.31 \pm 0.30 \times 10^{-3}$ $M_{cd} = 0.230 \pm 0.011$ $M_{cs} = 0.957 \pm 0.017 \pm 0.093$ $M_{cb} = 41.6 \pm 0.6 \times 10^{-3}$ $M_{td} = 7.4 \pm 0.8 \times 10^{-3}$ $M_{ts} = 40.6 \pm 2.7 \times 10^{-3}$ $M_{tb} > 0.78$ This is a little different to the values given in the wikipedia article. Standard Model analyses of these quantities can be quite complicated. Following the from before, in a very simple ideal double circulant the magic square property demands that $a + b = c + d$. For the CKM values (squared) we see that rows and columns do indeed sum to 1, and $c + d \ simeq 1$ because $b$ is so small. So combining 1-circulants and 2-circulants is interesting in the context of mixing. Let us play with combinations of the Fourier operators for both cases. For example, Recall that these two matrices can be combined to describe an element of $B_{3}$ using a representation linked to the theory of the field with one element. We obtain which is a Bilson-Thompson type braid for particles. Since $U_ {2}^{\dagger} U_{1} = U_{1}^{\dagger} U_{2}$, we can rewrite the quadruple product as $U_{1}^{\dagger} ( U_{2} U_{1} U_{2}^{\dagger} )$. Let's get back to neutrino mixing. Today, Carl Brannen links to some slides by Smirnov on Universality versus Complementarity for quarks and leptons. Complementarity is the observation that for tribimaximal mixing one has the relations $\theta_{12} + \theta_{12}^{qu} \simeq \frac{\pi}{4}$ $\theta_{23} + \theta_{23}^{qu} \simeq \frac{\pi}{4}$ (and $\theta_{13}$ small) despite the fact that the CKM matrix is very different to tribimaximal. Smirnov then discusses an implied $\nu_{\mu}$, $\nu_{\tau}$ permutation symmetry of the form But observe that this matrix can also be expressed as the sum of two (Hermitian) $3 \times 3$ circulants, since 2-circulants describe the 2-cycles in the permutation group $S_{3}$. As a formal combination of elements of $S_{3}$ (or the braid group $B_{3}$ if appropriate phases are added) we can represent this matrix sum as an element of a diagram algebra on three strands. CV pointed out that the Phoenix Mars Lander has a Facebook account, so now I'm a friend of Phoenix! Today Phoenix told us that the Martian soil is certainly friendly for life! Its pH is between 8 and 9 and quite salty! I also became a fan of GLAST on Facebook. Ben Webster asks us where and how we would build a new research institute. As previously mentioned, I would build a centre for pure Category Theory and its applications in everything from physics and computer science to neuroscience and linguistics. John Armstrong has already signed up for an NZ based institute. The first question is, urban or rural? Come on. This is the 21st century, so let's choose somewhere pleasant to live, where time off from the office can be spent on a variety of outdoor activities. Kaikoura is the place. Lonely Hapuku hut is only a few hours walk from potential institute sites in the lower Hapuku valley. There is access to a large area of scenic mountain hiking. One could take an easy walk up the Mt Fyffe road in the morning and then enjoy whale watching in the afternoon. If tired of swimming with the seals, or skiing, or wine tasting, one could always turn to a geological tour. The so called Kaikoura orogeny, beginning about 25 million years ago, is the uplifting process that forms the Southern Alps. This institute would be cheap to build, since Kaikoura is sparsely populated. It is easily accessible by road and rail from Christchurch, three hours away. It is confusing that we sometimes talk about 0, 1 and 2, and sometimes about 1, 2 and 3, when we really mean the same thing. But one denotes truth values by the former triple and elements of sets by the latter, even if the set contains the number 0. Of course it doesn't matter what one calls objects in a logos, so long as one is careful to explain what structure is being described. Let's stick with 1, 2 and 3 today, because this is conventional notation for the permutations on three objects, given as usual by the $3 \times 3$ circulant matrices with entries 0 and 1. In logos theory multicategories are more important than ordinary categories, not least because operads are examples of multicategories. Consider the basic triangle category, with only three non-identity arrows. If the triangle is viewed as a multicocategory, what arrows can we draw with it? Any number of inputs is allowed, but for the category 3 repeats soon become inevitable. Heavy use of the identity arrows is made. Now consider a triangle with two way arrows between distinct objects. A pair of two way arrows can represent a 2-cycle permutation on two objects, denoted by the Pauli matrix $\sigma_{x}$. But then naive composition of arrows does not give the composition of 2-cycles in $S_{3}$. To obtain such a 3-cycle it is more natural to involve multiarrows! That is, let a trivalent vertex represent the 3-cycle, as we often do in M Theory. I live on a hill only about 250m above the low lying Canterbury plains, in fact on the other side of one of the low hills in the centre of this picture, on the outskirts of the city. But many winter mornings it is like living in the sky (although unfortunately it is often brown). Yesterday the fog sat even lower than in this (stolen) picture, and from home I could see across 100s of kilometres of cloud from a clear sunny day. Now imagine that $\omega$ is an $N$th root of unity for some initially arbitrary $N$. Then our generators $\sigma_{1}$ and $\sigma_{2}$ obey ordinary matrix relations of the form and $\sigma_{1} \ sigma_{2} \sigma_{1} = \sigma_{2} \sigma_{1} \sigma_{2}$ holds. We also have $(\sigma_{1} \sigma_{2} \sigma_{1})^{2} = \omega^{6} \cdot 1$, so if $\omega$ is a 6th root of unity the modular relation holds. One also has that $(\sigma_{1} \sigma_{2})^{3} = \omega^{6} \cdot 1$. This is the operator usually chosen to represent $ST$ in the modular group. Let $\omega$ be the primitive cubed root of unity. Using the ordinary matrix product one finds that the prospective $B_3$ braid generator satisfies $\sigma_{2}^{2} = 1$, but as Lieven points out one can consider fancier matrix products, such as and it follows that instead $\sigma_{2}^{3}$ might be a permutation matrix. Anyhow, one easily verifies that the braid relation $\sigma_{1} \sigma_{2} \ sigma_{1} = \sigma_{2} \sigma_{1} \sigma_{2}$ holds. Moreover, in this case of cubed roots of unity, using ordinary matrix product one gets the relation which reduces the braid group to the modular group. Recall that this process views the group $B_{3}$ as the fundamental group of the complement of the trefoil knot in three dimensional space. Note that the generator $\sigma_{1}$ behaves similarly, reduced by the properties of $\omega$, but never quite to the identity. For powers of $\sigma_{1}$ we have the relations where the big dot means the permutation operation, which has no knowledge of the crossing. What a nice way of looking at the modular group! Category theorists have a fancy way of thinking of semidirect products as a piece of two dimensional group structure, but these simple matrices are enough to see what is going on. A nicer way to represent the neutrino tetrahedron group with $3 \times 3$ operators is to choose since this also obeys $T^{3} = 1$ and $(TS)^{3} = 1$ but $T$ looks a lot more like the circulant $S$ than a diagonal operator. Moreover, the quantum Fourier diagonal still appears in the relation from which it also follows that $T = SD$ and $T^{2} = S \overline{D} = S D^{2}$ where $\overline{D}$ is just the rotation in the opposite direction in the plane. Observe how the squaring of $T$ shifts the horizontal phase factors to vertical ones. Some circulants that pop up in M Theory have very nice properties. For example, consider the idempotent operator $\frac{1}{3} C$ in If we were doing arithmetic modulo 3 this would look like the equation making the democratic matrix into a nilpotent operator. For a phase of $\delta = \pi$ the eigenvalues of $C$ are $(0, \sqrt{3}^{-1}, \sqrt{3}^{-1})$, which may be normalised to $(0,1,1)$. Note that, modulo 3, $C$ is the same as the modular operator $S$, which squares to unity and represents inversion in the unit circle. Modulo 2, the operator $C$ is the complement of the identity $S$. Let's start with $(\mathbb{R}, \mathbb{C}, \mathbb{H})$ and the triple of Riemann surface moduli $(M(0,6), M(1,3), M(2,0))$, which have Euler characteristics $-6, - \frac{1}{6}, - \frac{1}{120}$ respectively. Observe that 120 is the number of elements in the icosahedral group, whereas 6 is the number of elements in $S_3$. The triple of (orthogonal, unitary, symplectic) appeared in Mulase-Waldron T duality for partition functions over twisted graphs. Here, the unitary case is self dual, just like the Platonic tetrahedron. The real (orthogonal) case has half the number of matrix dimensions (punctures) as the quaternionic case, suggesting we associate the genus 1 moduli to $\mathbb{R}$ and the genus 0 moduli to $\mathbb{H}$. The dual graph to the cube is basically the 6 punctured sphere. This leaves the genus 2 moduli for the icosahedron and indeed the 120 in the Euler characteristic suggests a relation. Observe that without the octonions, one does not naturally encounter nonassociative structures in the triples, but such triples are also highly relevant to M Theory. From a categorical perspective, one views these trinities as models of the category 3, the basic triangle, because they naturally form categories with only 3 objects and one natural map between any two objects. The collection of all such sets of three elements is the object 3 as an ordinal which counts cardinalities of sets, except that we have categorified the sets by making them categories! This is why it is not surprising to encounter grouplike cardinalities in the Euler characterstics of these models. (Actually, it is the orbifold structure of the moduli that gives them a groupoid Lieven Le Bruyn has an absolutely wonderful post about Arnold's trinities. Examples include the Platonic groups, the exceptional triple $(E_6, E_7, E_8)$ and the fields $\mathbb{C}$, $\mathbb{H}$ and the octonions. Lieven asks, do you have other trinities you like to worship? In M Theory we have all of these and lots more! The Riemann surface moduli triple $(M(0,6), M(1,3), M(2,0))$ of twistor dimension. Idempotent triples for the particle generations. Three kinds of being in ternary logic. The three squares on an associahedron in dimension 3. Three parity cubes for the exceptional Jordan algebra over the octonions. The three states of Peirce's Hegelian philosophy. The three crossings on a trefoil knot and the braid group $B_3$. The triple $(B_{3}, PSL(2, \mathbb{Z}), S_3)$ of braids, modular group and hexagon (or triangle). Update: A pdf version of Arnold's paper has kindly been provided by Lieven. Aside: I just installed the latest version of Firefox and it has ruined some of the maths fonts. Is this problem going to be fixed? The $2 \times 2$ component of the second factor diagonalises a $2 \times 2$ circulant which is why Harrison et al selected such an operator for their neutrino mass matrix. But mixing is about sending mass states to weak states, so it makes more sense to consider a factorisation $U_{m}^{\dagger}V_{w}$ where $U_{m}$ is the universal $3 \times 3$ circulant diagonalisation operator. One can have fun switching rows or columns. For example, a codiagonalisation of $3 \times 3$ 2-circulants is given by One can combine a row switch in $U_{m}^{\dagger}$ with a column switch in the second operator to obtain which is just the tribimaximal mixing matrix again, up to some phase factors. Let us imagine adding an identity matrix factor as a one dimensional operator on the right, thus forming a triple product of Fourier operators, one for each dimension up to three. Aside: Check out Carl's post on Koide fits for mesons. Following Carl Brannen's convention for tribimaximal mixing, a phase corrected Harrison et al factorisation would look like (sorry about the missing square root in the normalisation constant) where the second factor $V$ has the property that $V^{\dagger} V = 2 I$, for $I$ the identity matrix (the factor of 2 goes away with the appropriate normalisation factor). Let's have fun thinking about other properties of this operator! Speaking of platonic groups in neutrino physics, Lieven Le Bruyn beautifully clarifies the story in a post on Galois. As he points out, these three groups, the tetrahedral, octahedral and in turn correspond to the three exceptional Lie algebras $E_6$, $E_7$, $E_8$ via the McKay correspondence (wrt. their 2-fold covers). Yesterday we came across $\Gamma (3)$ in connection with the neutrino mixing tetrahedron. Recall that the generating function for $\Gamma (3)$ is $j^{\frac{1}{3}}$, where the dimension of $E_8$ appears in the second term of the expansion. But these connections to the exceptional Lie groups have much more to do with lattices and operads than with strings or toes, as Lieven promises to explain soon. M Theory is the theory that explains the structure of stringy geometry, not the theory that confirms so called stringy physics. One place where the origin in the plane naturally appears in the theory of the modular group is in Reduction Theory, nicely explained in a paper recommended by Thomas Riepe. One relaxes the condition that the group action on the upper half plane shift the fundamental domain so that there is no intersection between the two domains, and allows a finite intersection. Then the region shown on the right, which is three times bigger than the usual domain, is allowed as a fundamental domain. Now let $\Gamma (N)$ be the congruence subgroup of the modular group. The translations of the new fundamental region give quotient spaces with punctures at vertices. For $\Gamma (3)$ one has a tetrahedron (like the neutrino tetrahedron), for $\Gamma (4)$ an octahedron and for $\Gamma (5)$ an Recall from Mulase's lectures on the modular group $PSL (2, \mathbb{Z})$ that the generators are given by where $T$ represents a translation by 1 in the complex plane. Note that $S$ really does square to the identity because $\pm 1$ are identified, but $T$ is quite distinct from the tetrahedral generator $(1, \omega , \omega^{2})$ used in neutrino mixing, which is more naturally associated with the quantum Fourier transform. Consider how the diagonal $(\omega , \omega^{2})$ acts on $z$. As a modular transformation it would act via $z \mapsto \frac{az + b}{cz + d} = \omega^{-1}z$ that is, a rotation by $\frac{2 \pi}{3}$ in the plane. This is like the action of $TS$, which also rotates a third of a circle but fixing instead the point $z = e^{\frac{\pi i}{3}}$ which is a vertex of the Grothendieck ribbon graph for the notorious j invariant. Carl Brannen rightly pointed out that the matrices appearing in the tribimaximal mixing papers are in fact basically the same as those that characterise MUBs and the quantum Fourier transform for the prime 3. In fact, let where $(231)$ (oops, it should be $(312)$) denotes as usual the cyclic permutation in $S_{3}$ (sometimes drawn as ribbon diagrams) and $M$ is Carl's notation. Both $(231)$ and $3M$ cube to the identity. The democratic matrix is given by $D = \frac{1}{3}[1 + (231) + (231)^{2}]$, which can be thought of as a vector $(\frac{1}{3} , \frac{1}{3} , \frac{1}{3})$. Observe that the operator $S$ from the $A_{4}$ representation obeys the rules $D \cdot 3S = 3D$ $2D - 3S = 3I$ where $I$ is the identity. I'm beginning to wonder if those poor experimenters are ever going to detect a $\theta_{13} > 0$. Note also that the (norm square) $2 \times 2$ form of the neutrino mass matrix, which was used by Harrison et al, is expressed as which utilises the 2-circulant $3 \times 3$ matrix that happens to square to the identity. In other words, this is a $3 \times 3$ representation of the Pauli spin Fourier polynomial. Thus tribimaximal mixing is expressed as a composition of mass Fourier and spin Fourier components. Recall that a renormalised circulant matrix is a kind of magic square, where we don't worry about summing along diagonals. In neutrino physics, the unitarity of mixing forces the (squared) mixing matrix to be a magic square with rows and columns summing to 1. The tribimaximal case was first discussed by Harrison et al, where the whole matrix follows from the entries $U_{13}$, $U_{23}$ and $U_ {12}$. Labelling columns by $\nu_{1}$, $\nu_{2}$, $\nu_{3}$ and rows by $e$, $\mu$, $\tau$ the matrix $U^{2}$ is $\frac{2}{3}$ $\frac{1}{3}$ $0$ $\frac{1}{6}$ $\frac{1}{3}$ $\frac{1}{2}$ $\frac{1}{6}$ $\frac{1}{3}$ $\frac{1}{2}$ In terms of the standard mixing angles this corresponds to $\theta_{13} = 0$, $\theta_{23} = \frac{\pi}{4}$ and $\textrm{sin} \theta_{12} = \frac{1}{\sqrt{3}}$ with no additional (Dirac) CP violating phase. Given the excellent experimental agreement with this case, the question is, what is the justification for choosing $U_{13} = 0$, $U_{23} = \frac{1}{\sqrt{2}}$ and $U_{12} = \frac{1}{\sqrt{3}} $? Most physicists expect some deviation from tribimaximal mixing, but perhaps there is a good reason for things being so simple. For instance, observe that we can reorder the columns arbitrarily so that $U^{2}$ is derived (assuming one democratic column) from a diagonal $\frac{2}{3}$, $\frac{1}{2}$, $\frac{1}{3}$ which is the length 3 Farey sequence. That is, it has the modular group property that for consecutive fractions $\frac{a}{b}$ and $\frac{c}{d}$, one has $bc - ad = 1$. On the other hand, what mixing do we get if we substitute Carl's neutrino Koide rule for the one assumed by Harrison et al? Note that Harrison et al use the $3 \times 3$ circulant mass matrix for the charged leptons. On using the same quantum Fourier diagonalisation operator for both the charged leptons and neutrinos (see page 7 in Harrison et al) one would find that $U^{\dagger} U = 1$, so the tribimaximal mixing matrix would be replaced by the identity! It is the interplay of $3 \times 3$ circulants and $2 \times 2$ circulants that gives rise to the observed tribimaximal mixing. A paper by Altarelli and Feruglio on tribimaximal mixing was mentioned in talks at Neutrino 08, so I thought I should take a look at it. They begin by describing the 3 dimensional representation of $A_{4}$ which has generators $S$ and $T$ satisfying $(ST)^{3} = S^{2} = 1$, just like the modular group, and also the relation $T^{3} = 1$. Then, letting $\omega$ be the cubed root of unity, one has for $T$ the matrix 0 $\omega^{2}$ 0 0 0 $\omega$ and for $3 S$ the circulant matrix -1 2 2 2 -1 2 2 2 -1 The 12 elements of $A_{4}$ are given by all possible combinations of these generators. Gee, it already sounds a bit like M Theory. I can't imagine what they want with all the fairy fields and SUSY's mumbo jumbo, although in short shrift the tribimaximal mixing matrix appears. The denominator of the Fermi function is derived from the partition function $Z = 1 + \textrm{exp}(- \frac{E - \mu}{kT})$ for the 2 possible occupancies of a fermion state, namely 0 or 1. A ternary analogue resulting in tripled Pauli statistics would require $Z = 1 + 3 \textrm{exp} (- \frac{E}{kT})$ where we arbitrarily shift the energy scale, momentarily. Presumably this corresponds to the three possible ways of occupying the state with one particle, whereas for ordinary fermions there is only one way of occupying a state. Another interpretation is to write $3 \textrm{exp} (- \frac{E}{kT}) = \textrm{exp} (- \frac{E}{kT} + \textrm{log} 3)$ where $\textrm{log} 3$ is an energy level (for one prime object) in the Riemann gas system, whose complete partition function is the Riemann zeta function $\zeta (s)$ for $s = (kT)^{-1}$. A more respectable result using Riemann zeta values is $(\zeta (2) - 1) + (\zeta (3) - 1) + (\zeta (4) - 1) + \cdots = 1$ because the terms in this series start at 0.6449 and rapidly approach zero. It is well known that for even ordinals $\zeta (2k) = \frac{(-1)^{k+1} (2 \pi)^{2k}}{2 (2k)!} B_{2k}$ for Bernoulli numbers $B_{2k}$. More recently, formulas for odd ordinals have been found by Linas Vepstas. From his 2006 paper we have $\zeta (4m - 1) = - 2 \sum_{n} \textrm{Li}_{4m - 1} (e^{- 2 \pi n}) - \frac{1}{2} (2 \pi)^{4m - 1} \sum_{j=0}^{2m} (-1)^{j} \frac{B_{2j} B_{4m - 2j}}{(2j)!(4m - 2j)!}$ $\zeta (4m + 1) = (1 + (-4)^{m} - 2^{4m + 1})^{-1} [-2 \sum_{n} \textrm{Li}_{4m + 1} (e^{- 2 \pi n + \pi i})$ $ + 2(2^{4m+1} - (-4)^{m}) \sum_{n} \textrm{Li}_{4m + 1} (e^{- 2 \pi n}) $ $+ (2 \pi)^{4m+1} \sum_{j=0}^{m} (-4)^{m+j} \frac{B_{4m - 4j + 2}B_{4j}}{(4m - 4j + 2)!(4j)!} $ $+ \frac{1}{2} (2 \pi)^{4m+1} \sum_{j=0}^{2m+1} (-4)^{j} \frac{B_{4m - 2j + 2}B_{2j}}{(4m - 2j + 2)!(2j)!} ]$ for $\textrm{Li}_{s}(x)$ the polylogarithm function, which generalises the Riemann zeta function. In other words, one can think of $\zeta (4m - 1)$ as the $n = 0$ term in a formula relating polylogarithm values to the Bernoulli numbers. Todd and Vishal's Problem of the Week number 3 (solution here) was to compute, for any $n > 1$, the series (from $k = 0$) $S(n) \equiv \sum_{k} B(n+k ; k)^{-1}$ where $B(n+k ; k)$ is a binomial coefficient. In the case $n = 2$ we see that the sum takes the form $S(2) = 1 + \frac{1}{3} + \frac{1}{6} + \frac{1}{10} + \cdots = 2$ which is a sum of reciprocals of triangular numbers $\frac{1}{2} k(k+1)$ (from $k = 1$). For $n = 3$ we obtain the reciprocals of the tetrahedral numbers, and $S(3) = \frac{3}{2}$. The tetrahedral number $T_{k} = \frac{1}{6} k(k+1)(k+2)$ is the sum of the first $k$ triangular numbers. By the way, only three tetrahedral numbers are perfect squares, namely 1, 4 and $T_{48} = 19600$. One guesses that in general $S(n)$ is a series of reciprocals of tetrahedral numbers in dimension $n$. Indeed $S(n) = \frac{n}{n - 1}$ But whenever discussing infinite series of simple polytopes, an M theorist cannot help thinking of the Riemann zeta function. Observe that for $n = 2$ $S(2) = \sum_{k} \frac{2}{k^{2} + k} = 2 \zeta (2) - \sum_{k} \frac{2}{k^{3} + k^{2}}$ $= 2 [ \zeta (2) - \zeta (3) + \zeta (4) - \zeta (5) + \cdots ] = 2$ from which one deduces, allowing cancellation of infinities (!), that $\zeta (2) - \zeta (3) + \zeta (4) - \zeta (5) + \cdots = 1$ What kind of zeta sums do we get in general? Since my brief comment regarding the new F Theory paper was naturally deleted by Woit, although quite evidently very much on topic, I will post it here: Hmmm, that’s odd. A 200 page paper on F theory should be able to recover Brannen’s precise mass values for the three Standard Model neutrinos ($\sum m_{i}$ = 0.06 eV) because the 12 dimensions are recovered very simply from the three Riemann moduli spaces of twistor dimension (= 6 over $\mathbb{R}$) via marked points = spatial dimension and also holes = times (one hole for the torus and two for the genus two surface). While I was busy at Neutrino 08, the NCG blog posted an update on the Vanderbilt meeting. In particular, they note that Manin's lectures on Zeta functions and Motives are available at Katia Consani's homepage! Niranjan Ramachandran spoke about this paper at Vanderbilt. This work, originating in the physical ideas of Deninger, looks at the field over one element (which is fast becoming a popular subject). Deninger writes the zeta function, completed with the infinite prime, in the form $\zeta (s) = \frac{R}{s (s - 1)}$ where $R$ is a regularized determinant to be viewed as an infinite dimensional analogue of a determinant of an endomorphism of a finite dimensional vector space (according to Connes and Consani). Subir Sarkar is one of those rare individuals who can be very critical of his audience without offending anybody (that I know of) because he just makes so much sense. He was assigned the task (by the organisers) of analysing the implications of cosmic ray results. The talk included phrases such as: one could use some Mickey Mouse model of mirror [something] ... or even better, to the audience: please stop drawing limits ... in reference to ideas which have been completely ruled out. The talk began with the catchphrase guaranteed cosmogenic neutrino flux, to which Sarkar had added strong quotation marks to the first word, the main point being that cosmic ray primaries might well be heavy nuclei rather than protons. Auger data was used as evidence for this hypothesis, which is consistent with the observed energy spectrum and predicts a smaller cosmogenic flux. Another 2006 paper looks at Auger bounds for QCD. Colour gluon condensates were mentioned. The steep rise of gluon density at low x should saturate, leading to a supression of the neutrino-nucleus cross section. In summary, he says that neutrino observations are a unique laboratory for both Standard Model and Beyond Standard Model physics. ANITA is a radio balloon experiment, which is flown around Antarctica to view a vast expanse of ice and then (ideally) landed neatly. Despite problems with the last flight, 18 days of good live time were recovered for an average 1.2 km depth of ice. For the next flight they expect a factor of 5 improvement in the $\nu$ rate. Some candidate geosynchrotron events were observed, but satellite data still needs to be checked carefully (solid state relays on satellites can cause false events). B. Dingus overviewed multiwavelength astronomy and, taking advantage of the late lecture slot, showed a few photos of her trip to Franz Josef glacier (and Arthur's Pass). Potential neutrino sources were introduced with this stunning image of the Crab nebula, along with other examples. Unidentified high latitude EGRET sources were also mentioned. And GLAST is due to launch on Thursday! One day of GLAST operation should match 9 years of EGRET. Dingus actually works for Milagro, a TeV gamma ray observatory in Mexico that was turned off in April 2008. HAWC was discussed as a promising future E. Roulet reported from the Pierre Auger Cosmic Ray Observatory in Argentina. This consists of 1600 detectors spread over 3000 $\textrm{km}^{2}$ along with 24 telescopes looking at the sky over the region. Pierre Auger recently confirmed the infamous GZK cutoff (more on this later) associated to proton energies greater than $6 \times 10^{19}$ eV. Rather, although high energy events were observed, the flux falls off by about a half with more than $6 \sigma$. High energy events are extragalactic. For 2006-2007 data, 8 strong correlations between events and nearby AGN were found, as compared to 3 expected. Centaurus A (the closest AGN) corresponds to 2 events located within 3 degrees. No candidates for diffuse neutrino flux were observed. Construction of Pierre Auger is almost Everyone who goes down to Antarctica to work on IceCube must pass through Christchurch, and it is not surprising that UC has a neutrino physics group, which is led by Jenni Adams. The IceCube talk on Saturday morning was given by S. Klein, who began with a description of the detector: 4800 optical modules on 80 strings reaching down 2450m into the ice at the South Pole. This depth represents roughly 100000 years of atmospheric history, and one can see ancient volcanic eruptions due to dust layers observed in the calibration data. There is also a 1 $\textrm{km}^{2}$ surface array of tanks. When the full detector (IC80) is in operation in 2011 they estimate 200 $\nu$ events per day. There are a number of trigger systems, for example, the firing of 5 of 7 adjacent optical modules on a single string within $1.5 \mu\textrm{s}$. About 6% of events are considered sufficiently interesting to send north via satellite. IceCube is close to releasing a skymap for IC22, which is about 5 times more sensitive than IC9. Some searches were triggered, including the very bright GRB080119B event, although IceCube only expects 0.1 associated $\nu_{\mu}$ events. Results: The solar WIMP search found no excess and the AMANDA limits have been improved. The solar outburst of December 13, 2006 indicated no large spectral changes. A preliminary cosmic ray spectrum was shown.
{"url":"http://kea-monad.blogspot.com/2008_06_01_archive.html","timestamp":"2014-04-17T04:05:40Z","content_type":null,"content_length":"123472","record_id":"<urn:uuid:fb924af4-d3cf-4820-ab33-626a4c3c89f7>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
An Example on - - Please install Math Player to see the Math Symbols properly Attempt following question by selecting a choice to answer. What is the slope of the line on which for every point the ordinate is same? DD ff Infinitely large Infinitely small View Solution Steps
{"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgefkxkjbahxegebjxb&.html","timestamp":"2014-04-17T15:49:53Z","content_type":null,"content_length":"40889","record_id":"<urn:uuid:50ad818b-58ac-4542-8b58-a1c0cd218b65>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] Question on estimating standard errors with noisy signals using the quantreg package [R] Question on estimating standard errors with noisy signals using the quantreg package Thorsten Vogel vogeltho at staff.hu-berlin.de Tue Nov 1 09:29:51 CET 2011 Many thanks for your comments. The median of the r_i is something around 1000. And for the time being there are no covariates, though this might change in the future. We are only starting to exploit a very nice data set. Regarding the probability of being in the data, p, I would say it is indeed constant across doctors. The data set is a subset of a larger administrative data set. While the administrative data cover all patients, the data we use cover all patients born on one of four days of the month (which are specified a priori). Since I regard this sampling procedure akin to drawing patients at random from the complete administrative data set, I think p=4/30 is constant across doctors. Again, I very much appreciate any comments or suggestions. Regards, Thorsten -----Ursprüngliche Nachricht----- Von: Roger Koenker [mailto:rkoenker at illinois.edu] Gesendet: Montag, 31. Oktober 2011 21:24 An: Thorsten Vogel Cc: r-help at r-project.org help Betreff: Re: [R] Question on estimating standard errors with noisy signals using the quantreg package On Oct 31, 2011, at 7:30 AM, Thorsten Vogel wrote: > Dear all, > My question might be more of a statistics question than a question on R, > although it's on how to apply the 'quantreg' package. Please accept my > apologies if you believe I am strongly misusing this list. > To be very brief, the problem is that I have data on only a random draw, > all of doctors' patients. I am interested in the, say, median number of > patients of doctors. Does it suffice to use the "nid" option in > More specifically, if the model generating the number of patients, say, > of doctor i is > r_i = const + u_i, > then I think I would obtain the median of the number of doctors' patients > using rq(r~1, ...) and plugging this into summary.rq() using the option > se="iid". How big are the r_i? I presume that they are big enough so that you don't want to worry about the integer "features" of the data? Are there really no covariates? If so then you are fine with the iid option, but if not, better to use "nid". If the r_i can be small, it is worth considering the approach of Machado and Santos-Silva (JASA, 2005). > Unfortunately, I don't observe r_i in the data but, instead, in the data I > only have a fraction p of these r_i patients. In fact, with (known) > probability p a patient is included in the data. Thus, for each doctor i > number of patients IN THE DATA follows a binomial distribution with > parameters r_i and p. For each i I now have s_i patients in the data where > s_i is a draw from this binomial distribution. That is, the problem with > data is that I don't observe r_i but s_i. Is it reasonable to assume that the p is the same across doctors? This to be some sort of compound Poisson problem to me, but I may misunderstand your description. > Simple montecarlo experiments confirm my intuition that standard errors > should be larger when using the "noisy" information s_i/p instead of (the > unobserved) r_i. > My guess is that I can consistently estimate any quantile of the number of > doctors' patients AND THEIR STANDARD ERRORS using the quantreg's rq > rq(I(s/p)~1, ...) and the summary.rq() command with option se="nid". > Am I correct? I am greatful for any help on this issue. > Best regards, > Thorsten Vogel > ______________________________________________ > R-help at r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > and provide commented, minimal, self-contained, reproducible code. More information about the R-help mailing list
{"url":"https://stat.ethz.ch/pipermail/r-help/2011-November/294304.html","timestamp":"2014-04-18T23:20:40Z","content_type":null,"content_length":"7431","record_id":"<urn:uuid:64dce582-3587-4832-aa8c-d121f61f6861>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
How strict can I be in the definition of "2-group"? up vote 5 down vote favorite Recall that a group is an associative, unital monoid $G$ such that the map $(p_1,m) : G \times G \to G\times G$ is an isomorphism of sets. Here $p_1$ is the first projection and $m$ is the multiplication, so the map is $(g_1,g_2) \mapsto (g_1,g_1g_2)$. My question is a basic one concerning the definition of "2-group". Recall that a monoidal category is a category $\mathcal G$ along with a functors $m : \mathcal G \times \mathcal G \to \mathcal G$ and $e: 1 \to \mathcal G$, where $1$ is the category with one object and only identity morphisms, such that certain diagrams commute up to natural isomorphism and those natural isomorphisms satisfy some axioms of their own (the natural isomorphisms are part of the data of the monoidal category). Then a 2-group is a monoidal category $\mathcal G$ such that the functor $(p_1,m): \mathcal G \times \mathcal G \to \mathcal G \times \mathcal G$ is an equivalence of categories. I.e. there exists a functor $b: \mathcal G \times \mathcal G \to \mathcal G \times \mathcal G$ such that $b\circ (p_1,m)$ and $(p_1,m) \circ b$ are naturally isomorphic to the identity. Note that $b$ is determined only up to natural isomorphism of functors. Question: Can I necessarily find such a functor $b$ of the form $b = (p_1,d)$, where $d : \mathcal G \times \mathcal G \to \mathcal G $ is some functor (called $d$ for "division")? If so, can I necessarily find $d = m\circ(i \times \text{id})$, where $i: \mathcal G \to \mathcal G$ is some functor (called $i$ for "inverse")? In any case, the natural follow-up question is to ask all these at the level of 3-groups, etc. ct.category-theory higher-category-theory add comment 2 Answers active oldest votes You can always do this. Take any $b$ and define $d = p_2 b$. Then $b' = (p_1, d)$ is equivalent to the original $b$. To see this note that $$(p_1, m) \circ b = (p_1b, m \circ (p_1 b, d)) \simeq id = (p_1, p_2) $$ The first component shows $p_1 b \simeq p_1$. We use this transformation $\times id$ to show that $b \simeq b' = (p_1, d)$. Now we consider the equivalence $(p_1, m) \circ b' \simeq id$. Here we have, $$(p_1, m) \circ (p_1, d) = (p_1, m \circ (p_1, d)) \simeq id = (p_1, p_2) $$ restricting to $G = G \times \{ 1\} \subseteq G \times G$, this gives a natural isomorphism $m(x, d(x, 1)) \simeq 1$. You can take $i(x) = d(x,1)$, and we have $x i(x) \cong 1$. We also have up vote 3 down vote accepted $$(p_1, d) \circ (p_1, m) = (p_1, d \circ (p_1, m)) \simeq (p_1, p_2) $$ which gives a natural isomorphism, $ d(x, xy) \simeq y$ (writing $m(x,y) = xy$). Thus we have, $$d(x,y) \simeq d(x,1 y) \simeq d(x, x i(x) y) \simeq i(x) y, $$ which is the formula you were after. So we can replace $d(x,y)$ with $m(i(x), y)$ to get a third inverse functor b''. Note that this doesn't mean that we have a strict 2-group, just that we can define the inverse functors and difference functors you asked about. Notice also that we didn't really use anything about G being a 1-category as opposed to an n-category (except the associator and unitors) so this argument generalizes to the n-group setting basically verbatim. As an additional remark, it is also possible to choose isomorphisms $i(x) \cdot x \cong 1$ in such a way as to be compatible with the isomorphism $x \cdot i(x) \cong 1$. This takes a little more work, but the argument is explained in the paper by Baez and Lauda on 2-groups. – Chris Schommer-Pries Feb 16 '10 at 21:28 The Baez-Lauda paper is available here: arxiv.org/abs/math/0307200 – Chris Schommer-Pries Feb 17 '10 at 3:47 add comment I think the answer is no, but cannot give a counterexample off hand. What is true is that there will be an equivalent monoidal category such that the corresponding functor $b$, does have the property. Any 2-group is equivalent to a strict 2-group (which can be assumed to come from a crossed module). What would be an interesting subsidiary question would be can the up vote 2 structure be `deformed' to one which is strict. (Deformed in the sense of a deformation theory.) down vote add comment Not the answer you're looking for? Browse other questions tagged ct.category-theory higher-category-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/15486/how-strict-can-i-be-in-the-definition-of-2-group?sort=newest","timestamp":"2014-04-21T09:49:58Z","content_type":null,"content_length":"58329","record_id":"<urn:uuid:6fd3d607-cb40-49b3-a4aa-4513e0b108ef>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Understanding Binary and Hex numbers Home : Library : Tutorials : Understanding Binary and Hex Understanding Binary and Hex numbers Computers only understand numbers. The first thing to understand about computers is that they are nothing more than a powerful, glorified calculator. The only thing they know, the only thing they understand, is numbers. You may see words on the screen when you're chatting with your friend via AOL, or breathtaking graphics while playing your favorite game, but all the computer sees are numbers. Millions and millions of numbers. That is the magic of computers - they can calculate numbers, lots of numbers - really fast. But why is this? Why do computers only understand numbers? To understand that we need to go deep into the heart of a computer, break it down to its most basic functionality. When you strip away all the layers of fancy software and hardware, what you will find is nothing but a collection of switches. You know the kind, you have them all over your house - light switches. They only have two positions: On or Off. It's the same for computers, only they have millions and millions of the little buggers. Everything a computer does comes down to keeping track of and flipping these millions of switches back and forth between on and off. Everything you type, download, save, listen to or read eventually gets converted to a series of switches in a particular on/off pattern that represents your data. What does this have to do with Binary and Hexidecimal numbers? Let's back up for a minute and look at how human beings deal with numbers first. Most people today use the Arabic numbering system, which is known as the decimal, or Base-10, numbering system (dec means ten). What this means is that we have ten digits in our numbering system: We use these ten digits in various combinations to represent any number that we might need. How we combine these numbers follows a very specific set of rules. If you think back to grade school, you can probably remember learning about the ones, tens, hundreds and thousands places: When counting, you increase each digit in the right-most place column until you reach 9, then you return to zero and increment the next column to the left: I know this all probably seems very remedial and unimportant, but going back to these basic, simplistic rules is very important when learning to deal with other number formats. Would it surprise you to learn that there other numbering systems that have a different base? Somebody, somewhere, a long time ago decided that having ten digits would work best for us. But there really is no reason why our numbering scheme couldn't have had seven, or eight, or even twelve digits. The number of digits really makes no difference (except for our familiarity with them). The same basic rules apply. As it turns out, computers have a numbering system with only two digits. Remember all those switches, each of which can only be on or off? Such an arrangement lends itself very nicely to a Base-2 numbering system. Each switch can represent a place-column with two possible digits: 0 = off, 1 = on. We call such numbers binary numbers (bin means two), and they follow the same basic rules that decimal numbers do: Start with 0, increment to 1, then go back to 0 and increment the next column to the left: binary equivelent Binary numbers are well and good for computers but having only two digits to work with means that your place-columns get very large very fast. As it turns out, there is another numbering scheme that is very common when dealing with computers: Hexidecimal. Hex means six, and recall that dec means ten, so hexidecimal numbers are part of a Base-16 numbering scheme. Years ago, when computers were still a pretty new-fangled contraption, the people designing them realized that they needed to create a standard for storing information. Since computers can only think in binary numbers, letters, text and other symbols have to be stored as numbers. Not only that, but they had to make sure that the number that represented 'A' was the same number on every computer. To facilitate this the ASCII standard was born. The ASCII Chart listed 128 letters (both upper- and lower-case), punctuation and symbols that could be used and recognized by any computer that conformed to the ASCII standard. It also included non-printable values that aren't displayed but perform some other function, such as a tab placeholder (09), an audible bell (07) or an end-of-line marker (13). The various combinations of only eight binary digits, or bits, could be used to represent any character on the ASCII Chart (2^8 = 128). (There were also other competing standards at the time, some of which used a different number of bits and defined different charts, but in the end ASCII became the dominant standard.)^1 128 characters may have seemed like a lot but it didn't take long to notice that the ASCII Chart lacked many of the special vowels used by latin-based languages other than English, such as ä, é, û and Æ. Also lacking were common mathmatical symbols (±, µ, °, ¼) and monetary symbols other than the dollar sign ($) for United States currency (£, ¥, ¢). To make up for this oversight these symbols and a series of simple graphical shapes, mostly for drawing borders, were assembled as an extension to the original ASCII Chart. These additional 128 characters brought the new total to 256 (2^16), with the pair of charts being referred to collectively as the Extended ASCII Chart. Did you notice that the value 256 can be represented as 2 (the base of a binary numbering system) to the 16^th power? This brings us back to hexidecimal (Base-16) numbers. It turns out, through the magic of mathmatical relationships, that every character on the Extended ASCII Chart can be represented by the a two-digit hexidecimal number: 00 - FF (0 - 255 decimal). Whoa! What's up with this FF stuff? Hexidecimal is a Base-16 numbering system, which means that every places column counts up to sixteen individual digits. The decimal system that we humans are familiar with only has a total of ten unique digits, however, so we need to come up with something to represent each of the remaining six digits. We do this by using the first six letters of the alphabet.^2 This means the digits for the hexidecimal numbering system are: 0 1 2 3 4 5 6 7 8 9 A B C D E F And, of course, hexidecimal numbers follow the same basic rules that decimal and binary numbers do. Count up to the last digit, then return to zero and increment the next column to the left: hexidecimal equivelent A 10 B 11 E 14 F 15 1A 26 1F 31 As you can see, the hexidecimal numbering system doesn't advance through the place-columns as quickly as decimal numbers do - and certainly not at the rate of growth experienced by binary numbers! This, coupled with its relationship to the Extended ASCII Chart and subsequent relationship to various other computer concepts, has made the hexidecimal numbering system, or hex, a standard for computer programmers and engineers the world over. It is common when viewing a raw data dump to use a Hex Viewer - software that displays the hex values of each character. This allows one to see every character in the Extended ASCII Chart, even the ones that are not normally printed or visible. If you are a programmer, or aspiring to be one, it is also worth noting that the variable type Byte is, depending on the programming language, 8 bits in size. This means that it can be represented by a single digit hexidecimal number (0-F). If you are programming for the Windows platform in C or C++ you have probably noticed the commonly used variable type DWORD (Double-WORD). A WORD is 16 bits (0-FF) in size, which makes a DWORD 32 bits (0-FFFF). If you are an HTML programmer you have probably seen color values that are composed of hex numbers. Colors are represented as a mixture of Red, Green and Blue values (RGB). Each of these three primary colors can have a value from 0-255 (decimal), which translates into three sets of two-digit hexidecimal numbers: 00 1A FF. This tutorial just touches on the basics of the hexidecimal and binary numbering systems and their importance when working with computers, but I hope that it has provided a good base of understanding from which to start. As always, I welcome feedback, suggestions and corrections. If you have enjoyed this tutorial please be sure to check out others I have written. Home : Library : Tutorials : Understanding Binary and Hex copyright | about | site map © Shawn South, 2002. All rights reserved. Republication, duplication or redistribution of any kind is prohibited without express written permission.
{"url":"http://www.codemastershawn.com/library/tutorial/hex.bin.numbers.php","timestamp":"2014-04-18T08:19:31Z","content_type":null,"content_length":"20581","record_id":"<urn:uuid:5707ca2c-6559-437c-ad33-b568c877d684>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Math_Matrix Bug #10728 fou [2007-04-14 19:07] http://pear.php.net/bugs/10728 [Closed] multiply() method en-us pear-webmaster@lists.php.net pear-webmaster@lists.php.net hourly 1 2000-01-01T12:00+00:00 multiply() method Description: ------------ in the file Matrix.php in the function multiply(&$m1) line 1106 if ($nc1 != $nr) { should be if ($nc != $nr1) { The matrix multiplication doesn't work otherwise. Also I believe this is truly what the author intended because the error message in this if-block says: 'Incompatible sizes columns in matrix must be the same as rows in parameter matrix' It's clear from the code that $nc belongs to the matrix and that $nr1 belongs to the parameter matrix. fou http://pear.php.net/bugs/10728 Math_Matrix Bug Reported by fou 2007-04-14T14:00:53+00:00 PHP: 4.4.4 OS: Mac OS X Version 10.4.9 Package Version: 0.8.0 Description: ------------ in the file Matrix.php in the function multiply(&$m1) line 1106 if ($nc1 != $nr) { should be if ($nc != $nr1) { The matrix multiplication doesn't work otherwise. Also I believe this is truly what the author intended because the error message in this if-block says: 'Incompatible sizes columns in matrix must be the same as rows in parameter matrix' It's clear from the code that $nc belongs to the matrix and that $nr1 belongs to the parameter matrix.]]> Math_Matrix Bug Reported by fou 2007-04-14T14:00:53+00:00 PHP: 4.4.4 OS: Mac OS X Version 10.4.9 Package Version: 0.8.0 Description: ------------ in the file Matrix.php in the function multiply(&$m1) line 1106 if ($nc1 != $nr) { should be if ($nc != $nr1) { The matrix multiplication doesn't work otherwise. Also I believe this is truly what the author intended because the error message in this if-block says: 'Incompatible sizes columns in matrix must be the same as rows in parameter matrix' It's clear from the code that $nc belongs to the matrix and that $nr1 belongs to the parameter matrix.]]> 2007-04-14T14:00:53+00:00 http://pear.php.net/bugs/10728#1176577654 This actually was fixed in the beta 0.8.5 (beta) version.]]> This actually was fixed in the beta 0.8.5 (beta) version.]]> 2007-04-14T19:07:34+00:00
{"url":"http://pear.php.net/feeds/bug_10728.rss","timestamp":"2014-04-19T12:27:50Z","content_type":null,"content_length":"4947","record_id":"<urn:uuid:a09f4d16-6058-43c6-8f48-91327dc86a21>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
the definition of modern algebra modern algebra branch of mathematics concerned with the general algebraic structure of various sets (such as real numbers, complex numbers, matrices, and vector spaces), rather than rules and procedures for manipulating their individual elements Learn more about modern algebra with a free trial on Britannica.com.
{"url":"http://dictionary.reference.com/browse/modern+algebra?qsrc=2446","timestamp":"2014-04-20T11:20:17Z","content_type":null,"content_length":"90148","record_id":"<urn:uuid:a7c94fe3-6f5f-49ab-86e6-9f93a561f942>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
Moment generating function October 24th 2009, 01:33 PM #1 Junior Member Feb 2009 Moment generating function How could I solve this problem? If Y is a random variable with a moment-generating function m(t)and if W is given by W=aY + b, show that the moment-generating function of W is e^(tb)m(at), Start with the definition: $M_W(t) = E(e^{(aY + b)t}$. Please show all your work and state where you get stuck. October 24th 2009, 01:47 PM #2
{"url":"http://mathhelpforum.com/advanced-statistics/110159-moment-generating-function.html","timestamp":"2014-04-21T15:40:08Z","content_type":null,"content_length":"34071","record_id":"<urn:uuid:bf54b08d-955c-454a-833f-34014ec7a0f4>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
Logic-based Control Tutorial workshop for the 10th Mediterranean Conference on Control and Automation July 9-12, 2002, in Lisbon, Portugal The overall objective of this tutorial session is to overview a variety of theoretical tools for synthesizing and analyzing logic-based switching control systems. By a logic-based control system we mean a system that combines continuous dynamics (typically modeled by differential or difference equations) with logic-driven elements. These systems are often also called hybrid. An important category of such systems are those consisting of a continuous-time process to be controlled, a family of fixed-gain or variable-gain candidate controllers, and an event-driven switching logic called a supervisor whose job is to determine in real time which controller should be applied to the process. Examples of supervisory control systems include re-configurable systems, fault correction systems, and certain types of parameter-adaptive systems. Major reasons for introducing logic and switching are to deal with communication, actuator and sensor constraints, with model uncertainty, with unforeseen events or to avoid performing difficult tasks e.g., precise equipment calibration which might otherwise be necessary were one to consider only conventional controls. The aim of this workshop is to provide an overview of algorithms with these capabilities, as well as to discuss various techniques for analyzing the types of switched systems that result. Joćo Hespanha Daniel Liberzon University of California at Santa Barbara University of Illinois at Urbana-Champaign Room 3121, Engineering I Coordinated Science Laboratory Electrical & Computer Eng. Univ. of Illinois University of California 1308 W. Main Street Santa Barbara, CA 93106 USA Urbana, IL 61801 USA Tel: +1 (805) 893-7042 Tel: +1 (217) 244-6750 Fax: +1 (805) 893-3262 Fax: +1 (217) 244-1653 hespanha at ece.ucsb.edu liberzon at uiuc.edu Session I: Switched Control Systems (Liberzon) 1. Why switched control systems? 2. Applications of switched control 3. Stability of switched systems I 4. Stability of switched systems II These lectures (PowerPoint slides) are based on the manuscript: Daniel Liberzon, Control using Logic and Switching, Dec 2001. Session II: Switched Supervisory Control (Hespanha) 5. Supervisory control architecture 6. Estimator-based linear supervisory control 7. Estimator-based nonlinear supervisory control 8. Supervisory control applications These lectures (PowerPoint slides) are based on the manuscript: Joćo Hespanha, Tutorial on Supervisory Control, Nov. 2001.
{"url":"http://www.ece.ucsb.edu/~hespanha/med02-logic/","timestamp":"2014-04-21T12:40:18Z","content_type":null,"content_length":"28571","record_id":"<urn:uuid:ffc38fba-a54e-4ec3-936e-3f34529d6593>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Laplacian Growth and Related Topics (Web site soon online) August 18 - 23, 2008 Organizers: N. Makarov (Caltech), P. Wiegmann (Chicago) This workshop is devoted to mathematical aspects of Laplacian Growth. A broad class of non-equilibrium growth processes have a common law: the normal velocity of the growing boundary of a region is proportional to the gradient of a harmonic field on the exterior. This type of growth (called Laplacian growth) is unstable for nearly all initial configurations. Instabilities develop into fractal singular patterns. Similar instabilities occur in the hydrodynamics of immiscible fluids. In recent years it has been recognized that the theory of Laplacian growth is deeply related to a number of modern branches of mathematical physics and mathematics. Among these are fundamental aspects of complex analysis, deformations of Riemann surfaces, integrable systems and the theory of random matrices. Connections between the Laplacian growth problem and Random Matrix Theory leads to new interrelations between probability theory, singularities in hydrodynamics and conformal analysis. Artem Abanov (Texas A&M University) Oded Agam (The Hebrew University of Jerusalem) Iana Anguelova (Université de Montréal) Ferenc Balogh (Concordia University) Dmitri Beliaev (Princeton University) Marco Bertola (Concordia University) Eldad Bettelheim (The Hebrew University of Jerusalem) Lennart Carleson ( KTH) Darren Crowdy ( Imperial College London) Pavel Etingof (M.I.T.) Giovanni Felder (ETH Zentrum) Ilya Gruzberg (University of Chicago) Bjorn Gustafson (KTH) Matthew Haisting ( Los Alamos National Laboratory) John Harnad (Université de Montréal) Haakan Hedenmalm (KTH) Sam D. Howison (University of Oxford) Dmitry Khavinson (University of South Florida) Dmitry Korotkin (Concordia University) Gregory Lawler (University of Chicago) Seung-Yeop Lee (Université de Montréal) Leonid Levitov (Massachusetts Institute of Technology) Kenneth McLaughlin (University of Arizona) Mark Mineev-Weinstein (Center for Nonlinear Studies, Los Alamos National Laboratory) Yuval Peres (UC Berkeley) Aleix Prats Ferrer (Université de Montréal) Itamar Procaccia (The Weizmann Institute of Science) Bélà Gabor Pusztai (Concordia University) Mihai Puttinar (University of California at Santa Barbara) Steffen Rohde (University of Washington) Yvan Saint-Aubin (Université de Montréal) Makato Sakai (Tokyo Metropolitan University) Kanehisa Takasaki (Kyoto University) Takashi Takebe (Ochanomizu University) Razvan Teodorescu (Los Alamos National Laboratory) Alexander Varchenko (The University of North Carolina at Chapel Hill) Alexander Vasil'iev (Universitetet i Bergen) Anton Zabrodin (Institute of Biochemical Physics) Michel Zinsmeister (Université d’Orléans)
{"url":"http://www.crm.umontreal.ca/Mathphys2008/laplacian_e.shtml","timestamp":"2014-04-18T00:13:16Z","content_type":null,"content_length":"12175","record_id":"<urn:uuid:4514ec47-c885-432e-8aba-689f02c09549>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] unexpected behavior of __array_wrap__ in matrix subclass [Numpy-discussion] unexpected behavior of __array_wrap__ in matrix subclass Aronne Merrelli aronne.merrelli@gmail.... Thu Dec 22 12:44:43 CST 2011 Hello NumPy list, While experimenting with a subclass of numpy.matrix, I discovered cases where __array_wrap__ is not called during multiplication. I'm not sure whether this is a bug or my own misunderstanding of np.matrix & __array_wrap__; if nothing else I thought it would be helpful to describe this in case other people run into the same problem. If the matrix is created from an array of integers, then __array_wrap__ is called if the matrix is multiplied by an integer. It appears that in all other cases, __array_wrap__ is not called for multiplication (int times float scalar, float times float scalar, float matrix times float matrix, etc). For addition, __array_wrap__ is called for all cases that I checked. I did find a possible workaround. If you define a __mul__ method in the matrix subclass, and then just call np.multiply, then __array_wrap__ is called in all cases I expect it to be called. I uploaded a example script here: https://gist.github.com/1511354 Hopefully it is not too confusing. I'm basically abusing the python exception handler to tell whether or not __array_wrap__ is called for any particular case. The MatSubClass shows the problem, and the MatSubClassFixed as the __mul__ method defined. Here are the results I see in my working environment (ipython in EPD 7.1): In [1]: np.__version__ Out[1]: '1.6.0' In [2]: execfile('matrix_array_wrap_test.py') In [3]: run_test() array_wrap called for o2 = o * 2 after o=MatSubClass([1,1]) array_wrap NOT called for o2 = o * 2.0 after o=MatSubClass([1,1]) array_wrap NOT called for o2 = o * 2 after o=MatSubClass([1.0, 1.0]) array_wrap NOT called for o2 = o * 2.0 after o=MatSubClass([1.0, 1.0]) array_wrap called for o2 = o * 2 after o=MatSubClassFixed([1,1]) array_wrap called for o2 = o * 2.0 after o=MatSubClassFixed([1,1]) array_wrap called for o2 = o * 2 after o=MatSubClassFixed([1.0, 1.0]) array_wrap called for o2 = o * 2.0 after o=MatSubClassFixed([1.0, 1.0]) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.scipy.org/pipermail/numpy-discussion/attachments/20111222/4cb0135f/attachment.html More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2011-December/059665.html","timestamp":"2014-04-21T04:43:17Z","content_type":null,"content_length":"5027","record_id":"<urn:uuid:e4c5ecf1-73cd-4f83-ac90-361859af5c4c>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about centering on HLP/Jaeger lab blog Jessica Nelson (Learning Research and Development Center, University of Pittsburgh) uploaded a step-by-step example analysis using mixed models to her blog. Each step is nicely annotated and Jessica also discusses some common problems she encountered while trying to analyze her data using mixed models. I think this is a nice example for anyone trying to learn to use mixed models. It goes through all/most of the steps outlined in Victor Kuperman and my WOMM tutorial (click on the graph to see it full size): One of the most common issues in regression analyses of even balanced experimental data is collinearity between main effects and interactions. To avoid this problem, a simple first step is to center all predictors. In my experience folks often fail to do that simply because it’s a bit more work and we’re all lazy. So here’s an attempt at a simple R function that takes single variables as well as entire dataframes. Read the rest of this entry » While Victor Kuperman and I are preparing our slides for WOMM, I’ve been thinking about how to visualize the process from input variables to a full model. Even though it involves many steps that hugely depend on the type of regression model, which in turn depends on the type of outcome (dependent) variable, there are a number of steps that one always needs to go through if we want interpretable coefficient estimates (as well as unbiased standard error estimates for those coefficients).
{"url":"https://hlplab.wordpress.com/tag/centering/","timestamp":"2014-04-16T07:14:33Z","content_type":null,"content_length":"59695","record_id":"<urn:uuid:2a48cade-23ad-4adc-9a2d-f82e1141707d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Area and Perimeter October 9th 2007, 07:40 PM Area and Perimeter I need help with a work problem consiting of area and perimeter. I understand that A=1/2bh and that P = 2W x 2L. I am confused though when you have two different area's and you are to find the area and perimeter. Please advise, thanks dtofte October 10th 2007, 12:22 AM If you will not show us how the two rectangles are connected to form an L-shaped room, then the perimeter of the room is hard to guess. The area, at any arrangement of the two rectangles, remains the same, Area = (8.5)(11) +(6.5)(7.5) = 93.5 +48.75 = 142.25 sq.ft. October 10th 2007, 01:53 AM Attached L-shaped room I tried making an L-shaped room with the equation editor as best as I could. I hope it helps. Thanks, for the help with getting the area on my question. October 10th 2007, 02:06 AM Okay. I think I got it. So we get the perimeter of the L-shaped room. We start counting from the upper lefthand corner, going clockwise. Perimeter = 8.5 +(11 -6.5) +7.5 +6.5 +7.5 +8.5 + 11 Perimeter = 8.5 +4.5 +7.5 +6.5 +7.5 +8.5 + 11 Perimeter = 54 ft.
{"url":"http://mathhelpforum.com/algebra/20300-area-perimeter-print.html","timestamp":"2014-04-20T16:07:03Z","content_type":null,"content_length":"5400","record_id":"<urn:uuid:f9873e9b-8fcf-4ae0-be99-7cdfc9d96726>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Guide to the Alfred L. Putnam Papers 1928-1977 PDF | XML University of Chicago Library Guide to the Alfred L. Putnam Papers 1928-1977 © 2007 University of Chicago Library Title: Putnam, Alfred L. Papers Dates: 1928-1977 Size: 3.5 linear feet (7 boxes) Special Collections Research Center University of Chicago Library Repository: 1100 East 57th Street Chicago, Illinois 60637 U.S.A. As a professor in the Department of Mathematics, Alfred L. Putnam surveyed mathematics research in Eastern Europe and the Soviet Union, and developed the influential mathematics core Abstract: requirement in the University of Chicago College. This collection contains lecture notes collected by Alfred L. Putnam, documenting the teaching of some of the most influential mathematicians of the 20th century. Information on Use Open for research. No restrictions When quoting material from this collection, the preferred citation is: Putnam, Alfred L. Papers, [Box #, Folder #], Special Collections Research Center, University of Chicago Library Biographical Note Mathematics professor Alfred L. Putnam was born in Dunkirk, New York on March 10, 1916. He was educated at Hamilton College (B.S., 1938) and Harvard University (Ph.D., 1942), where he studied under Saunders Mac Lane. After teaching at Yale for a short time, Putnam joined the faculty of University of Chicago as Assistant Professor of Mathematics in 1945, becoming a Professor Emeritus in 1987. Putnam's work focused on mathematics education research and undergraduate teaching. During the Cold War, Putnam surveyed mathematics education and research in Eastern Europe and the Soviet Union. Interest in this area exploded after the launch of Sputnik, and Putnam's research led to the translation and broader publication of important Soviet research in mathematics. During Robert Hutchins' term as University president, Putnam served as chair of the College Mathematics Staff. This group designed a mathematics core requirement that influenced mathematics curricula at the college level as well as elementary and secondary schools. Alfred Putnam died of cancer at his home in Chesterton, Indiana on March 11, 2004. Scope Note This collection contains lecture notes collected by Alfred L. Putnam during his work as a mathematician. Most of the notes are in bound, printed form, and others were mimeographed and collected in folders; some contain additional annotations or have sheets of handwritten notes inserted. The notes are arranged alphabetically by lecturer; where an editor, translator, or other contributor is known, the name is noted, as is information given about the date and location of the lecture Represented here are many of the most influential mathematicians of the 20th century, including Abraham Adrian Albert, Emil Artin, Garrett Birkhoff, Richard Brauer, Henri Cartan, David Hilbert, Nathan Jacobson, Carl L. Siegel, and Hermann Weyl. In addition to the lecture notes, a copy of mathematician Harley Flanders's doctoral dissertation is also included. Related Resources The following related resources are located in the Department of Special Collections: Albert, Abraham Adrian. Papers Mathematics, Department of. Lecture Notes Mathematics, Department of. Records Subject Headings Box 1 Folder 1 Albert, Abraham Adrian, "Solid Analytical Geometry," University of Chicago, 1947 Box 1 Folder 2 Artin, Emil, "Modern Higher Algebra," notes by Albert A. Blank, New York University, 1947 Box 1 Folder 3 Artin, Emil, "Modern Higher Algebra, Part III, Algebraic Theory, notes by Albert A. Blank, New York University, 1948 Box 2 Folder 1 Artin, Emil, "Algebraic Numbers and Algebraic Functions," Princeton University and New York University, 1950-1951 Box 2 Folder 2 Artin, Emil, "Elements of Algebraic Geometry," notes by G. Bachman, New York University, 1955 Box 2 Folder 3 Birkhoff, Garrett, "A First Course in Modern Algebra," n.d. Box 2 Folder 4 Brauer, Richard, "Galois Theory," Harvard University, 1958 Box 3 Folder 1 Brauer, Richard, Mathematics 211-212, n.d. Box 3 Folder 2 Cartan, Henri, "Algebraic Topography," edited by George Springer and Henry Pollak, Harvard University, 1949 Box 3 Folder 3 Hilbert, David, "Hilbert's Theoretical Logic," translated by George Gaines Leckie and Lewis M. Hammond, 1928 Box 3 Folder 4 Jacobson, Nathan, "Theory of Rings," Mathematics 330, n.d. Box 4 Folder 1 Kaplansky, Irving, "Topological Algebra," 1952 Box 4 Folder 2 Kaplansky, Irving, "Theory of Fields," Mathematics 322, University of Chicago, 1965 Box 4 Folder 3 Kaplansky, Irving, "Homological Dimensions of Rings and Molecules," University of Chicago, ca. 1960s Box 4 Folder 4 Kaplansky, Irving, "Hilbert's Problems," University of Chicago, 1977 Box 4 Folder 5 Kaplansky, Irving, "Infinite Abelian Groups," n.d. Box 4 Folder 6 Mackey, George W., "Theory of Group Representations," notes by James M.G. Fell and David B. Lowdenslager, University of Chicago, 1955 Box 4 Folder 7 Rademacher, Hans, "Analysis," Haverford College, 1952-1953 Box 5 Folder 1 Rademacher, Hans, "Elementary Mathematics from an Advanced Viewpoint," University of Oregon, 1954 Box 5 Folder 2 de Rham, Georges, "On Multiple Integrals," Hamburg, 1938 Box 5 Folder 3 de Rham, Georges, and Kunihiko Kodaira, "Harmonic Integrals," Institute for Advanced Study, Princeton University, 1950 Box 5 Folder 4 Schilling, O.F.G., "Modern Aspects of the Theory of Algebraic Functions," University of Chicago, 1938 Box 5 Folder 5 Serrin, James, "Foundations of Classical Thermodynamics," University of Chicago, 1975 Box 5 Folder 6 Siegel, Carl L., "Analytic Number Theory," notes by B. Friedman, 1945 Box 6 Folder 1 Siegel, Carl L., "Geometry of Numbers," notes by B. Friedman, New York University, 1945-1946 Box 6 Folder 2 Siegel, Carl L., "Analytic Functions of Several Complex Variables," notes by P.T. Bateman, Institute for Advanced Study, Princeton University, 1948-1949 Box 6 Folder 3 Siegel, Carl L., "Lectures on the Analytic Theory of Quadratic Forms," notes by Morgan Ward, Institute for Advanced Study, Princeton University, 1949 Box 7 Folder 1 Weyl, Hermann, "Structure and Representation of Continuous Groups," notes by Richard Brauer, Institute for Advanced Study, Princeton University, 1934-1935 Box 7 Folder 2-3 Whitney, Hassler, "Basic Concepts of Algebra," 1964 Box 7 Folder 4 Geometry of Numbers Seminar, Institute for Advance Study, Princeton University, 1949 Box 7 Folder 5 Flanders, Harley, "Unification of Class Field Theory," dissertation, University of Chicago, 1949
{"url":"http://www.lib.uchicago.edu/e/scrc/findingaids/view.php?eadid=ICU.SPCL.ALPUTNAM","timestamp":"2014-04-19T10:25:13Z","content_type":null,"content_length":"16107","record_id":"<urn:uuid:5d9d0fa3-4e98-4bf9-ab80-c2b9beda968c>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
A Guide to the Galen L. Seever Papers, A Guide to the Galen L. Seever Papers, 1953-1977 Creator: Seever, Galen L. Title: Galen L. Seever Papers Dates: 1953-1977 Abstract: Born in Wichita, Kansas in 1934, Galen Seever received an undergraduate degree in Mathematics from the University of Kansas in 1958. A specialist in the field of functional analysis, Seever taught at UT until the late 1980s. The collection’s contents span the course of Seever’s education and career. Accession 91-5 Extent: 4 ft., 10 in. Laguage: Materials are written in English. Repository: Dolph Briscoe Center for American History, The University of Texas at Austin Born in Wichita, Kansas in 1934, Galen Seever received an undergraduate degree in Mathematics from the University of Kansas in 1958. He completed a doctorate in 1963 at the University of California at Berkeley, under the advisorship of William G. Bade. He taught at the University of California, Los Angeles and the California Institute of Technology before coming to University of Texas at Austin in 1970. A specialist in the field of functional analysis, Seever taught at UT until the late 1980s. The collection’s contents span the course of Seever’s education and career. Included from his student work are course notes from Seever’s days at the University of Kansas and the University of California at Berkeley, and notes leading up to the work of Seever’s Ph. D. dissertation, “Measures on F-Spaces.” Items related to his academic career include further research notes and research materials, such as preprints and unpublished reports from other mathematicians, course materials from Seever’s classes at UT, as well as correspondence and other related personal and professional Forms part of the Archives of American Mathematics. Access Restrictions Unrestricted Access Use Restrictions These papers are stored remotely at CDL. Advance notice required for retrieval. Contact repository for retrieval. Seever, Galen L. University of California at Berkley. Dept. of Mathematics. University of Texas at Austin. Dept.of Mathematics Galen L. Seever Papers, 1953-1977, Archives of American Mathematics, Dolph Briscoe Center for American History, University of Texas at Austin. Detailed Description of the Papers 91-5/1 "Math 250A-250B," "NSF Project, Summer 1953," 1953, 1958 "Math 250A: Complex Variables", "Math 305A: Ordinary Differential Equations," 1958 Research notes and materials: 91-5/1 Dissertation: "Multipliers of p-Integrable Functions" by Alessandro Figi-Talamanca, "215 Spanier," 1959 "250," 1959? "205: Godement," 1959 "205," 1959 "Seminaire Cartan, Exposes XII-XIX," 1951-1952? "Algebra," ca. 1960 "Seminaire Cartan", 1951-1952? "252 Hochschild," undated "Berkeley-Stanford Joint Seminar on Function Algebras," 1961-62 Notes, Research Materials, 1958-1961 Notes, ca.1965 91-5/1 "Operator Representations of Uniform Algebras I," "Algebras of Continuous Functions on Hypersonian Spaces," undated "Convergence Rates of Ergodic Limits for Semigroups and Cosine Functions," 1975, 1978, undated Research Notes and Materials: 91-5/1 Lebeague Measure on IR, Riemann-Stieltjies Integral, undated "Class Notes, Measure Theory": Topology, Miscelleaneous, 1959, 1966, undated 91-5/1 "Measures on F-Spaces," 1963 Research Notes and Materials: 91-5/1 "Class Records: Advanced Calculus 108," 1967 Proposal to the national science foundation for research in analysis, 1967 "Berkeley Memorabilia": [Notes/Papers on F-Spaces, correspondence with Luxemburg, article on Serpinski], ca. 1963 "L(μ) х λ E" [Notes, Correspondence, Reprints] 1966 "143 Notes," ca. 1963 Work for "A Harmonic Space Glossary," Unpublished papers, "The Herve Calculus" ca.1973 Notes, Correspondence, Publications 1963-1975 Kolmogonoff-Krein Theorem, F. and M. Reiz Theorem, Misc., ca. 1970 Notes, Correspondence, Reprints ca. 1975 "Frechet Lattices of Functions," "The Kolmogov Construction," "Separable Processes," Stochastic Intervals," ca.1976 Correspondence, Notes ca.1978 Notes ca.1978 The Banach Lattice, ca. 1978 91-5/2 Report: "Lerbuch der Topologie," Seifert and Trelfall, 1958 Course Materials: 91-5/2 Mathematics 212 (Bishop), Glicksberg, Research Notes and Materials: 91-5/2 "Operator Representations of Uniform Algebras," ca.1971 "Inclusions and Interposition," undated "Math Papers, Class Notes," undated Notes, Reports ca. 1974-1976 W.G. Bade, "Notes on the Space of Continuous Functions on a Compact Hausperff Space," Fall 1957 "Kwon, Young," 1969-1971 "Vector Lattices," ca.1964 Notes, undated "The Gelfland Theory," undated "G. Seever," undated Notes, Preprints: Garnett, Sills, undated Report, undated Notes, undated Dissertation, Bennett, undated Class Notes: Hochschild, undated "206 Bade," 1960 Dissertation: Robertson, "Homogenous Dual Pairs of Locally Compact Abellian Groups," 1965 "Vector Lattices: L and M Spaces," undated "H. Tong's Theorem: u ≤ f ≤ ℓ," undated "Banach Lattice," undated Differentation and Differentials, Stieltjes Integral, ca. 1967 "Vector Lattices," undated Lecture Notes: Malgrange, Narasimhan, 1958 "Kelly Sem. On F Spaces," undated Notes, [See list in folder], undated "250 A," undated "Hirsch, Notes on Differential Topology," undated "Analysis," undated "Dimension Theory, Algebraic Topology," undated "Seminar- Phelps," 1961 Notes: Ruess, Banach Lattice, "AMS Proc in Pure Math," Gaspar, 1972-1973 91-5/3 "Non-Negative Projections Etc.": Non-Negative Projections on Co(X), Inclusions and Interposition, ca.1966 Notes, Reports, Correspondence, ca. 1966 Notes, undated Research, Conrad and Diem, undated Heat Equation, undated Notes, ca.1972 "Operator Representations of Uniform Algebra," ca. 1978 Notes, ca. 1978? "Vector Measures," 1977 Research: Albrecht, Berkson, Dowson, 1978 Research Materials, ca. 1976 91-5/4 Notes, undated, ca. 1978 "Differences of Open Sets," undated Notes, ca. 1986 "§§ 1,2," undated "§§ 3,4," undated "§§ 6," undated "O. Vector Lattices," undated "Stein’s Lectures on HP," 1973-74 "HP in the Setting of Several Complex Variables," 1974 "HP Spaces," 1974 "HP Theory of Rn," 1974 Notes, Report: Interposition of Semi-Continuous Functions on Hypersonian Spaces, 1973 "Fall 1971 Texts": Algebras of Continuous Functions on Hypersonian Spaces, 1971 Notes, Correspondence: "The Disc Algebra is not and Existence Subspace of its Bidual", "An Extension of a Theorem of Linderstrauss," ca. Notes, unknown, ca. 1971 "Konig Reprints," 1960-69 "Stuff with Jorg," undated "Konig’s Caltech Notes,” undated Notes, ca. 1974 Correspondence, Research Materials, undated Research Materials, undated Research Materials: Approximation Theory, Bourgain, Rosenthal, undated Notes: Vector Lattices, Abstract Harmonic Spaces, undated Notes, Research Materials: Luxemburg, Konig, Frieler, undated "Interposition of Semi-Continuous Functions by Continuous Functions," undated Research Materials: Hewitt, Browder, Bichteler, Behrens, undated
{"url":"http://www.lib.utexas.edu/taro/utcah/00346/00346-P.html","timestamp":"2014-04-16T10:57:20Z","content_type":null,"content_length":"24325","record_id":"<urn:uuid:a0377e4f-e29c-486a-8403-7b23d49901ca>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
A note on polynomial congruences, in: Recent Progress in Analytic Number Theory - J. Number Theory , 1999 "... Integers without large prime factors, dubbed smooth numbers, are by now firmly established as a useful and versatile tool in number theory. More than being simply a property of numbers that is conceptually dual to primality, smoothness has played a major role in the proofs of many results, from mult ..." Cited by 10 (1 self) Add to MetaCart Integers without large prime factors, dubbed smooth numbers, are by now firmly established as a useful and versatile tool in number theory. More than being simply a property of numbers that is conceptually dual to primality, smoothness has played a major role in the proofs of many results, from multiplicative questions to Waring’s problem to complexity , 1998 "... We investigate the problem of showing that the values of a given polynomial are smooth (i.e., have no large prime factors) a positive proportion of the time. Although some results exist that bound the number of smooth values of a polynomial from above, a corresponding lower bound of the correct ord ..." Cited by 3 (1 self) Add to MetaCart We investigate the problem of showing that the values of a given polynomial are smooth (i.e., have no large prime factors) a positive proportion of the time. Although some results exist that bound the number of smooth values of a polynomial from above, a corresponding lower bound of the correct order of magnitude has hitherto been established only in a few special cases. The purpose of this paper is to provide such a lower bound for an arbitrary polynomial. Various generalizations to subsets of the set of values taken by a polynomial are also obtained. "... Abstract. We prove a conjecture of Yamauchi which states that the level N for which the new part of J0(N) is Q-isogenous to a product of elliptic curves is bounded. We also state and partially prove a higher-dimensional analogue of Yamauchi’s conjecture. In order to prove the above results, we deriv ..." Add to MetaCart Abstract. We prove a conjecture of Yamauchi which states that the level N for which the new part of J0(N) is Q-isogenous to a product of elliptic curves is bounded. We also state and partially prove a higher-dimensional analogue of Yamauchi’s conjecture. In order to prove the above results, we derive a formula for the trace of Hecke operators acting on spaces S new (N, k) of newforms of weight k and level N. We use this trace formula to study the equidistribution of eigenvalues of Hecke operators on these spaces. For any d ≥ 1, we estimate the number of normalized newforms of fixed weight and level, whose Fourier coefficients generate a number field of degree less than or equal to d. 1. "... Abstract. For suitable pairs of diagonal quadratic forms in 8 variables we use the circle method to investigate the density of simultaneous integer solutions and relate this to the problem of estimating linear correlations among sums of two squares. 1. ..." Add to MetaCart Abstract. For suitable pairs of diagonal quadratic forms in 8 variables we use the circle method to investigate the density of simultaneous integer solutions and relate this to the problem of estimating linear correlations among sums of two squares. 1. "... Dedicated to Étienne Fouvry on his sixtieth birthday Abstract. A new “polynomial sieve ” is presented and used to show that almost all integers have at most one representation as a sum of two values of a given polynomial of degree at least 3. 1. ..." Add to MetaCart Dedicated to Étienne Fouvry on his sixtieth birthday Abstract. A new “polynomial sieve ” is presented and used to show that almost all integers have at most one representation as a sum of two values of a given polynomial of degree at least 3. 1. "... www.elsevier.com/locate/jnt Effective equidistribution of eigenvalues of Hecke operators ..." Add to MetaCart www.elsevier.com/locate/jnt Effective equidistribution of eigenvalues of Hecke operators
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=846883","timestamp":"2014-04-18T22:20:27Z","content_type":null,"content_length":"25240","record_id":"<urn:uuid:75eaef39-6ca6-4f25-9cbe-878f1b977c14>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: A farmer has 336 ft of fencing and wants to build two identical pens for his prize-winning pigs. The pens will be arranged as shown. Determine the dimensions of a pen that will maximize its area. • one year ago • one year ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. Fencing = 3x + 4y 336 = 3x + 4y Area = 2xy A = 2xy x = (336-4y) / 3 A = 2[(336-4y)/3] y Find dA/dy and equate to 0. solve for y. Best Response You've already chosen the best response. I dont understand this part: x = (336-4y) / 3 A = 2[(336-4y)/3] y Find dA/dy and equate to 0. solve for y. Best Response You've already chosen the best response. i rearranged the fencing equation in terms of x and subbed it into the area equation. find the derivative of A with respect to y. make it equal 0. solve for y. sub y into original equation and solve for x. Best Response You've already chosen the best response. I dont know deriatives yet! Best Response You've already chosen the best response. pls help me! I have to study for my final exam!!! Best Response You've already chosen the best response. instead of taking the derivative, find the vertex of the quadratic equation. Best Response You've already chosen the best response. ok! I will try and call for you when I dont get it! Best Response You've already chosen the best response. the x value of your vertex will be y Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50bc2263e4b0bcefefa067ad","timestamp":"2014-04-17T16:00:41Z","content_type":null,"content_length":"86757","record_id":"<urn:uuid:45d4b0ed-6573-47f9-950a-ed289224e82f>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Circles and right June 16th 2009, 10:37 AM #1 Circles and right Give an equation of the circle which passes by the points of intresections of both circles of equations knowing that his centre belongs to right of equation : I think there are infinite solutions June 18th 2009, 07:24 AM #2
{"url":"http://mathhelpforum.com/pre-calculus/93032-circles-right.html","timestamp":"2014-04-23T20:58:28Z","content_type":null,"content_length":"31145","record_id":"<urn:uuid:31414ad7-6c88-4fb4-a188-496bf7c5c55d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Unrealistic RRSP Contribution Expectations We hear frequently that young people should start contributing to an RRSP early in life. Recently, I encountered yet another of these arguments. However, there are some unrealistic expectations buried in the assumptions used. Here is a typical version. If you start contributing $500 per month to an RRSP at age 25 and make an 8% return each year, you'll have about $1.7 million by the time you're 65. But, if you delay making contributions until you're 35, you'll only have about $800,000 at age 65. This is less than half as much. The math is right. The contributions during the first decade really do count for more than the remaining three decades because of the magic of compound interest. However, there is a serious problem with the built-in assumptions. Let's look at this from the point of view of a 65-year old who is just making his last $500 RRSP contribution and is about to retire with $1.7 million. Supposedly, he made $500 RRSP contributions each month starting 40 years ago. But what about inflation? If we assume inflation has averaged 4% per year over those 40 years, that first $500 contribution when our retiree was 25 years old is the equivalent of $2400 today! How many 25-year olds do you know who can contribute $2400 per month (or $28,800 per year) to their RRSPs? I'm a believer in saving money from a young age, but the typical justification of the type I described here is hopelessly flawed. Your early years of saving are important, but not quite as important as they can be made to seem. 23 comments: 1. Michael, You have it backwards and are building towards a weak proposition. The critical piece that you're missing is the diminutive effect of inflation over time. The $2400 amount in current dollars will have a purchasing power equivalent to $500 in 40 years ($2400/(1.04)^40). The $1.7M end state has the equivalent purchasing power of $354,091 under 4% inflation. If you draw that down at a conservative rate of 3% or 4% annually, that provides an annualized income of $10,622 or $14,164. From this baseline, subtract 100 (bonds) or 200 (equities) basis points annually for mutual fund fees and then add on the mutual fund annualized returns net of fees (3-5% for bonds, and 5-7% for equities). After inflation and fees, there's some measly winnings after 40 years. The $500 amount monthly makes more sense than you're concluding in the article. Normally, your writing is quite sensible, but I would suggest you reconsider this one. 2. @Anonymous: You have looked at it from the point of view of a 25-year old today. I looked at it from the point of view of a 65-year old today. If you want to look at it from the point of view of a 25-year old, then my point is that he cannot make $2400 per month contributions for 40 years in order to get $1.7 million in purchasing power at age 65. I believe you have made the same mistake that these analyses usually make. It makes no sense to assume a constant level of contributions over a lifetime. A 25-year might start at $500 per month today, but in 40 years, that amount will have the purchasing power of only $104. It makes no sense to assume that a person would make ever smaller contributions (in purchasing power) as he enters his higher-earning years. A 25-year old who can start at $500 per month today should be planning to increase this amount over time so that he will have more than $354,091 (in today's dollars) at age 65. 3. @Michael: You make a great point. Really, any retirement planning calculator that assumes a person will make a consistent monthly contribution for decades is rather useless. It makes more sense to think in terms of saving a fixed percentage of your income. This accounts for both inflation and career path. And let's face it, as great as it is to contribute early, most of us will focus on other financial goals in our 20s and 30s and then ramp up our RRSP savings after the kids are gone and the house is paid for. 4. Returns ReaperAugust 26, 2010 at 10:45 AM Excellent analysis. I've never thought of this before but I agree completely. Another way to look at it from the 25 yr. old's point of view is to say that instead of earning 8%, he'll really only earn 4% (subtract inflation from the gains). If he contributes $500 per month for 40 years, he'll end up with around $590k (in the 25 yr. old's dollars). But if the 25 yr. old waits 10 years, and then starts contributing $500 per month for 30 years at a real gain of 4%, he ends up with about $350k (in dollars from the point in time where he was So the difference in the methods of analysis is noteworthy. Looking at it one way, you end up with about 130% more money by starting 10 years earlier. Looking at it the other way, there's only a 70% advantage. I guess the key points for this type of analysis are: a) coming up with a realistic rate of return you can expect from your portfolio b) use expected real rates of return, since purchasing power is all that matters in retirement, and you'd expect contributions to (at least) keep pace with inflation. In fact, it's probably realistic (as you suggest) to expect contributions to out-pace inflation. As an individual gets a mortgage paid off and/or children leave home, they are likely to have more money available to save. As a counterpoint though, for most people out there I expect the last thing they need to hear is a good argument why it isn't such a big deal to save much in your early adult years. :-) 5. And don't ignore the fact that an 8% return is extremely optimistic. It's numbers such as those that placed local governments into such debt. Pension obligations can never be met due to underfunding and optimistic expectations. 6. @Michael, Great reply. Hopefully, we can shift to the critical parts for anyone else reading this discussion. [BTW, not in the industry; just have a math and finance interest] Whether you look forward or backward is relatively moot; that's just time-shifting. Your assertion of lowering contribution start points is, to reiterate a point, a weak proposition. For the past 20 years, I have had a modest income and started putting away a minimum of $5000/annum which has varied upward depending on employer contributions. I lived independently since age 18 and bought a first home at age 30, and had a family with my wife staying home for most of the child-raising period. Your proposition is now 20 years in the future of my starting point, but advocates delaying cumulative contributions and advocates a lower start point than my personal example. At a 4% annualized inflation rate, good luck. I've lived the counter-example to your original article advocating 1) delay and 2) reduced contributions at an early age. Initial starting points were extremely important and set the conditions for success. I have the first million in common stock securities along with a furnished 4500-square foot house with a very modest mortgage, with a substantial 20-year road ahead. To recap, the strong proposition is based on setting good/better/best starting conditions (larger initial contributions), paying attention to things that diminish returns (e.g. mutual fund fees), and sticking to your knitting (contribute more if you can annually) through all the ups and downs of each year. The start point is the most critical of the three since it sets habits, shapes the end-vision, and rewards with success. 7. @Canadian Couch Potato: You make a good point about basing the saving rate on income. The actual percentage may vary a little over time, but trying to keep this percentage stable is a sensible @Returns Reaper: You're right that there is an advantage to starting early. It's just that this advantage isn't as large as the flawed analysis makes it out to be. I definitely don't want to send the message that young people don't need to save. @Mark: In today's low inflation environment, you're right that expecting 8% returns is likely unrealistic. However, a real return of 4% over an extended period seems more reasonable. 8. @Anonymous: I think we are arguing about different things. My claim is that most people are in a position to make larger RRSP contributions through the middle of their careers than they are at age 25 due to higher real incomes and inflation. This may not be true for a small minority (possibly including you personally), but it is true for most people. The typical simple analysis of a lifetime of RRSP contributions has the hidden assumption that contributions remain constant in nominal dollars for decades. I don't think this makes sense. If your point is that it is possible for young people to save substantial amounts, I agree. My wife and I paid off our first house in 4 years. Paying off a mortgage isn't exactly the same as saving, but it is similar. 9. Great analysis Michael. It seems like a 25 year old would be better off paying down debt and starting a TFSA, especially if he/she has a lower marginal tax rate. Save the RRSP room for when you're older and (hopefully) have a higher income and the tax rate to match. 10. These popular articles that ignore the time value of money drive me nuts. Yes, investment returns grow over time but over that same time period inflation is relentlessly eating away its value. Also, investment returns are lumpy and luck plays a large role. That's not to say one shouldn't save but one has to have realistic expectations too. 11. @Balance Junkie: You're right that a TFSA is often a better option when you income is low. @CC: Your comment reminds me of when I was a kid and I asked my grandfather if he had earned a million dollars in his life. I reasoned that if he had earned $20,000 per year for 50 years, that would make a million. He tried to be nice about it, but he basically laughed himself silly. I don't think he earned $20,000 total in his first decade of work. Dollars were much bigger and people earned fewer of them back then. 12. I would argue that saving a constant percentage of your income will only be effective if the rate is high enough ie 10%. I think it's more realistic that younger people will save a smaller percentage than older people - they make less money and have more demands (ie kids, first house etc). 13. Good point, Michael. I especially enjoyed the comment you wrote about your grandfather. It reminded me of something I read (I think it was part of Zweig's commentary in the Intelligent Investor. It was a quote from a stand-up comic, I think, who said "Kids are getting stronger these days. 30 years ago it would take two adults to carry home $25 worth of groceries... now any 12 year old can do it." I paraphrased from memory, so am not sure on the numbers, but the point is the same. 14. @Money Smarts: I agree that the saving percentage has to be high enough (particularly through higher income years) to make a difference. Saving 1% won't do much good unless you make millions each @Myke: That quote from a comic is a good one. I'll have to use that sometime. 15. This is an interesting post, and the comments add a lot to the argument and counter-argument as well. Anonymous's comments helped to flesh out some salient points, and I admire his savings It's an intriguing question as to what a 25-year-old investor should be able to save. In my experience, my savings rate was high in my pre-marriage years because I had a decent income, cheap rent, and not much to spend money on. This was a great time to start building a nest egg. I would be tempted, if I had a blog, to write a post along the lines of "The 40-year-old Virgin: The Path to Investment Success". 16. @Gene: Your "40-year old virgin" idea sounds like a conspiracy to keep others from saving money :-) 17. I also have to disagree with your conclusions and think you are looking at it backwards. It's not that the investor would need to contribute $2400 a month at 25, but rather his $500 a month contribution would have to at least keep up with inflation during the next 40 years. You also assume that the investor, although maintaining an absolute dollar amount ($500), would actually be decreasing the value of their contributions with time due to inflation. If you are going to assume inflation, then you should realistically also assume that the contributions per month increase by the rate of inflation each year, e.g.: assuming 4% inflation: 25 - $500 a month 26 - $520 a month 27 - $540 a month etc. Now starting at 25, with 4% inflation and 8% return, the portfolio at 65 will be worth ~$2.8 million. If the investor started at 35, they would begin contributing at $740 a month ($500 at 4% inflation for 10 years). At 65 his portfolio would be worth ~$1.7 million. A difference of ~$1.1 million in the final portfolio value, with the difference in contributions being ~$72,000 in absolute dollars (~$315,000 in actual value at age 65). 18. @Xenko: As the saying goes, I think we are in violent agreement. My point is that it makes no sense to assume that contributions will remain constant for a lifetime. This is the same point you are making. It doesn't matter whether you look at it from age 25 looking forward or age 65 looking backward, the conclusion is the same: contributions should increase over time. 19. Young &amp; SavingAugust 27, 2010 at 11:17 PM I agree that it makes no sense to assume that contributions will remain constant for a lifetime, but I still think these illustrations are helpful. The point is that they demonstrate the importance of compound interest and starting early. Reading similar material in the Wealthy Barber at 15 the lesson I took was contribute as much as possible as early as possible. I started from $50/month into an RRSP in high school to $1000/month into an RRSP and TFSA by 22. Once you understand the importance of compound interest and starting early it follows that you should continue to maximize your investments by increasing your 20. Michael, Make it even simpler so it can be more easily understood. To begin with, assume returns from investment are exactly the same as the inflation rate - I know this is generally not true but it helps illustrate the point I would like to make. $500 now might grow to $1000 in x years, but with the investment returns exactly matching the inflation rate, then the $1000 in x years will have EXACTLY the same purchasing power as $500 now. i.e. they cancel So if they cancel out, then you may as well just talk in 2010 dollars for the entire exercise by assuming that the inflation rate and the return from investment are 0 for the 40 year period. So how much will 500 a month for 40 years accumulate to? A very simple calculation: only $240,000 in 2010 purchasing power. That's all. But that is only the starting point since investment returns GENERALLY do exceed the rate of inflation. By how much? 2%? 3%? And what about taxes? And, of course, there have been long long periods in history when investment returns have been LESS than the inflation rate so what happens if the 25 year old has the misfortune to live in one of those eras? It all gets complicated doesn't it but I'll leave it to others to work out the required adjustment under whatever assumptions they choose. but put another way, the point is that the $1.7Million in 40 years time has to be discounted back to 2010 dollars. And when you do that, it ain't going to be able to buy you anything near $1.7M worth of goods in today's money. Something over $240,000? Almost certainly. But nothing like the fanciful $1.7M quoted. 21. The canonical example of "saving $500 per month" just needs to be refined to "saving $500 in todays dollars per month". That way, whether you projecting into the future (current 25 year old), or looking back at the road already travelled (current 65yo), it's clearly implied that you need to index that $500 (and the accumulated total) by the inflation rate. 22. @Mark: I agree with your conclusion that the $1.7 million is misleading if it is in 40-year old dollars. If it is in present-day dollars for the 65-year old, then the contribution amount for the 25-year old is unrealistic. @Young and @Anonymous: I agree that these illustrations can me motivating. Perhaps they should take the idea of Anonymous and do the calculation in present-day dollars. The result would be less dramatic, but it would still show that saving when young is a good thing. 23. I also think there is a lot of marketing of the pipe dream that the average middle class Joe, contributing $500 a month for 40 years is going to end up with millions of dollars for retirement; marketing done by the Government, who are undoubtedly crapping their pants about paying Old Age Pensions out to baby boomers and do not want the public to panic, and by financial institutions and investment specialists, who want to reap a hefty cut off the mutual funds fees etc. By making the monthly contribution sound reasonable, and the reward mind bogglingly good, lots of people hop on the RRSP bandwagon only to be hit with the realities of all the previous posts- contributions are not constant because of inflation, rates of returns are not guaranteed, and half way through the 40 years, they do not have anywhere near half the money promised. Investing young is undoubtedly the smart move, but for most people they will never achieve the promised nest egg or anything close to it.
{"url":"http://www.michaeljamesonmoney.com/2010/08/unrealistic-rrsp-contribution.html","timestamp":"2014-04-17T21:30:51Z","content_type":null,"content_length":"177976","record_id":"<urn:uuid:971be7fb-fd5e-41fd-99d7-3df51d84e7a6>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
100 times table chart Best Results From Wikipedia Yahoo Answers Youtube From Wikipedia Pie chart A pie chart (or a circle graph) is a circularchart divided into sectors, illustrating proportion. In a pie chart, the arc length of each sector (and consequently its central angle and area), is proportional to the quantity it represents. When angles are measured with 1 turn as unit then a number of percent is identified with the same number of centiturns. Together, the sectors create a full disk. It is named for its resemblance to a pie which has been sliced. The earliest known pie chart is generally credited to William Playfair's Statistical Breviary of 1801. The pie chart is perhaps the most ubiquitous statistical chart in the business world and the mass media. However, it has been criticized, and some recommend avoiding it, pointing out in particular that it is difficult to compare different sections of a given pie chart, or to compare data across different pie charts. Pie charts can be an effective way of displaying information in some cases, in particular if the intent is to compare the size of a slice with the whole pie, rather than comparing the slices among them. Pie charts work particularly well when the slices represent 25 to 50% of the data, but in general, other plots such as the bar chart or the dot plot, or non-graphical methods such as tables, may be more adapted for representing certain information.It also shows the frequency within certain groups of information. The following example chart is based on preliminary results of the election for the European Parliament in 2004. The table lists the number of seats allocated to each party group, along with the derived percentage of the total that they each make up. The values in the last column, the derived central angle of each sector, is found by multiplying the percentage by 360°. *Because of rounding, these totals do not add up to 100 and 360. The size of each central angle is proportional to the size of the corresponding quantity, here the number of seats. Since the sum of the central angles has to be 360°, the central angle for a quantity that is a fraction Q of the total is 360Q degrees. In the example, the central angle for the largest group (European People's Party (EPP)) is 135.7° because 0.377 times 360, rounded to one decimal place(s), equals 135.7. Use, effectiveness and visual perception Pie charts are common in business and journalism, perhaps because they are perceived as being less "geeky" than other types of graph. However statisticians generally regard pie charts as a poor method of displaying information, and they are uncommon in scientific literature. One reason is that it is more difficult for comparisons to be made between the size of items in a chart when area is used instead of length and when different items are shown as different shapes. Stevens' power law states that visual area is perceived with a power of 0.7, compared to a power of 1.0 for length. This suggests that length is a better scale to use, since perceived differences would be linearly related to actual differences. Further, in research performed at AT&T Bell Laboratories, it was shown that comparison by angle was less accurate than comparison by length. This can be illustrated with the diagram to the right, showing three pie charts, and, below each of them, the corresponding bar chart representing the same data. Most subjects have difficulty ordering the slices in the pie chart by size; when the bar chart is used the comparison is much easier.. Similarly, comparisons between data sets are easier using the bar chart. However, if the goal is to compare a given category (a slice of the pie) with the total (the whole pie) in a single chart and the multiple is close to 25 or 50 percent, then a pie chart can often be more effective than a bar graph. Variants and similar charts Polar area diagram The polar area diagram is similar to a usual pie chart, except sectors are equal angles and differ rather in how far each sector extends from the center of the circle. The polar area diagram is used to plot cyclic phenomena (e.g., count of deaths by month). For example, if the count of deaths in each month for a year are to be plotted then there will be 12 sectors (one per month) all with the same angle of 30 degrees each. The radius of each sector would be proportional to the square root of the death count for the month, so the area of a sector represents the number of deaths in a month. If the death count in each month is subdivided by cause of death, it is possible to make multiple comparisons on one diagram, as is clearly seen in the form of polar area diagram famously developed by Florence Nightingale. The first known use of polar area diagrams was by André-Michel Guerry, which he called courbes circulaires, in an 1829 paper showing seasonal and daily variation in wind direction over the year and births and deaths by hour of the day. Léon Lalanne later used a polar diagram to show the frequency of wind directions around compass points in 1843. The wind rose is still used by meteorologists. Nightingale published her rose diagram in 1858. The name "coxcomb" is sometimes used erroneously. This was the name Nightingale used to refer to a book containing the diagrams rather than the diagrams themselves. It has been suggested that most of Nightingale's early reputation was built on her ability to give clear and concise presentations of data. Spie chart A useful variant of the polar area chart is the spie chart designed by Feitelson . This superimposes a normal pie chart with a modified polar area chart to permit the comparison of a set of data at two different states. For the first state, for example time 1, a normal pie chart is drawn. For the second state, the angles of the slices are the same as in the original pie chart, and the radii vary according to the change in the value of each variable. In addition to comparing a partition at two times (e.g. this year's budget distribution with last year's budget distribution), this is useful for visualizing hazards for population groups (e.g. the distribution of age and gener groups among road casualties compared with these groups's sizes in the general population). The R Graph Gallery provides an example. Multi-level pie chart Multi-level pie chart, also known as a radial tree c Multiplication table In mathematics, a multiplication table (sometimes, less formally, a times table) is a mathematical table used to define a multiplication operation for an algebraic system. The decimal multiplication table was traditionally taught as an essential part of elementary arithmetic around the world, as it lays the foundation for arithmetic operations with our base-ten numbers. Many educators believe it is necessary to memorize the table up to 9 × 9. In his 1820 book The Philosophy of Arithmetic, mathematician John Leslie published a multiplication table up to 99 × 99, which allows numbers to be multiplied in pairs of digits at a time. Leslie also recommended that young pupils memorize the multiplication table up to 25 × 25. Traditional use In 493 A.D., Victorius of Aquitaine wrote a 98-column multiplication table which gave (in Roman numerals) the product of every number from 2 to 50 times and the rows were "a list of numbers starting with one thousand, descending by hundreds to one hundred, then descending by tens to ten, then by ones to one, and then the fractions down to 1/144" (Maher & Makowski 2001, p.383) The traditional rote learning of multiplication was based on memorization of columns in the table, in a form like 1 × 10 = 10 2 × 10 = 20 3 × 10 = 30 4 × 10 = 40 5 × 10 = 50 6 × 10 = 60 7 × 10 = 70 8 × 10 = 80 9 × 10 = 90 10 x 10 = 100 11 x 10 = 110 12 x 10 = 120 13 x 10 = 130 14 x 10 = 140 15 x 10 = 150 16 x 10 = 160 17 x 10 = 170 18 x 10 = 180 19 x 10 = 190 100 x 10 = 1000 This form of writing the multiplication table in columns with complete number sentences is still used in some countries instead of the modern grid above. Patterns in the tables There is a pattern in the multiplication table that can help people to memorize the table more easily. It uses the figures below: → → 1 2 3 2 4 ↑ 4 5 6 ↓ ↑ ↓ 7 8 9 6 8 � � 0 0 Fig. 1 Fig. 2 For example, to memorize all the multiples of 7: 1. Look at the 7 in the first picture and follow the arrow. 2. The next number in the direction of the arrow is 4. So think of the next number after 7 that ends with 4, which is 14. 3. The next number in the direction of the arrow is 1. So think of the next number after 14 that ends with 1, which is 21. 4. After coming to the top of this column, start with the bottom of the next column, and travel in the same direction. The number is 8. So think of the next number after 21 that ends with 8, which is 28. 5. Proceed in the same way until the last number, 3, which corresponds to 63. 6. Next, use the 0 at the bottom. It corresponds to 70. 7. Then, start again with the 7. This time it will correspond to 77. 8. Continue like this. Figure 1 is used for multiples of 1, 3, 7, and 9. Figure 2 is used for the multiples of 2, 4, 6, and 8. These patterns can be used to memorize the multiples of any number from 1 to 9, except 5. In abstract algebra Multiplication tables can also define binary operations on groups, fields, rings, and other algebraic systems. In such contexts they can be called Cayley tables. For an example, see octonion. Standards-based mathematics reform in the USA In 1989, the National Council of Teachers of Mathematics (NCTM) developed new standards which were based on the belief that all students should learn higher-order thinking skills, and which recommended reduced emphasis on the teaching of traditional methods that relied on rote memorization, such as multiplication tables. Widely adopted texts such as Investigations in Numbers, Data, and Space (widely known as TERC after its producer, Technical Education Research Centers) omitted aids such as multiplication tables in early editions. It is thought by many that electronic calculators have made it unnecessary or counter-productive to invest time in memorizing the multiplication table. NCTM made it clear in their 2006 Focal Points that basic mathematics facts must be learned, though there is no consensus on whether rote memorization is the best method. Line chart A line chart or line graph is a type of graph, which displays information as a series of data points connected by straight line segments. It is a basic type of chart common in many fields. It is an extension of a scatter graph, and is created by connecting a series of points that represent individual measurements with line segments. A line chart is often used to visualize a trend in data over intervals of time – a time series– thus the line is often drawn chronologically. In the experimental sciences, data collected from experiments are often visualized by a graph that includes an overlaid mathematical function depicting the best-fit trend of the scattered data. This layer is referred to as a best-fit layer and the graph containing this layer is often referred to as a line graph. For example, if one were to collect data on the speed of a body at certain points in time, one could visualize the data by a data table such as the following: The table "visualization" is a great way of displaying exact values, but a very bad way of understanding the underlying patterns that those values represent. Because of these qualities, the table display is often erroneously conflated with the data itself; whereas it is just another visualization of the data. Understanding the process described by the data in the table is aided by producing a graph or line chart of Speed versus Time. In this context, Versus (or the abbreviations vs and VS), separates the parameters appearing in an X-Y (two-dimensional) graph. The first argument indicates the dependent variable, usually appearing on the Y-axis, while the second argument indicates the independent variable, usually appearing on the X-axis. So, the graph of Speed versus Time would plot time along the x-axis and speed up the y-axis. Mathematically, if we denote time by the variable t, and speed by v, then the function plotted in the graph would be denoted v(t) indicating that v (the dependent variable) is a function of t. It is simple to construct a "best-fit" layer consisting of a set of line segments connecting adjacent data points; however, such a "best-fit" is usually not an ideal representation of the trend of the underlying scatter data for the following reasons: 1. It is highly improbable that the discontinuities in the slope of the best-fit would correspond exactly with the positions of the measurement values. 2. It is highly unlikely that the experimental error in the data is negligible, yet the curve falls exactly through each of the data points. A true best-fit layer should depict a continuous mathematical function whose parameters are determined by using a suitable error-minimization scheme, which appropriately weights the error in the data In either case, the best-fit layer can reveal trends in the data. Further, measurements such as the gradient or the area under the curve can be made visually, leading to more conclusions or results from the data. From Yahoo Answers Question:Can i Plz get a website tht i can print out the time tables plz ? and thnk u if u do it will really help me on my test OKAy i dont need no 1-12 i need it all the up 1-100 orrr 1-30 :) thnx Answers:This one: http://neoparaiso.com/imprimir/multiplos-y-submultiplos.html It goes to 100 vertically and to 20 horizontally. Question:ill looking for a times table chart with all the timetable or up to tenth times table that i can print off Question:First of all I'm doing an A-level in mathematics right now so I know there's a lot more to maths than arithmetic. But anyway I still love it and am going to compete in the mental calculation world cup in 2008 or 2010 (depending on how good I get) (see: http://www.recordholders.org/en/events/worldcup/index.html). Right now (on average) it takes me 3.5 seconds to do a 2 by 2 in my head (e.g. 86 *67). This is nowhere near fast enough. I need to learn the 100 times tables to take the lead but have no idea how to. Any strategies will be welcome. No ExO11287, I know mental arithmetic has little to do with intelligence it's just a mind sport and i want to get one step ahead of my soon to be competitors. Just wanted some help from you guys because I can't remember how I learnt my 10 times tables. Answers:You probably mean that you would like to know all the times tables up 100. A good starting point is to learn the tables of squares up to 100 X 100 This will mean that if you knew what , say 18 x 18 is ( i.e. 324), then you can work out what 17 X19 is (its one less than 324, or 323) also 16 X 20 (trivial i know but it is 324 minus 4) or 320 and 15 X 21 ( which = 324-9) or 315 The pattern is, if you have a product of two numbers that are either side of a know square product ,then because the algebra of this is :- (x-1)(x+1) is identical to (x^2-1) i.e expanding the brackets, x^2 -x +x -1 which is x^2 -1. and (x-2)(x+2) is x^2-4, and (x-3)(x+3) is X^2 -9 and so on. I would advise getting some books on number manipulation. I have many puzzle books and mental arithmatic books. If you would like to know some titles, please e mail me. Answers:Try: http://www.grammarcard.com/tables.html From Youtube Learn Times Tables - 100% Instant Recall :How to learn times tables, Learning times tables, The best and easiest method that there is, 100% Instant Recall Method, Your children will know the times tables as well as they know their own names, Learn Times Tables :Learning the Times Tables is hugely important for your childrens future. The Learning Well Instant Recall method takes away the struggle of learning and remembering Multiplication Tables and does this in a very simple, easy, and fun way that achieves 100% Instant Recall. The method boosts learning, increases ability to remember, and results in a greatly improved performance by your children. Your children will know the times tables and will be able to answer any question put to them in an instant, the method is that good. Goto www.timestablesmaths.com
{"url":"http://www.edurite.com/kbase/100-times-table-chart","timestamp":"2014-04-17T15:29:29Z","content_type":null,"content_length":"87006","record_id":"<urn:uuid:0caa23b0-1a92-4cc6-952e-d3ea14e8d226>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Use Moving average convergence divergence to take trading decisions The rate of change (ROC) is calculated as the difference between two price points, and, hence, is slightly volatile. To make the indicator line smoother, technical analysts use two moving averages, wherein the lines are already smooth. The best example of this is the moving average convergence divergence (MACD). As the name suggests, the MACD tracks the convergence and divergence between two moving averages. Convergence takes place when these moving averages come close to each other, and divergence, when they move away from each other.
{"url":"http://articles.economictimes.indiatimes.com/2013-03-25/news/38010524_1_zero-line-crossover-momentum","timestamp":"2014-04-21T12:41:09Z","content_type":null,"content_length":"44410","record_id":"<urn:uuid:01fcb10d-2d74-4840-8aa3-e06e069f941e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Please note: This is NOT the most current catalog. Mathematics, Statistics, and Computer Science Chair, 2007-08: Steven McKelvey, operations research, wildlife modeling Faculty, 2007-08: Richard J. Allen, logic programming, intelligent tutoring systems; Peder A. Bolstad, precalculus, graph theory; Richard A. Brown, reliable real time systems, pedagogical software techniques; Clifton E. Corzatt, number theory, combinatorics; Jill M. Dietz, algebraic topology; Kristina Garrett, combinatorics; Rosemary Gundacker, mathematics education; Olaf Hall-Holt, computer graphics, computational geometry; Bruce H. Hanson, complex analysis; Paul D. Humke, real analysis, dynamical systems; Christine Kohnen (MSCS), statistics and Bayesian methods; Julie Legler, biostatistics and latent variable modeling; Urmila Malvadkar, mathematical biology; Steven McKelvey, operations research, wildlife modeling; Arnold M. Ostebee, applied mathematics; Matthew Richey, mathematical physics, computational mathematics; Paul Roback, statistics; James Scott, biostatistics epidemiology; Kay E. Smith, logic, discrete mathematics; Lynn Steen, analysis, education; Eric R. Ufferman, mathematical logic, algebraic structures; Martha Tibbetts Wallace, mathematics education; Michael Weimerskirch, probability, mathematics education; Katherine Ziegler-Graham (MSCS), biostatistics; Paul Zorn, complex analysis, mathematical exposition The Department of Mathematics, Statistics, and Computer Science offers programs in all three disciplines, including majors in mathematics and computer science and a concentration in statistics. For more information on each program, see the separate listings under COMPUTER SCIENCE, MATHEMATICS, and STATISTICS.
{"url":"http://www.stolaf.edu/catalog/0708/academicprogram/mscs.html","timestamp":"2014-04-17T04:31:03Z","content_type":null,"content_length":"7497","record_id":"<urn:uuid:8c04878f-2887-4b81-b70d-cc4321b28ee4>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Applied Mechanics From inside the book 31 pages matching zero in this book Where's the rest of this book? Results 1-3 of 31 What people are saying - Write a review We haven't found any reviews in the usual places. Related books Force Vector Quantities 6 Composition and Resolution of Vectors 12 Equilibrium of Concurrent Coplanar Forces 24 17 other sections not shown Common terms and phrases algebraic sum angle angular acceleration angular velocity applied forces axis block body moving center of gravity clockwise coefficient of friction coefficient of restitution coplanar forces cord couple curve determine direction displacement equal and opposite equation equilibrium feet per second Find the center Find the resultant force F force of friction forces acting formula free body ft per sec given forces Hence impact inches inclined plane kinetic energy law of sines lb-ft line of action magnitude mass method miles per hour moments of inertia momentum moving body Newton's Newton's second law particle perpendicular distance plate pounds Problem pull pulley radians radius reaction rest revolutions per minute right-angle components rope rotation sec2 second per second shown in Fig single force Solve exercise space forces speed surface system of forces Taking moments three forces vector quantity weight wheel whence zero Bibliographic information Force Vector Quantities 6 Composition and Resolution of Vectors 12
{"url":"http://books.google.com/books?id=jsQ0AAAAMAAJ&q=zero&dq=related:UOM39015030768678&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-21T15:24:54Z","content_type":null,"content_length":"106799","record_id":"<urn:uuid:5433d7b9-1d89-4c76-81ec-1ad36814089e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
[whatwg] Unicode mappings for &lang; and &rang; Øistein E. Andersen html5 at xn--istein-9xa.com Sun Jul 1 12:06:31 PDT 2007 HTML5 currently maps &lang; and &rang; to U+3008 LEFT ANGLE BRACKET, U+3009 RIGHT ANGLE BRACKET, both belonging to `CJK angle brackets' in U+3000--U+303F CJK Symbols and Puntuation. HTML 4.01 maps them to U+2329 LEFT-POINTING ANGLE BRACKET, U+232A RIGHT-POINTING ANGLE BRACKET from `Angle brackets' in the range U+2300--U+23FF Miscellaneous Technical. Unicode 5.0 notes: > These are discouraged for mathematical use because of their > canonical equivalence to CJK punctuation. It would probably be better to use U+27E8 MATHEMATICAL LEFT ANGLE BRACKET, U+27E9 MATHEMATICAL RIGHT ANGLE BRACKET from `Mathematical brackets' in U+27C0--U+27EF Miscellaneous Mathematical Symbols-A, characters that did not yet exist when HTML 4.01 was published. This approach is suggested by > 27E8; lang; ISOTECH; ** # &#10216; MATHEMATICAL LEFT ANGLE BRACKET > 27E9; rang; ISOTECH; ** # &#10217; MATHEMATICAL RIGHT ANGLE BRACKET Moreover, the (few) browsers I have tested render &lang/&rang, &#x2329/&#x232a and &#x27e8/&#x27e9 identically or simalarly (as "<"/">" in approximative ASCII), whereas &#x3008/&#x3009 are rendered as full-width East-Asian characters (" <"/" >"). Øistein E. Andersen More information about the whatwg mailing list
{"url":"http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2007-July/012108.html","timestamp":"2014-04-20T05:44:02Z","content_type":null,"content_length":"4125","record_id":"<urn:uuid:cab4b3b6-a013-4816-bccf-046aa294b9b8>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Pray at the cross... Pray at the cross... Hey all I was wondering if anyone of you knew how to calculate the angle of a cross brace... Say you have a 2x4 and you want to put is at an angle between 2 joits that are at x" ... u'd have to calculate the angle. I am sure in the old days the carpenters must have had a tool to do so... or a formula. I made myself a jig that I use to trace the angle but cannot figure out how to calculate it! thanks in advance! Re: Pray at the cross... ?? X 1.414= we use this in plumbing. Re: Pray at the cross... That is from the angle formula for a 45. Square root of 2. I'm not sure but it sounds like he wants a more generic formula that would work with different angles. I don't think you'll have one formula that will do the job, but if you know some trig or geometry you can figure it yourself. Of course, in that case he probably wouldn't be asking us Re: Pray at the cross... unfortunately, a square + b square = c square is NOT it! imagine 2 joist. draw a line from the top of one joist to the bottom of the other joist. the formula above ir right... but this is for a LINE! not take your 2x4 and try to fit it inside the joists... your formula is no longer valid. what i am trying to do refers to euclidian geometry. your 2x4 is actually 3.5 inches high and this does not jive between the joist using pythagore's formula. look at the picture i've attached... the angle of the line "A" is about 32.5 degrees. The angle of drawing #B is about 28 degrees.... even if the distance between the 2 joists are identical. anybody has an idea about how to calculate this? Attached Files Re: Pray at the cross... just measure it. Re: Pray at the cross... Originally posted by HornyPotter View Post unfortunately, a square + b square = c square is NOT it! cause your'e doin it wrong well then do it... calculate it! 2 joists 24" apart... try and fit INSIDE a 2x3 or a 2x4 or a 1x1 from the top of one to the bottom of the other using pythagore's theorem. good luck! you'll never make it. Re: Pray at the cross... and you call yourself an expert woodworker? Re: Pray at the cross... his name is "vince the plumber" I missed "expert" and "woodworker". BTW.......the formula is still valid. You just have to account for the thickness of the wood and not just base it on a line of zero thickness. The angles are the same. It will just require a slightly shorter piece. How much shorter depends on the angle you are working with. Using your geometry skills a second time could yield the second dimension. Based on the angle you determined and the thickness of the wood you could easily come up with how much shorter the piece will have to be than a line of a certain length but no thickness. Re: Pray at the cross... Is this something you need to do a few times or a process you need to be able to repeat over and over? I think what you need to do is a combination of calculation and trial and error. Re: Pray at the cross... Carpenters use a framing square. Assuming your joist is 9& 1/4 inches high and14& 1/2 inches between joist (16" on center) lay the square on the 2"x 4' and hold the square at 9 1/4 and 14 1/2 and mark on the 9 1/4 mark (or side) Works whatever dimensions you have. Re: Pray at the cross... on A Good framing square there are rafter tables, which would work in your instance, and there is also a brace table on the better ones, that will tell you the length of knee type braces, on figuring a rafter on can just use the square and stair step it to lay it out or do the math off the chart and calculate the length, and then one uses the rise by 12" of run to get the angles, check out the pages beyond the URL there is a few pages on rafters and then one on the brace scale and some others that you may find useful, this s another on rafters there is some links at the bottom of the second URL that you may find of some help as well, Last edited by BHD; 04-18-2009, 05:19 PM. Push sticks/blocks Save Fingers "The true measure of a man is how he treats someone who can do him absolutely no good." attributed to Samuel Johnson PUBLIC NOTICE: Due to recent budget cuts, the rising cost of electricity, gas, and oil...plus the current state of the economy............the light at the end of the tunnel, has been turned off. Re: Pray at the cross... Originally posted by rofl View Post his name is "vince the plumber" I missed "expert" and "woodworker". BTW.......the formula is still valid. You just have to account for the thickness of the wood and not just base it on a line of zero thickness. The angles are the same. It will just require a slightly shorter piece. How much shorter depends on the angle you are working with. Using your geometry skills a second time could yield the second dimension. Based on the angle you determined and the thickness of the wood you could easily come up with how much shorter the piece will have to be than a line of a certain length but no thickness. wrong! look at the drawing i've enclosed in a prebious post... the angles are different! Re: Pray at the cross... Originally posted by BHD View Post on A Good framing square there are rafter tables, which would work in your instance, and there is also a brace table on the better ones, that will tell you the length of knee type braces, on figuring a rafter on can just use the square and stair step it to lay it out or do the math off the chart and calculate the length, and then one uses the rise by 12" of run to get the angles, check out the pages beyond the URL there is a few pages on rafters and then one on the brace scale and some others that you may find useful, this s another on rafters there is some links at the bottom of the second URL that you may find of some help as well, hey! i'll look at these links! it looks promissing! many thanks! i'll get back to you after analyzing them!
{"url":"https://www.ridgidforum.com/forum/power-tools/woodworking-discussion-forum/25623-pray-at-the-cross?t=24954","timestamp":"2014-04-25T00:55:53Z","content_type":null,"content_length":"144602","record_id":"<urn:uuid:dacb3dfd-799e-4478-acf8-3b326af22cfc>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by kristen on Wednesday, July 16, 2008 at 6:35pm. establish the identity: cot(A-B)= cotAcotB+1/cotB-cotA thanks for any help at all • trig - drwls, Wednesday, July 16, 2008 at 6:48pm Is the final term 1/(cotB-cotA) or (1/cotB)-cotA ?? Is the numerator term (cotA cotB +1) or 1 ? When you write equations like that, we can't tell what you mean. I suggest you start with the identities for cos (A-B) and sin (A-B) and take the ratio of the two. • trig - Anonymous, Saturday, May 11, 2013 at 12:02pm multiplying and dividing by sinasins then it becomes Related Questions Trig - HELP!!!! I dont know how to do the trig identity with this problem csc^4 ... trig help - verify that cos(A-B)/cosAsinB=tanA+cotB is an identity PreCalculus (PLEASE HELP, IM BEGGING!) - Can anyone please explain these ... Trig - verify the following identity: cos(A-B)/cosAsinB=tanA+cotB trig - prove the identity sec^2x times cot x minus cot x = tan x mathematics trignometry - prove that the following are identity 1. (cotA+cosecA-... Trigonometry - Establish the identity: (1 - cos^2 0)(1 + cot^2 0) = 1 No idea! trig - Match each ratio in the first row with an equivalent ratio from the ... trig - Establish the trigonometric identity (Hint: Use sum to product) cos(5x)+... trig - For each expression in column I, choose the expression from column II to ...
{"url":"http://www.jiskha.com/display.cgi?id=1216247745","timestamp":"2014-04-18T10:56:29Z","content_type":null,"content_length":"8703","record_id":"<urn:uuid:e60b00cd-2fc1-4273-8ac3-3afa8597b73d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
can this be made faster? Robert Kern robert.kern at gmail.com Mon Oct 9 00:40:41 CDT 2006 Daniel Mahler wrote: > On 10/8/06, Greg Willden <gregwillden at gmail.com> wrote: >> This next one is a little closer for the case when c is not just a bunch of >> 1's but you still have to know how the highest number in b. >> a=array([sum(c[b==0]), sum(c[b==1]), ... sum(c[b==N]) ] ) >> So it sort of depends on your ultimate goal. >> Greg >> Linux. Because rebooting is for adding hardware. > In my case all a, b, c are large with b and c being orders of > magnitude lareger than a. > b is known to contain only, but potentially any, a-indexes, reapeated > many times. > c contains arbitray floats. > essentially it is to compute class totals > as in total[class[i]] += value[i] In that case, a slight modification to Greg's suggestion will probably be fastest: import numpy as np # Set up the problem. lena = 10 lenc = 10000 a = np.zeros(lena, dtype=float) b = np.random.randint(lena, size=lenc) c = np.random.uniform(size=lenc) idx = np.arange(lena, dtype=int)[:, np.newaxis] mask = (b == idx) for i in range(lena): a[i] = c[b[i]].sum() Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco Take Surveys. Earn Cash. Influence the Future of IT Join SourceForge.net's Techsay panel and you'll get the chance to share your opinions on IT & business topics through brief surveys -- and earn cash More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-October/011234.html","timestamp":"2014-04-21T10:12:21Z","content_type":null,"content_length":"4395","record_id":"<urn:uuid:faefe43d-17f1-4a20-9a98-a46f7fbd59da>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Kew Gardens Math Tutor Find a Kew Gardens Math Tutor ...I am also skilled at teaching Math, having taught and tutored Math for several years in extended day classes and in my school's computer lab. I can offer excellent study methods to help students improve both their grades and test scores. I successfully completed two speech classes while a student at Columbia University. 35 Subjects: including algebra 2, SAT math, precalculus, algebra 1 ...I have tutored students in both ESL and Spanish using the phonetics. I had the student create noises; the younger students I had them make faces, and for the older students have show them the position where the tongue and shape of the lips. I am currently a Spanish major with a concentration in linguistics. 13 Subjects: including prealgebra, English, algebra 1, algebra 2 I have enjoyed working as a private tutor and homeschool teacher (at the middle and high school levels) with New York City families over the past four years. In terms of credentials, I hold a Masters degree in Secondary Education (English) from Harvard University and am also currently pursuing a Ph... 33 Subjects: including prealgebra, algebra 1, English, Spanish ...She has had more than 55 publications including several in prominent magazines like Wine Enthusiast and Art in America and also writes promotional materials and biographical articles. As a researcher for a very prominent political strategist, she had the opportunity to contribute to two publishe... 27 Subjects: including algebra 2, SAT math, algebra 1, prealgebra ...I tutor all sections (Critical Reading, Math and Writing). I typically like to start off by giving a diagnostic exam and then evaluating which sections to focus on depending on the student. I typically work out of the College Board SAT book which is the standard book used to practice for the SAT... 9 Subjects: including algebra 1, algebra 2, geometry, prealgebra
{"url":"http://www.purplemath.com/kew_gardens_math_tutors.php","timestamp":"2014-04-17T07:53:41Z","content_type":null,"content_length":"23873","record_id":"<urn:uuid:afb943be-831f-4f01-b4b8-0844dbe7efca>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
From OpenWetWare Review of Homework #1 1. Fibonacci question □ Using recursion is good with Fibonacci if you're returning (and passing) a list. □ Solutions to the Fibonacci question. 2. Parsimony: □ Review of how to assess the most parsimonious explanation of nodes' states. □ Solution/notes on the Parsimony question. Parsimony Continued • Why parsimony? □ we can suggest infinite alternative explanations involving increasing numbers of mutations but they become less and less plausible □ this algorithm is sometimes called "model-free" because the model is implicit □ is parsimony better than just an extreme case of maximum likelihood? This is still debated. • One not-optimal way to look for the most parsimonious tree: try all four possibilities at every node. But that would require looking at 4^n possibilities! Using recursion in this problem, we can avoid that. Down-Pass Algorithm • Formalizing the idea of what you did by intuition in HW1. def downPass(tree): #pseudocode! if tree is a leaf: return seq l = downPass(left child) r = downPass(right child) i= l <intersect> r if i==None: return l <union> r return i Sankoff Downpass Algorithm 1. Consider different weights for different mutations: P(transitions) > P(transversions) □ The cost of no mutation will be zero. □ The cost of a transition we'll set to 1. □ The cost of a transversion will be 2.5 times as much as a transition. 2. Each leaf is associated with a vector of length 4 which stores the cost (so far) of that node being A, C, G or T 3. Initialize tree 4. On the leaves we want to do something different from the internal nodes, because we are 100% sure of what the sequence is; it's not a guess. □ put a score of zero for the letter that we KNOW it is □ put a score of infinite for the other three. 5. Work down from leaves to root. At each internal node, each entry in the vector will store the cost of the mostly likely sequence of mutation events required in its subtree to get to the that particular internal node state (A,C,G, or T). Calculating the cost vector at the first level of internal nodes should be straightforward. At the next level (the right child of the root node) consider the possibility that the ancestor had "A" at this position. There are four ways to get there, but we only care about which was the most likely! First consider the costs along the two branches that attach at this node: 1. The cost along the left branch is easy to find: we know its left child is C, so the cost of mutation from C to A is 2.5 2. The cost along the right branch depends on the identity of the right child. If the right child were (A,C,G,T) that would entail additional costs of mutation (0,2.5,1,2.5), respectively. 3. Additionally we have to accrue the costs of each possibility at the right child node: (1,5,1,5) 4. Therefore our four possible paths have cost: ☆ (2.5,2.5,2.5,2.5) + (0,2.5,1,2.5) + (1,5,1,5) = (3.5,10,4.5,10) 5. The path of least cost to get to an "A" at this node is 3.5. That becomes the first entry in our vector for the root's right child node. This process must be iterated for C,G,T at this node. 6. A counter-intuitive observation: we get the right answers at the root of the tree, but not at the other internal nodes. What is different about them?
{"url":"http://openwetware.org/wiki/20.181/Lecture4","timestamp":"2014-04-17T10:08:30Z","content_type":null,"content_length":"18248","record_id":"<urn:uuid:b1cbbf88-6a50-414e-bdf3-c5655a4fd0b1>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
College Algebra Tutors Lancaster, TX 75134 Highly Skilled n Versatile/Specialize in All Maths/Online or In-Person ...Get ready for NEXT year THIS SUMMER! LEARN IT before it counts, with NO pressure, and NO test! SCHEDULE TODAY! MATH, SCIENCE, WRITING ARE PRIORITIES! College-level Writing, Algebra, Geometry, Trigonometry, Precalculus, Calculus, Probability, Discrete Math, Chemistry,... Offering 10+ subjects including algebra 2
{"url":"http://www.wyzant.com/Cedar_Hill_TX_college_algebra_tutors.aspx","timestamp":"2014-04-20T01:21:12Z","content_type":null,"content_length":"59968","record_id":"<urn:uuid:b5992e81-7dac-45c3-b5dd-765886046e8d>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
On the 2nd Day of Christmas… A Customized Crock-Pot Giveaway! (winner announced) UPDATE: The winner of the customized Crock-Pot is: #2,258 – Brandy: “Veggie Soup” Congratulations, Brandy! Be sure to reply to the email you’ve been sent, and we’ll get your new Crock-Pot shipped out to you! Growing up, my mom routinely made recipes in her beloved Crock-Pot, my favorite among them her barbecued beans, which were a New Year’s Eve staple. Once I started stocking my own kitchen, a Crock-Pot was a must. Put everything together in the morning, let it cook, and then have a fabulous, warm dinner waiting for you? It’s one heck of an invention. I’ve made tons of different recipes in a Crock-Pot, everything from overnight oatmeal to pot roast and the phenomenal beer and brown sugar kielbasa on game days. Crock-Pot recently launched two new versions of their slow cookers – the NFL Crock-Pot Cook and Carry (one for each team), and the Crock-Pot Create-A-Crock, which is totally customizable with your own photos! I can’t tell you how excited I was when I realized I could rock a Steelers Crock-Pot for Sunday game day food. How much fun! I love slow cooker recipes and I love these new Crock-Pots, so I’m giving one away to a lucky reader! If you’re a football fan, you might want one with your team’s colors and logo, or you may opt for a more personalized version. Read below for details on how to enter to win! One (1) winner will receive a 6-quart customized Crock-Pot of their choosing. To enter to win, simply leave a comment on this post and answer the question: “What’s your favorite make-ahead or slow cooker recipe?” You can receive up to FIVE additional entries to win by doing the following: 1. Subscribe to Brown Eyed Baker by either RSS or email. Come back and let me know you’ve subscribed in an additional comment. 2. Follow @thebrowneyedbaker on Instagram. Come back and let me know you’ve followed in an additional comment. 3. Follow @browneyedbaker on Twitter. Come back and let me know you’ve followed in an additional comment. 4. Become a fan of Brown Eyed Baker on Facebook. Come back and let me know you became a fan in an additional comment. 5. Follow Brown Eyed Baker on Pinterest. Come back and let me know you became a fan in an additional comment. Deadline: Wednesday, December 5, 2012 at 11:59pm EST. Winner: The winner will be chosen at random using Random.org and announced at the top of this post. If the winner does not respond within 48 hours, another winner will be selected. Disclaimer: This giveaway is sponsored by Crock-Pot; all opinions are my own. Good Luck!! 2,496 Responses to “On the 2nd Day of Christmas… A Customized Crock-Pot Giveaway! (winner announced)” 2. spare ribs,potatoes, and sauerktaut 3. My favorite is my mom’s slow cooker recipe for white bean chicken chili. 4. Umm I love my mom’s roast, so delicious!! 5. I am subscribed to Brown Eyed Baker by RSS and email! 6. I follow @thebrowneyedbaker on Instagram! 7. I follow @browneyedbaker on Twitter! 8. I am a fan of Brown Eyed Baker on Facebook! 9. I follow Brown Eyed Baker on Pinterest! 11. Creamy Chicken and Wild Rice Soup, my family loves it. 12. I follow you on Pinterest 15. Carrots, onions, cabbage, and corned-beef. 16. French dips in the crock pot! 17. I follow you on Facebook. 18. I subscribe to your emails 21. I follow you on pinterest 22. Pot roast…simple, but one of my favorite meals ever! 23. I like pulled pork, ham barbeque or cabbage rolls! 25. Indian rice & meat dish called Biryani..i make it 1 day ahead…and the dish tastes even better the next day. 26. I follow you on Pinterest. 27. I follow you on facebook. 28. My favorite crock pot meal is barbecued pulled pork! 30. I follow you on pinterest 35. I am a fan of yours on facebook.. 36. I follow you on Pinterest.. keep those great recipes coming.. 38. My favourite slow-cooker recipe is vegetarian chilli. 39. I follow @browneyedbaker on twitter! 42. I follow you on pinterest! 45. My favorite make-ahead recipe is deer chili!!! YUMMO!!! 46. Love to make Mexican chicken in my crockpot. Add chicken thighs, taco seasoning and a little olive oil. Cook on low for 6-8 hours. Shred chicken with 2 forks. Great for tacos, nachos or 47. My favorite slow cooker meal is lima beans and smoked neck bones! 48. I follow you on Facebook! 49. My favorite Slow cooker recipe would have to be Italian beef 55. My favorite recipe to cook is a beef roast. 57. I already follow you on pinterest! 58. It’s a toss up between chili and meatballs! 59. I follow you on facebook! 60. I follow you on instagram! 61. I also follow you on pinterest! 62. NFL Crockpots. How cool! The only thing cooler would be NCAA! 63. I subscribe to your emails. 66. Whoops! I got so excited over the designs, I forgot to tell you that my favorite recipe in the Crock pot is chicken taco soup. 67. I tried to post earlier, not sure if it went through or not…but I follow you on twitter! 68. I also like your facebook page!! 70. I love stew in the crock pot! 72. I’ve liked you on Facebook. 73. I follow you on Pinterest. 74. I make oatmeal in mine all the time. Also chili. 75. Saladgoddess following you on Twitter. 80. I am following you via RSS. 81. I am following you on Instagram. 82. I am following you on Pinterest. 83. I’m a fan and did all of it! 84. Kalua pork and cabbage, yum. 85. My fav is pot roast! then i have rice set on a timer, I come home from work, and BAM – pot roast on top of rice. Major comfort food. 87. i’m a pinterest follower. thanks 88. chili from a crock pot sounds good. thnaks. 89. you are on my rss reader. thanks 90. and finally, you are liked on my facebook . thanks 91. Nothing beats some good ole Chili, a big ole pot of that goodness, yum yum yum! 92. I follow you on Twitter (@cookie_counter) 94. I follow you on Pinterest, love it!
{"url":"http://www.browneyedbaker.com/2012/12/04/on-the-2nd-day-of-christmas-a-customized-crock-pot-giveaway/comment-page-25/","timestamp":"2014-04-19T02:19:31Z","content_type":null,"content_length":"113137","record_id":"<urn:uuid:74c51aa9-0d3c-4c64-b307-2ee889519350>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
equilibrium point hypothesis KP-PC k.p.collins at worldnet.att.net%remove% Wed May 7 16:27:11 EST 2003 "siu99rnj" <siu99rnj at rdg.ac.uk> wrote in message news:b9bqdc$9m2$1 at vins1.reading.ac.uk... | Hello | What is the equilibrium point hypothesis? It is coming up in a lot | papers I have read that are to do with arm torques. | Thanks | Richard In the work I've done, it's the dynamic condition of of 'momentarily' maximized TD E/I-minimization. Think of this in terms of a 2-D least-potential-energy diagram - like a "U". Maximum TD E/I-minimization occurs at the lowest point in the "U". This correlates to an "equilibrium point" because the dynamics tend to overshoot max TD E/I-minimization, which results in their 'moving back' in the other direction, and the overall illusion of there being a 'determining oscillation', but the 'oscillation' is just an artifact of the ongoing TD E/I-minimization dynamics. An important feature of TD E/I-minimization is that all minima that are converged upon are themselves dynamic minima - not actual 'equilibrium points', but fleetingly-extant 'tools' within much larger information-processing dynamics - hierarchical TD E/I-minimization, which dynamically converges upon many TD E/I-minimizations within TD E/I-minimizations, ... . There's =nothing= at 'equilibrium' within an Living 'normal' nervous system. All there is within such is relative TD E/I-minimization which is never complete, and which ceases only at Death. I like to think that Isaac Newton intuited some of this whenever I contemplate his "never at rest" metaphor :-] K. P. Collins "Schmitd! Schmitd! Ve vill build a Shapel!" More information about the Neur-sci mailing list
{"url":"http://www.bio.net/bionet/mm/neur-sci/2003-May/054377.html","timestamp":"2014-04-18T00:59:19Z","content_type":null,"content_length":"4010","record_id":"<urn:uuid:caa109a2-5560-4b82-a814-07f907cf510c>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Orland Park Precalculus Tutor Find an Orland Park Precalculus Tutor ...A school in which we continually outperformed out public school counterpart year after year. I currently work in the Mathematics Department for one of the most recognized scholastic public secondary districts in the State of Illinois. I believe anyone can learn math... it just takes the right person to open their eyes to it! 20 Subjects: including precalculus, physics, SAT math, trigonometry ...Therefore, I have been working with it for 7 years. I have actually had to write a program that helps better sort spam. I grew up in the era of computers. 16 Subjects: including precalculus, calculus, algebra 1, ACT Math ...Have tutored many college and high school students in statistics, finance or algebra. Business Career: Worked as an actuary for more than 30 years. Made frequent use of statistical and financial concepts in monitoring company mortality and lapse experience and in pricing policies. 13 Subjects: including precalculus, statistics, geometry, accounting ...Scored a 28 and 30 on the ACTs with a 35 in mathematics. I have my master's degree in mechanical engineering. Physics, chemistry, anatomy, and biology were all part of my curriculum to get where I am today. 20 Subjects: including precalculus, physics, calculus, algebra 1 ...I continued applying geometric skills in high school, where I was a straight A student and completed calculus as a junior. Finally, I tutored math through college to stay fresh, and would be able to work with any student needing assistance at whatever point in their development they encounter geometry. I was an advanced math student, completing the equivalent of pre-algebra in 6th 13 Subjects: including precalculus, calculus, statistics, geometry
{"url":"http://www.purplemath.com/orland_park_precalculus_tutors.php","timestamp":"2014-04-21T07:29:12Z","content_type":null,"content_length":"24060","record_id":"<urn:uuid:5172b9d0-2774-421c-89ab-32a94bbaf4c8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: METHOD FOR GENERATING A FRACTURED RESERVOIR MESH WITH A LIMITED NUMBER OF NODES IN THE MATRIX MEDIUM Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A method for optimizing the development of a fluid reservoir using a fractured medium mesh generated from a first-order balanced octree technique is disclosed. A mesh of a discrete fracture network is generated by defining a set of cells for each fracture. A mesh of the matrix medium is then generated by dividing each cell by an octree technique, wherein a cell is divided into eight cells. The cells resulting from the division are themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold. Transmissivities between the cells of the fracture mesh, transmissivities between the cells of the matrix medium mesh, and transmissivities between cells of the fracture mesh and cells of the matrix medium mesh are then determined. Finally, the cells and the transmissivities are used for generating an image of the fluid reservoir from which the development of the fluid reservoir is optimized. A method for optimizing the development of a fluid reservoir traversed by a fracture network, wherein the reservoir is discretized into a set of cells, each cell comprising a matrix medium and a fracture medium, and a discrete fracture network is constructed in each cell from observations of the reservoir, comprising: generating a mesh of the discrete fracture network by defining a set of cells for each fracture; generating a mesh of the matrix medium by dividing each cell by an octree technique, wherein a cell is divided into eight cells and cells resulting from a division being themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold; determining transmissivities between the cells of the mesh of the discrete fracture network, transmissivities between the cells of the mesh of the matrix medium, and transmissivities between cells of the mesh of the discrete fracture medium and cells of the mesh of the matrix medium; using the cells and the transmissivities for generating an image of the fluid reservoir; and using the image of the fluid reservoir and a flow simulator implemented with a computer for optimizing the development of the fluid reservoir. A method as claimed in claim 10, wherein the mesh of the matrix medium is generated by a first-order balanced octree technique. A method as claimed in claim 10, wherein the cells resulting from a division are themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold, or until a number of cells is below or equal to a given threshold. A method as claimed in claim 11, wherein the cells resulting from a division are themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold, or until a number of cells is below or equal to a given threshold. A method as claimed in claim 10, wherein the transmissivities between the cells of the mesh of the matrix medium and the fracture medium are determined as a function of a volume of influence Ω of a fracture node i defined as all of the points of a cell of the mesh of the matrix medium whose closest fracture node is fracture node i, and as a function of an average distance l to fracture node i on Ω , while assuming that no flow circulates between the volumes of influence. A method as claimed in claim 11, wherein the transmissivities between the cells of the mesh of the matrix medium and the fracture medium are determined as a function of a volume of influence Ω of a fracture node i defined as all of the points of a cell of the mesh of the matrix medium whose closest fracture node is fracture node i, and as a function of an average distance l to fracture node i on Ω , while assuming that no flow circulates between the volumes of influence. A method as claimed in claim 12, wherein the transmissivities between the cells of the mesh of the matrix medium and the fracture medium are determined as a function of a volume of influence Ω of a fracture node i defined as all of the points of a cell of the mesh of the matrix medium whose closest fracture node is fracture node i, and as a function of an average distance l to fracture node i on Ω , while assuming that no flow circulates between the volumes of influence. A method as claimed in claim 13, wherein the transmissivities between the cells of the mesh of the matrix medium and the fracture medium are determined as a function of a volume of influence Ω of a fracture node i defined as all of the points of a cell of the mesh of the matrix medium whose closest fracture node is fracture node i, and as a function of an average distance l to fracture node i on Ω , while assuming that no flow circulates between the volumes of influence. A method as claimed in claim 10, wherein each fracture is represented within the discrete fracture network by a polygonal finite plane which is isotropic regarding dynamic properties thereof and the plane can comprise at least one intersection segment corresponding to an intersection between a fracture and another fracture of the network and comprising: a) constructing a Voronoi diagram on each fracture plane by positioning Voronoi cell centers on the at least one intersection segment; and b) calculating transmissivities between neighboring Voronoi cell centers from a ratio of surface area of neighboring cells to a distance between the neighboring cells. A method as claimed in claim 11, wherein each fracture is represented within the discrete fracture network by a polygonal finite plane which is isotropic regarding dynamic properties thereof and the plane can comprise at least one intersection segment corresponding to an intersection between a fracture and another fracture of the network and comprising: a) constructing a Voronoi diagram on each fracture plane by positioning Voronoi cell centers on the at least one intersection segment; and b) calculating transmissivities between neighboring Voronoi cell centers from a ratio of surface area of neighboring cells to a distance between the neighboring cells. A method as claimed in claim 12, wherein each fracture is represented within the discrete fracture network by a polygonal finite plane which is isotropic regarding dynamic properties thereof and the plane can comprise at least one intersection segment corresponding to an intersection between a fracture and another fracture of the network and comprising: a) constructing a Voronoi diagram on each fracture plane by positioning Voronoi cell centers on the at least one intersection segment; and b) calculating transmissivities between neighboring Voronoi cell centers from a ratio of surface area of neighboring cells to a distance between the neighboring cells. A method as claimed in claim 13, wherein each fracture is represented within the discrete fracture network by a polygonal finite plane which is isotropic regarding dynamic properties thereof and the plane can comprise at least one intersection segment corresponding to an intersection between a fracture and another fracture of the network and comprising: a) constructing a Voronoi diagram on each fracture plane by positioning Voronoi cell centers on the at least one intersection segment; and b) calculating transmissivities between neighboring Voronoi cell centers from a ratio of surface area of neighboring cells to a distance between the neighboring cells. A method as claimed in claim 14, wherein each fracture is represented within the discrete fracture network by a polygonal finite plane which is isotropic regarding dynamic properties thereof and the plane can comprise at least one intersection segment corresponding to an intersection between a fracture and another fracture of the network and comprising: a) constructing a Voronoi diagram on each fracture plane by positioning Voronoi cell centers on the at least one intersection segment; and b) calculating transmissivities between neighboring Voronoi cell centers from a ratio of surface area of neighboring cells to a distance between the neighboring cells. A method as claimed in claim 15, wherein each fracture is represented within the discrete fracture network by a polygonal finite plane which is isotropic regarding dynamic properties thereof and the plane can comprise at least one intersection segment corresponding to an intersection between a fracture and another fracture of the network and comprising: a) constructing a Voronoi diagram on each fracture plane by positioning Voronoi cell centers on the at least one intersection segment; and b) calculating transmissivities between neighboring Voronoi cell centers from a ratio of surface area of neighboring cells to a distance between the neighboring cells. A method as claimed in claim 16, wherein each fracture is represented within the discrete fracture network by a polygonal finite plane which is isotropic regarding dynamic properties thereof and the plane can comprise at least one intersection segment corresponding to an intersection between a fracture and another fracture of the network and comprising: a) constructing a Voronoi diagram on each fracture plane by positioning Voronoi cell centers on the at least one intersection segment; and b) calculating transmissivities between neighboring Voronoi cell centers from a ratio of surface area of neighboring cells to a distance between the neighboring cells. A method as claimed in claim 17, wherein each fracture is represented within the discrete fracture network by a polygonal finite plane which is isotropic regarding dynamic properties thereof and the plane can comprise at least one intersection segment corresponding to an intersection between a fracture and another fracture of the network and comprising: a) constructing a Voronoi diagram on each fracture plane by positioning Voronoi cell centers on the at least one intersection segment; and b) calculating transmissivities between neighboring Voronoi cell centers from a ratio of surface area of neighboring cells to a distance between the neighboring cells. A method as claimed in claim 18, wherein the Voronoi cell centers are positioned on the at least one intersection segment when each intersection segment is at least discretized by two intermediate Voronoi nodes at each end of the segment and refined in a case of close segments and intersecting segments. A method as claimed in claim 19, wherein the Voronoi cell centers are positioned on the at least one intersection segment when each intersection segment is at least discretized by two intermediate Voronoi nodes at each end of the segment and is refined for close segments and intersecting segments. A method as claimed in claim 20, wherein the Voronoi cell centers are positioned on the at least one intersection segment when each intersection segment is at least discretized by two intermediate Voronoi nodes at each end of the segment and is refined for close segments and intersecting segments. A method as claimed in claim 21, wherein the Voronoi cell centers are positioned on the at least one intersection segment when each intersection segment is at least discretized by two intermediate Voronoi nodes at each end of the segment and is refined for close segments and intersecting segments. A method as claimed in claim 22, wherein the Voronoi cell centers are positioned on the at least one intersection segment when each intersection segment is at least discretized by two intermediate Voronoi nodes at each end of the segment and is refined for close segments and intersecting segments. A method as claimed in claim 23, wherein the Voronoi cell centers are positioned on the at least one intersection segment when each intersection segment is at least discretized by two intermediate Voronoi nodes at each end of the segment and is refined for close segments and intersecting segments. A method as claimed in claim 24, wherein the Voronoi cell centers are positioned on the at least one intersection segment when each intersection segment is at least discretized by two intermediate Voronoi nodes at each end of the segment and is refined for close segments and intersecting segments. A method as claimed in claim 25, wherein the Voronoi cell centers are positioned on the at least one intersection segment when each intersection segment is at least discretized by two intermediate Voronoi nodes at each end of the segment and is refined for close segments and intersecting segments. A method as claimed in claim 18, wherein a number of cells is limited by assembling the Voronoi cells belonging to a same intersection segment, as long as the cell resulting from the assembly remains A method as claimed in claim 26, wherein a number of cells is limited by assembling the Voronoi cells belonging to a same intersection segment, as long as the cell resulting from the assembly remains A method as claimed in claim 34, wherein transmissivity between two cells resulting from an assembly is equal to a sum of the transmissivities via boundaries between them according to flow A method as claimed in claim 35, wherein transmissivity between two cells resulting from an assembly is equal to a sum of the transmissivities via boundaries between the two cells according to flow A method as claimed in claim 18, wherein the at least one intersection segment is determined by an octree technique wherein a cell is divided into eight cells, the cells resulting from a division being themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold. A method as claimed in claim 26, wherein the at least one intersection segment is determined by an octree technique wherein a cell is divided into eight cells, the cells resulting from a division being themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold. A method as claimed in claim 34, wherein the at least one intersection segment is determined by an octree technique wherein a cell is divided into eight cells, the cells resulting from a division being themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold. A method as claimed in claim 35, wherein the at least one intersection segment is determined by an octree technique wherein a cell is divided into eight cells, the cells resulting from a division being themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold. A method as claimed in claim 36, wherein the at least one intersection segment is determined by an octree technique wherein a cell is divided into eight cells, the cells resulting from a division being themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold. A method as claimed in claim 37, wherein the at least one intersection segment is determined by an octree technique wherein a cell is divided into eight cells, the cells resulting from a division being themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold. CROSS REFERENCE TO RELATED APPLICATION [0001] Reference is made to French application Ser. No. 11/03.112, filed Oct. 12, 2011, which application is incorporated herein by reference in its entirety. BACKGROUND OF THE INVENTION [0002] 1. Field of the Invention The present invention relates to the development of underground reservoirs, such as hydrocarbon reservoirs comprising a fracture network and in particular, relates to a method for generating a mesh of the matrix medium and generating an image of the reservoir. The invention also relates to a method using the image to optimize management of the development through a prediction of the fluid flows likely to occur through the medium to simulate hydrocarbon production according to various production scenarios. 2. Description of the Prior Art The petroleum industry, and more precisely reservoir exploration and development, notably of petroleum reservoirs, requires knowledge of the underground geology as precise as possible to efficiently provide evaluation of reserves, production modelling or development management. Indeed, determining the location of a production well, an injection well, the drilling mud composition, the completion characteristics, selecting a hydrocarbon recovery method (such as waterflooding for example) and the parameters required for implementing the method (such as injection pressure, production flow rate, etc.) requires good knowledge of the reservoir. Knowing the reservoir notably means knowing the petrophysical properties of the subsoil at any point in space and being able to predict the flows likely to occur therein. The petroleum industry has therefore combined for a long time field (in-situ) measurements with experimental modelling (performed in the laboratory) and/or numerical modelling (using softwares). Petroleum reservoir modelling thus is a technical stage that is essential for any reservoir exploration or development procedure. The goal of such modelling is to provide a description of the Fractured reservoirs are an extreme type of heterogeneous reservoirs comprising two very different media, a matrix medium containing the major part of the oil in place and having a low permeability, and a fractured medium representing less than 1% of the oil in place and which is highly conductive. The fractured medium itself can be complex, with different sets of fractures characterized by their respective density, length, orientation, inclination and opening. The fractured medium is made up of all of the fractures. The matrix medium is made up of the rock around the fractured medium. Those in charge of the development of fractured reservoirs need to know as precisely as possible the role of fractures. What is referred to as a "fracture" is a plane discontinuity of very small thickness in relation to the extent thereof, representing a rupture plane of a rock of the reservoir. On the one hand, knowledge of the distribution and of the behavior of these fractures allows optimization of the location and the spacing between wells to be drilled through the oil-bearing reservoir. On the other hand, the geometry of the fracture network conditions the fluid displacement, on the reservoir scale as well as the local scale, where it determines elementary matrix blocks in which the oil is trapped. Knowing the distribution of the fractures is therefore also very helpful at a later stage to reservoir engineers who want to calibrate the models they construct to simulate the reservoirs, in order to reproduce or to predict the past or future production curves. Geoscientists therefore have three-dimensional images of reservoirs allowing location of a large number of fractures. Thus, in order to reproduce or to predict (i.e. "simulate") the production of hydrocarbons when starting production of a reservoir according to a given production scenario (characterized by the position of the wells, the recovery method, etc.), reservoir engineers use a computing software referred to as reservoir simulator (or flow simulator) that calculates the flows and the evolution of the pressures within the reservoir represented by the reservoir model (image of the reservoir). The results of these computations enable prediction and optimization of the reservoir in terms of flow rate and/or of amount of hydrocarbons recovered. Calculation of the reservoir behavior according to a given production scenario constitutes a "reservoir simulation". A mesh of the matrix medium (rock) and a mesh of the fractured medium have to be generated in order to carry out simulations of flows around a well or on the scale of some reservoir cells (˜km ), by taking into account a geologically representative discrete network of fractures. It has to be suited to the geometric (three-dimensional diffuse faults and fractures location) and dynamic heterogeneities since the fractured medium is often much more fluid conducting than the matrix medium. These simulation zones, when fractured, can comprise up to one million fractures whose size ranges from one to several hundred meters, with very variable geometries of dip, azimuth and shape. This stage is very useful for hydrodynamic calibration of the fracture models. Indeed, the discontinuity of the hydraulic properties (dominant permeability in the fractures and storage capacity in the matrix) often leads to use the double medium approach (homogenized properties) on reservoir models (hectometric cell). These models are based on the assumption that the representative elementary volume (REV) is reached in the cell and that the medium fracturation is high enough to allow homogenization methods to be applied (stochastic fracturation periodicity for example). Within the context of petroleum reservoir development, discrete flow simulation methods are used, notably for permeability scaling (scale of a reservoir cell) and for dynamic tests (a small zone of interest (ZOI) in relation to the size of the reservoir). The computation times appear to be essential since computation is often carried out sequentially and a large number of times in optimization loops. It is well known that, in the case of dense fracturation (highly connected fractures), analytical methods are applicable whereas, in case of low connectivity indices, numerical simulation on a discrete fracture network (DFN) is necessary. The numerical model of the matrix of the transmissivities relative to the various objects (diffuse faults, matrix medium cells), has to meet a certain number of double-medium criteria: be applicable to a large number of fractures (several thousand fractures); restitute the connectivity of the fracture network; be evolutionary to account for all the fracture models (several fracturation scales, 3D disordered fractures, sealed faults, time-dependent dynamic properties, etc.); the shape of the fractures (any plane convex polygon or ellipse) and the intersection heterogeneities between the 3D fractures have to be taken into account upon plane meshing of each fracture; model the evolution of the pressure field in the "matrix" medium (less conductive and containing more fluid) as a function of the pressure field in the fractures by means of transmissivity terms (matrix/fracture exchange); the number of simulation nodes, that is the number of pressure unknowns, has to be restricted to be able to use a numerical solver (what is referred to as a node is a volume element of the fracture or matrix medium of unknown pressure value); and be fast and memory efficient (usable on a usual set and not only on a supercomputer). With such needs, conventional meshes (finite element or finite volume) and the methods derived therefrom for local transmissivity construction are not applicable due to too large a number of nodes required for simulation. The technique implemented in the FracaFlow® software (IFP Energies nouvelles, France), which allows these limits to be exceeded using a physical approach based on analytical solutions of plane flow problems, is also known. The fractures are assumed to be constrained by geological beds (they entirely traverse them) and sub-vertical. A fracture is referred to as a constrained bed if it stops on geological bed changes. According to these hypotheses, all the intersections occur on any intermediate plane parallel to the geological layers. In the median plane of each geological bed, the nodes are on the intersections (a point) of the fractures on the plane (a matrix node and a fracture node in the same place). The vertical connections are carried by the fractures traversing several layers and the volumes associated with the nodes are calculated as all of the points (in the 2D plane, propagated vertically to the thickness of the layer) that are the closest to the node (in the medium This method reaches its limit when the fractures are not bed constrained and/or the dip of the fractures is not vertical. The intersections are in fact no longer present in each intermediate plane and the vertical connectivity can no longer be met. By increasing the number of intermediate planes, the error can be reduced (without ever being exact in the case of sub-horizontal fractures), but the number of nodes increases considerably and exceeds the limits imposed by the solvers. SUMMARY OF THE INVENTION [0023] In general terms, the invention relates to a method for optimizing the development of a fluid reservoir traversed by a fracture network, wherein said reservoir is discretized into a set of cells, each cell comprising a matrix medium and a fracture medium, and a discrete fracture network (DFN) is constructed in each cell from observations of said reservoir. The method comprises the following generating a mesh of the discrete fracture network by defining a set of cells for each fracture; generating a mesh of the matrix medium by dividing each cell of an octree technique, wherein a cell is divided into eight cells, the cells resulting from a division being themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold; determining transmissivities between the cells of the fracture mesh, transmissivities between the cells of the matrix medium mesh, and transmissivities between cells of the fracture mesh and cells of the matrix medium mesh; using the cells and the transmissivities for generating an image of the fluid reservoir; and using the image of the fluid reservoir and a flow simulator implemented with a computer for optimizing the development of the fluid reservoir. According to the invention, the mesh of the matrix medium can be generated by a first-order balanced octree technique. According to an embodiment, the cells resulting from a division are themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold, or until the number of cells is below or equal to a given threshold. According to the invention, the transmissivities between the cells of the matrix medium mesh and the fracture medium can be determined as a function of a volume of influence of a fracture node i, Ω , defined as all of the points of a cell of the matrix medium mesh whose closest fracture node is fracture node i, and as a function of a average distance l to fracture node i on Ω , and assuming that no flow circulates between these volumes of influence. According to an embodiment, each fracture is represented within the discrete fracture network by a polygonal finite plane, which is isotropic regarding its dynamic properties, and the plane can comprise at least one intersection segment corresponding to an intersection between the fracture and another fracture of the network, and the following stages are carried out: a) constructing a Voronoi diagram on each fracture plane by positioning Voronoi cell centers on the intersection segments; and b) calculating transmissivities between neighboring Voronoi cell centers from a ratio of the surface area of the neighboring cells to the distance between the neighboring cells. According to this embodiment, the Voronoi cell centers can be positioned on the intersection segments by applying the following rule: each intersection segment is at least discretized by two intermediate Voronoi nodes at each end of the segment and refined in the case of close segments and intersecting segments. The number of cells can also be limited by assembling the Voronoi cells belonging to the same intersection segment, as long as the cell resulting from the assembly remains connected. The transmissivity between two cells resulting from the assembly is advantageously equal to the sum of the transmissivities via the boundaries between them according to the flow conservation. Finally, the intersection segments can be determined by an octree technique wherein a cell is divided into eight cells, the cells resulting from a division being themselves split in eight, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold. BRIEF DESCRIPTION OF THE DRAWINGS [0039] Other features and advantages of the method according to the invention will be clear from reading the description hereafter of embodiments given by way of non limitative example, with reference to the accompanying figures wherein: FIG. 1 illustrates the various stages of the method according to the invention; FIG. 2 illustrates a realization of a fracture/fault network on the scale of a reservoir; FIG. 3 illustrates a discrete fracture network (DFN); FIG. 4 illustrates an example of a fracture plane having six vertices and four coplanar intersection segments, including one multiple intersection with other fractures; FIG. 5 illustrates the creation of the Voronoi cell centers for the example chosen (FIG. 4), according to the Koebe theorem; FIG. 6 illustrates the construction, from FIG. 5, of the infinite Voronoi diagram, with determination of the neighboring cells and boundaries; FIG. 7 illustrates the result of the clipping applied to the image of FIG. 6; FIG. 8 illustrates the assembly of the Voronoi cells for creating the "fracture nodes" (here six "fracture nodes" characterized each by a color); FIG. 9 illustrates the assembly of the Voronoi cells in cases where the "fracture node" does not correspond to the Voronoi nodes, with calculation of a correction of all the transmissivities; FIG. 10 shows a three matrix mesh of cells of different sizes; and FIG. 11 shows a matrix mesh applied to a DFN. DETAILED DESCRIPTION OF THE INVENTION [0051] The method according to the invention for optimizing the development of a reservoir, using the method of generating a matrix medium mesh, comprises six stages with the first four stages being carried out by a computer, as illustrated in FIG. 1: 1--Discretization of the reservoir into a set of cells (MR) 2--Modelling of the fracture network by a discrete fracture network (DFN) 3--Generation of a double medium mesh (MAILDM) 3-a Generation of a fracture medium mesh (FRAC) 3-b Generation of a matrix medium mesh (MAT) 4--Simulation of the fluid flows (SIM) 5--Optimization of the reservoir production conditions (OPT) 6--Optimized (global) development of the reservoir (EXPLO) 1--Discretization of the Reservoir into a Set of Cells (MR) The petroleum industry has combined for a long time field (in-situ) measurements with experimental modelling (performed in the laboratory) and/or numerical modelling (using softwares). Petroleum reservoir modelling thus is an essential technical stage with a view to reservoir exploration or development. The goal of such modelling is to provide a description of the reservoir, characterized by the structure/geometry and the petrophysical properties of the geological deposits or formations. This modelling is based on an image of the reservoir generated by discretizing the reservoir into a set of cells (in the description, a cell corresponds to a node). Each cell represents a given volume of the reservoir and makes up an elementary volume of the reservoir. The cells in their entirety make up a discrete image of the reservoir, referred to as reservoir model. Many software tools are known allowing construction of a reservoir model from data (DG) and measurements (MG) relative to the reservoir. In the case of fractured reservoirs, the properties in the fracture medium and in the matrix medium are often very heterogeneous. For these heterogeneities to be properly taken into account, a double medium flow model is frequently used. FIG. 2 illustrates a two-dimensional view of a reservoir model. The fractures are represented by lines. The cells are not shown. 2--Modelling the Fracture Network In order to take into account the role of the fracture network in the simulation of flows within the reservoir, it is necessary to associate with each of these elementary volumes (cells of the reservoir model) a modelling of the fracture medium. Fracture Characterization Statistical reservoir characterization is carried out by direct and indirect reservoir observations (OF) using 1) well cores extracted from the reservoir, on which a statistical study of the intersected fractures is performed, 2) outcrops characteristic of the reservoir, which afford the advantage of providing a large-scale view of the fracture network, and 3) seismic images allowing them to identify major geological events. These measurements allow characterization of the fractures by statistical parameters (PSF) of their respective density, length, azimuthal orientation, inclination and opening, and of course their distribution within the reservoir. At the end of this fracture characterization stage, statistical parameters (PSF) are available describing the fracture networks from which realistic images of the real (geological) networks can be reconstructed (generated) on the scale of each cell of the reservoir model being considered (simulation domain). The goal of characterization and modelling of the reservoir fracture network is to provide a fracture model validated on the local flows around the wells. This fracture model is then extended to the reservoir scale in order to achieve production simulations. Flow properties are therefore associated with each cell of the reservoir model (MR) (permeability tensor, porosity) of the two media (fracture and matrix). These properties can be determined either directly from the statistical parameters (PSF) describing the fracture networks, or from a discrete fracture network (DFN) obtained from the statistical parameters (PSF). Constructing a Discrete Fracture Network (DFN)--FIGS. 2 and 3 Starting from a model of the reservoir studied, a detailed representation (DFN) of the internal complexity of the fracture network, made as accurately as possible in relation to the direct and indirect reservoir observations, is associated therewith in each cell. FIG. 2 illustrates a realization of a fracture/fault network on the scale of a reservoir. Each cell of the reservoir model thus represents a discrete fracture network delimiting a set of porous matrix blocks, of irregular shape and size, delimited by fractures. Such an image is shown in FIG. 3. This discrete fracture network constitutes an image representative of the real fracture network delimiting the matrix blocks. Constructing a discrete fracture network in each cell of a reservoir model can be achieved using known modelling softwares, such as the FRACAFlow® software (IFP Energies nouvelles, France). These softwares use the statistical parameters determined in the fracture characterization stage. Thus, within the discrete fracture network (DFN), each fracture is represented by a polygonal finite plane which is isotropic regarding its properties. that is, any property of the fault (hydraulic, such as the conductivities) is homogeneous in this plane. This plane can comprise at least one intersection segment corresponding to an intersection between the fracture and another fracture of the 3--Generation of a Double Medium Mesh (MAILDM) At this stage, reservoir engineers want to carry out flow simulations to best calibrate the flow properties of the fracture network. There are many techniques available for generating such a mesh and solving flow equations. Considering that the mass conservation equation holds in a fractured porous medium, and since Darcy's equation is assumed to hold in the matrix medium (Ω ) and in the fracture medium (Ω ) with the continuity of the normal flow at the matrix/fracture boundary δΩ (with Ω =Ω), it can be stated: { φ F χ ∂ P F ∂ t - div ( K F _ _ μ ∇ → P F ) = Q F , in Ω F φ M χ ∂ P M ∂ t - div ( K M _ _ μ ∇ → P M ) = Q M , in Ω M K F _ _ μ ∇ → P F n -> F = K M _ _ μ ∇ → P M n -> M , on ∂ Ω FM ##EQU00001## : P ,M(x,y,z,t) are the respective pressures of each medium (fracture and matrix), Q ,M(x,y,z,t) are the source terms; χ = c R + c fl = c R + i = fluide S fl _ i c i ##EQU00002## or me total compressibility of the medium ; c is the compressibility of the rock; c is the compressibility of the fluid; K ,M(X, y, z) are the respective fracture and matrix permeabilities in space; Ω ,M(x, y, z) is the respective porosities of the fracture and matrix media in space; and μ is the viscosity of the fluid (constant). Using the Green-Ostrogradski theorem allows this system to be rewritten in its integral form: { ∫ Ω F φ F χ ∂ P F ∂ t + ∫ ∂ Ω FM K F _ _ μ ∇ → P F n -> F S = ∫ ∂ Ω F K F _ _ μ ∇ → P F n -> ext S ∫ Ω M φ M χ ∂ P M ∂ t + ∫ ∂ Ω FM K M _ _ μ ∇ → P M n -> M S = ∫ ∂ Ω M K M _ _ μ ∇ → P M n -> ext S ∫ ∂ Ω FM K F _ _ μ ∇ → P F n -> M S = ∫ ∂ Ω F M K M _ _ μ ∇ → P M n -> M S ∫ ∂ Ω F K F _ _ μ ∇ → P F n -> ext S = ∫ ∂ Ω F Q F n -> ext S ∫ ∂ Ω M K M _ _ μ ∇ → P M n -> ext S = ∫ ∂ Ω M Q M n -> ext S 3-a Generation of a Fracture Medium Mesh (FRAC) According to a preferred embodiment, a method allowing generating a fine mesh on any discrete 3D fracture network with an optimum number of nodes, close to the number of intersections between the connected fractures, is used. The connectivity of the discrete fracture network (DFN) on the scale being studied (a zone of interest of several thousand square meters) is complied in order to simulate flows. This method of meshing each fracture plane in the 3D space is applicable to all the flow problems in a 2D plane with discontinuities as regards hydraulic properties. This method essentially comprises two stages: a) constructing a Voronoi diagram on each fracture plane by positioning Voronoi cell centers on the intersection segments; b) calculating transmissivities between the centers of neighboring cells from the ratio of the surface area of the neighboring cells to the distance between the neighboring cells; The cells of the fractured mesh are constructed from a segment Voronoi diagram to limit to a single homogeneous medium the space of exchange between two fracture nodes while complying locally with the flow physics in a homogenous medium. The discontinuity due to any fracture intersection is modelled by the mesh construction. Once the nodes are positioned, a simple and physical formulation is used to calculate the transmissivities which are the connection terms between nodes. This method, applied here to the 3D fracture medium mesh, applies to any 3D problem of disordered heterogeneity segments connected by planes of homogenous property. Physically, it is assumed that the intersection between two fractures is a heterogeneity and the damage is assumed to be greater in each plane relative to the respective fractures. Now, in a homogeneous medium, the fluid will follow the path requiring the least energy to travel from one point to the next. Since it is assumed that the pressure gradient along the intersection is lower than in the fracture (higher permeability), to go from one intersection to the next, the fluid will follow the path with a shorter distance between the two intersection segments, which leads to calculation, for each fracture, the 2D segment Voronoi diagram on the intersections. Indeed, this diagram affords the advantage of delimiting the zones of influence of each fracture. 1. The mesh of the fractured medium is generated, on the DFN, in order to have as few nodes as possible. In order to fully meet the connectivity, the fracture nodes are arranged on the intersections of the 3D fractures (one or more nodes per intersection segment). 2. The isopressures, orthogonal to the pressure gradient and thus to the current lines in each homogeneous fracture plane, are assumed to be parallel to the intersections. 3. The cells of the mesh are constructed from a segment Voronoi diagram on each fracture plane. This construction affords the advantage of: limiting to a single homogeneous medium (the plane of a fracture) the space of exchange between two fracture nodes; locally meeting the orthogonality between the current line (between two nodes) and the boundary of the cells; and facilitating the approximate calculation of the pressure gradient between the nodes along the boundary. According to the invention, each fracture is represented by a polygonal finite plane isotropic regarding its dynamic properties. Each plane can comprise at least one intersection segment corresponding to an intersection between the fracture and another fracture of the network (DFN). FIG. 4 illustrates an example of a fracture with six vertices and four coplanar intersection segments, including a multiple intersection. The method for meshing this plane comprises the following stages: 1. For each intersection, creation of a fracture node (a node at the center of each intersection, the information is retained on the coordinates of both ends). In the case of a multiple intersection (three fractures at least intersect at the same point), the initial node at the center of the segment is replaced by two nodes on either side of the point. The goal of this preprocessing is to provide connexity for any fracture cell. 2. Construction of the Voronoi diagram on each fracture plane from the intersection segments by applying Koebe's theorem, that is each segment is at least discretized by two intermediate Voronoi nodes at each end and refined in case of segments that are too close or that intersect. This construction can be broken down into the following stages: i. Creation of all of the Voronoi cell centers with points satisfying the "kissing disks" theorem of Koebe-Andreev-Thurston (P. Koebe, Kontaktprobleme der konformen Abbildung, Ber. Sachs. Akad. Wiss. Leipzig, Math.-Phys. KI., 88:141-164, 1936) on each intersection contained in the plane of the fracture. Each segment is at least discretized by two intermediate Voronoi nodes at each end of the segment, and refined in the case of segments that intersect and segments that are too close (the notion of closeness depends on the distance from which two points are assumed to merge. It is a geometric precision parameter that the user can define according to the size of the domain being studied in relation to the size of the fractures notably). FIG. 5 illustrates the creation of the Voronoi cell centers for the example chosen (FIG. 4), according to Koebe's theorem. It can be determined that, for each triplet of points, the circle whose center is the midpoint of two of the points does not contain the third point. ii. Construction of the Voronoi diagram on these points using S. Fortune's algorithm (S. J. Fortune, A Sweepline Algorith for Voronoi Diagrams, Algorithmica 2 (1987), 153-174), proved to be the fastest on an infinite 2D plane. FIG. 6 illustrates the construction of the infinite Voronoi diagram, with determination of the boundaries and the neighboring cells. iii. Clipping of the infinite Voronoi diagram with respect to the finite polygon defining the fracture. Clipping is confining a visual result to a given domain. At this stage, the Voronoi cells are convex and the line passing through the centers of two neighboring cells is orthogonal to the boundary that separates them. Such a geometry allows easier approximation of the pressure gradient between the nodes along the boundary. FIG. 7 illustrates the result of the clipping procedure applied to the image of FIG. 6. 3. Fine calculation of the transmissivities between the centers of the neighboring cells of the diagram The transmissivities (Tff) between the centers of neighboring cells are calculated from the ratio of the surface area (Sc) of the neighboring cells to the distance (Dc) between the neighboring cells: Tff=KSc/Dc with K the permeability of the fracture where the flow occurs. 4. Voronoi Cells Assembly It is assumed that the damage at the intersection between two fractures is greater than in the respective planes of the fractures. The conductivity discontinuities on the fracture plane are therefore taken into account locally along the intersections. The isopressure lines in each fracture plane then correspond to the equidistance lines between the intersection traces. The segment Voronoi diagram meets this criterion with the cells of this diagram becoming the fracture cells (connected by construction) associated with the node. To limit the number of nodes, the Voronoi cells are assembled belonging to the same intersection segment as long as the cell remains connected (that is, the cells belong to the same segment and there is no multiple intersection). FIG. 8 illustrates the assembly of the Voronoi cells in order to create the "fracture nodes" (here six "fracture nodes" characterized each by a color). Then, the volumes are summed and the transmissivity between two "fracture nodes" is equal to the sum of the transmissivities via the boundaries separating them according to the flow conservation. If a fracture node A contains several Voronoi points P and a node B contains several Voronoi points P , the transmissivity between two fracture nodes A and B is calculated as the sum of the transmissivities between all the Voronoi cell pairs belonging to a node and having a common face (FIG. 8). T AB = i , j T ( P A i , T B j ) ##EQU00004## In the case of assembly, when the "fracture node" does not correspond to the Voronoi nodes, a correction of all the transmissivities is carried out to account for the distance modifications for the fluid exchanges. This modification depends on the permeability assigned to each fracture intersection (which can vary depending on the model, from that of the calculation plane to infinity). One may indeed want to model a different permeability along the intersection of two fractures, without it being infinite, in relation to the one that defines each fracture plane. In order to find the correction linked with the finite conductivity of an intersection segment, a transmissivity is added between center C1 of the node (or, geometrically speaking, of the segment) and a Voronoi point P . Thus, if two Voronoi nodes (or more) belong to the same fracture node (FIG. 9) of center C1, the transmissivity T.sub.S1S2 between a node A and a node B is calculated as follows: T AB corr = ( 1 T C 1 A i + 1 T A i B j + 1 T C 2 B j ) - 1 ##EQU00005## This correction removes the hypothesis of infinite conductivity along the fracture if the selection of the model imposes T In order to be used in a software and executed in a computer, the algorithm inputs can be as follows: Zone of interest is a 3D parallelepiped given by the bounding box (Xmin, Ymin, Zmin, Xmax, Ymax, Zmax) of the simulation domain; List of the fractures (3D convex plane polygons given by lists of ordered vertices) with their isotropic hydraulic properties in each fracture plane (permeability, thickness, compressibility); Connectivity table under boundary conditions (table indicating to which cluster (of maximum size) a fracture belongs. A cluster is a group of fractures for which there is a path connecting each List of the 3D intersection segments, and for each fracture a list of pointers to the corresponding intersections. Each fracture intersection and vertex is expressed, through base change, in a local reference frame of the fracture. Thus, the 3D general problem is brought down to a succession of 2D plane problems. This new fractured medium meshing technique allows generation of a fine mesh on the discrete fracture network with an optimum number of nodes, close to the number of intersections between the connected fractures. The connectivity of the DFN on the scale being studied (a zone of interest of some thousand square meters) is complied with in order to simulate flows; the 3D cells of the fractured mesh are constructed from a segment Voronoi diagram discretized according to Koebe's theorem on each fracture plane. This construction affords the advantage of: [limiting to a single homogeneous medium (the plane of a fracture) the space of exchange between two fracture nodes, locally meeting the orthogonality between the current line (between two nodes) and the boundary of the cells and for readily calculating the pressure gradient between the nodes along the boundary; taking account of the discontinuity due to any fracture intersection; and being broken down into simple and readily parallelizable stages. This new fractured medium meshing technique thus is a semi-analytical method using a Voronoi diagram for reducing the number of nodes required for flow simulation on a discrete fracture network. The method according to the invention is therefore applicable to larger calculation areas than previous methods. The results of this meshing method (nodes, volume and associated transmissivities) can then be used with a double medium approach for simulating well tests, interference tests or an equivalent permeability calculation. This method, applied here to the meshing of a 3D fractured medium, applies to any homogeneous plane problem populated with heterogeneity segments. The discretization of double medium flow equations is recalled (the discrete pressures of each cell N are denoted by P ,N and P Evaluation of the pressure variation for a time Δt is obtained via a time-implicit scheme, which leads to: { ∫ Ω F φ F χ P F j + 1 - P F j Δ t Ω - ∫ ∂ Ω FM K F _ _ μ ∇ → P F j + 1 n -> M S = ∫ ∂ Ω F Q F j + 1 n -> ext S ∫ Ω M φ M χ P M j + 1 - P M j Δ t Ω + ∫ ∂ Ω FM K M _ _ μ ∇ → P M j + 1 n -> M S = ∫ ∂ Ω M Q M j + 1 n -> ext S ##EQU00006## The spatial pressure variation is then evaluated by a finite-volume numerical approach: { n φ F , n χ P F , n j + 1 - P F , n j Δ t Ω F , n + n k T FF , nk ( P F , n j + 1 - P F , k j + 1 ) + n l T F M , nl ( P F , n j + 1 - P M , l j + 1 ) = n Q Fn j + 1 n φ M , n χ P M , n j + 1 - P M , n j Δ t Ω M , n + n k T MM , nk ( P M , n j + 1 - P M , k j + 1 ) + n l T F M , nl ( P M , n j + 1 - P F , l j + 1 ) = n Q Mn j + 1 ##EQU00007## To simulate well tests, there is a compressibility effect, and a flow Q is imposed on the place where measurements are taken for each well. Pressure boundary conditions can be applied on the faces of the parallelepiped being studied. The system is then preprocessed to be reduced to n independent equations. [ - T ij μ ( χ Φ i V i Δ t n + k ≠ i T ki μ ) - T ij μ ] ( P i n ) = ( Q i lim + 0 , nb lim T i , j lim μ P j lim ) + ( χ Φ i V i Δ t n P i n - 1 ) ##EQU00008## : P.sub.j,lim is the limit pressure imposed on the temporary node j_lim, P is the pressure unknown in node i at the time n, χ is the total compressibility (medium+fluid), Δt is the current time interval, V is the volume of node i. The system thus is, by construction, diagonally dominant. To calculate equivalent permeabilities (MAE) for each reservoir cell with these equivalent permeabilities being used subsequently in the flow simulator on reservoir scale, the same boundary conditions as described in French Patent 2,757,947 are used. The following discrete numerical system then has to be solved: [ - T ij k ≠ i T ki - T ij ] ( P i ) = ( 0 , nb lim T i , j lim P j , lim ) ##EQU00009## : P.sub.j,lim being the limit pressure imposed on node j_lim for a given pressure gradient (this term varies for the 3 systems solved); and P is pressure unknown in node i. The system thus is, by construction, diagonally dominant, and independent of compressibility and viscosity. 3-b Generation of a Matrix Medium Mesh (MAT) Double medium flow simulation also requires discretizing the matrix medium and calculating terms of exchange between the matrix medium and the fracture medium. The necessity of constraining the pressure field of the matrix medium to the fracture network leads us to providing a new 3D discretization method for the matrix medium controlled by the geometry of the 3D simulation DFN and its heterogeneities in space for example (therefore the heterogeneities of the fracture medium). Indeed, the pressure field of the matrix medium in a reservoir cell depends, a priori, on the position of the fractures and on the hydraulic conductivity ratio between the matrix medium and the fracture medium. Notably, the fracture medium is assumed to be more conductive than the matrix medium. Therefore, for the same flow rate in a short time interval, the pressure variation in the fracture medium can be disregarded in relation to that in the matrix medium. It is then assumed that the isopressures are parallel to the fracture planes in the matrix medium, that is, for the same distance (possibly corrected by the permeability of the matrix medium) with respect to the fracture medium, the pressure is assumed to be constant in the matrix medium. This hypothesis allows defining the direction of the pressure gradient in the matrix medium and it facilitates calculation of the exchange terms. To discretize the matrix medium, an Octree type algorithm is used. An octree is a tree type data structure in which each node can have up to eight children (octants). Octrees are most often used for partitioning a three-dimensional space by subdividing it recursively into eight octants. Thus, in order to construct the cells of the matrix medium within a cell of the reservoir model (obtained from the reservoir discretization), the cell is divided into eight cells. Then, if necessary, each one of these eight cells is divided into eight cells, and the process continues until the mesh obtained is satisfactory. The octree division technique is carried out as follows: Algorithm Input Zone of interest is a 3D parallelepiped (all or part of the reservoir model) given by the bounding box (Xmin, Ymin, Zmin, Xmax, Ymax, Zmax) of the simulation domain; List of the fracture cells with their geometric (intersection segments between fractures for example) and hydraulic properties; and Properties (porosity, permeability, compressibility) of the matrix medium at any point of the zone being studied. Refinement Conditions Refinement corresponds to the division into eight octants. According to the invention, the refinement condition (condition for a cell to be again split in eight) is the number of fracture nodes (or any other strong hydraulic discontinuity) present in the calculation rectangular parallelepiped (cell resulting from a division). The reservoir cell is thus divided into eight, and each cell resulting from this first division is split in eight again, until each cell resulting from a division comprises a number of fractures below or equal to a given threshold (refinement condition). Thus, the more general problem is split up into simplified problems of a parallelepipedic matrix cell (the cell resulting from the last division) and of a limited number of fractures traversing it. In other words, the matrix is meshed more finely in areas of higher complexity (in the sense of the fracturation). According to the terminology of the Octree type algorithm, what is referred to as the "octree leaf" is the object of the tree that can no longer be subdivided considering the refinement condition of the octree. It is therefore, according to the invention, a cell resulting from the last division (cell having less fractures than a given threshold). The octree leaves are intended to become the cells of the matrix medium. It is therefore important to keep a parallelepiped size of the same order of magnitude among neighbors (˜50%) so as to allow good flow simulations using known softwares as flow simulators. The octree constructed according to the invention is thus a first-order balanced octree. This constraint on the construction of the octree means that the level difference (number of subdivisions required to access the current leaf, the root being of level 0) between two neighboring leaves in the octree cannot exceed one. Thus, each matrix node cannot have more than four neighbors in each one of the six directions. According to a preferred embodiment, a third constraint is added to construct the octree which is a refinement condition linked with a memory limit. Thus, a maximum number of cells of the matrix medium can be set. When the number of leaves reaches this level, the division stops. Once the octree is constructed, each leaf corresponds to a parallelepipedic cell resulting from one or more divisions. The node of the matrix medium is then placed at the center of these cells. Thus, the matrix medium mesh is generated by meeting a constraint linked with the high discontinuities of the fracture medium, using a first-order balanced octree with a refinement constraint on the number of fractures per leaf. Each octree leaf carries the information on the fracture nodes (and therefore their geometry) traversing it. The neighborhood information contained in the octree and the fact that the octree is a first-order balanced octree facilitates calculation of the exchange terms. FIG. 11 shows a matrix mesh applied to a DFN with, from left to right which is a fracture density network variable in space, a matrix zone associated with the DFN for octree-based cutting, and the mesh of the matrix medium (corresponding to the left-hand network) generated according to the method of the invention. Computation of the Matrix/Fracture Exchange Terms According to a first possible configuration, a matrix node exchanges with several fracture nodes. The principle of flow conservation in the considered cell provides the expression: Q MF = i = 1 k Q MF i Eq 2 ##EQU00010## where the flow exchanged between the matrix cell and fracture node Q MF i = T MF i μ ( P Fi - P M ) = 1 μ ∫ ∂ Ω Fi K M _ _ ∇ → P n → Fi S Eq 3 ##EQU00011## hence the transmissivity between matrix node M and fracture node Fi T MF i = ∫ ∂ Ω Fi K M _ _ ∇ → P n → Fi S ( P Fi - P M ) ≈ 2 S MF i K M _ _ n F i n F i l mFi Ω M i Eq 4 ##EQU00012## The volume of influence of fracture node i, Ω , is all the points of the matrix cell whose closest fracture node is fracture node i. It is assumed that no flow circulates between these volumes of influence. They are indeed delimited, by hypothesis, by an isopressure equidistant from the fracture nodes. is the average distance with respect to fracture node i on Ω The volume of influence can be calculated as follows: By hypothesis, the pressure gradient is parallel to the normal to the fractures. During a short time interval, the pressure in the fracture medium varies little over space, its first-order approximation is obtained by its average pressure. To evaluate a matrix local pressure in the cell, assuming that the pressure field in the matrix varies linearly as a function of the minimum distance to the fracture, with Taylor's formula: (•)n.su- b.F(•) Eq 1 The zones of influence, Ω , associated with each fracture present in the matrix cell considered are determined by a stochastic algorithm: drawing random points in the matrix cell (in each one of the 8 octants); for each point, computing the distance to each finite fracture plane; assigning this point to the closest fracture node to evaluate the volumes of influence; and assigning the distance to the octant and computing the average matrix/fracture distance in each octant. The result of this algorithm is as many density (distance) functions as there are fracture nodes and an average matrix/fracture distance in each octant. The average matrix/fracture distance l and the matrix volumes associated with the fracture node are then readily computable for each fracture node. It is possible to also weigh them by a time term to be explicit for modelling the transient flow. According to a second possible configuration, no fracture node belongs to the volume associated with the matrix node. In the case of double medium flows (fractures+matrix), if no fracture traverses the matrix cell, the matrix cell being considered supplies the fracture network only via the matrix/matrix exchange. For the simple medium (fractures only), the volume of influence of the cell is added to the one that exchanges with the closest fracture (via the volume function associated with a fracture node). Computation of the Matrix/Matrix Exchange Terms By using a first-order balanced construction of the octree, in each one of the six directions, it is ensured that a cell can have 1 or 4 neighboring cells per face. The main difficulty for matrix/matrix transmissivity computation is linked with the fact that the lines of the isopressures P are correlated with the geometry of the fractures so that the macroscopic pressure gradient is therefore also is in the cell. The cell has a single neighbouring cell in direction X T ij = ( K i + K j ) Δ Y Δ Z L ##EQU00013## The cell has four neighbouring cells in direction X 1 T ij = L Δ Y 1 ( Δ X i + Δ X j ) Δ Z [ Δ X i K i + Δ X j K j ] Eq 5 ##EQU00014## FIG. 10 illustrates three matrix mesh cells of different size where the thick line represents the boundary between the cells, a cross represents the center of a cell and a circle represents the center of a pseudocell. 5--Calibration of the Flow Properties of the Fractures and of the Matrix Medium Fracture Medium The next stage determines the flow properties of the initial fractures (conductivity and opening of the fractures), then calibrating these properties with well test simulations on discrete local flow models, inherited from the realistic image of the real (geological) fracture network on the reservoir scale. Although it covers only a limited area (drainage area) around the well, such a well test simulation model still comprises numerous computation nodes if the fracture network is dense. The size of the systems to be solved and/or the duration of the computations therefore often remain Calibration of the fracture flow properties (conductivity and opening of the fractures), locally around the wells, requires, if necessary, the simulation of well tests. This type of calibration is well known. The method described in French Patent 2,787,219 can for example be used. The flow responses of some wells (transient or pseudo-permanent flow tests, interferences, flow rate measurement, etc.) are simulated on these models extracted from the geological model giving a discrete (realistic) image of the fractures supplying these wells. The simulation result is then compared with the real measurements performed in the wells. If the results differ, the statistical parameters (PSF) describing the fracture networks are modified, then the flow properties of the initial fractures are redetermined and a new simulation is carried out. The operation is repeated until the simulation results and the measurements agree. The results of these simulations allow calibration of (estimate) the geometry and the flow properties of the fractures, such as the conductivities of the fracture networks of the reservoir being studied and the openings. 6--Simulation of the Fluid Flows (SIM) and Calibration of the Fracture Medium Properties (OPT) 6-a Method without the Scaling Stage At this stage, the reservoir engineer has all the data required for constructing the flow model on the scale of a ZOI or of a reservoir cell. The image models the fracture network by a discrete fracture network (DFN) where each fracture is for example meshed with the Voronoi cells containing transmissivity data (invention). According to the process selected by the reservoir engineer, two types of simulation can be carried out: short-term well test simulation allowing calibration of the properties of the near-well model selected. The model used for flow simulation on the 3D mesh is based on the following hypotheses: the flows are single-phase flows, the fluid and the rock are weakly compressible, the temperature varies little in the reservoir, the fluid viscosity is constant in the reservoir, the flows in porous media follow a Darcy's law in the matrix and the fractures, and gravity is disregarded but pressure P can be replaced by (P+ρgz) for the well test simulations with a suitable postprocessing. If the computer capacities permit, due to the small number of nodes resulting from the provided method, consideration may be carrying out simulations directly on the fracture network using the method according to the invention. 6-b Simulation of the Fluid Flows (SIM) and Optimization of the Reservoir Production Conditions (OPT) At this stage, the reservoir engineer has all the data required for constructing the flow model on the reservoir scale. The hydraulic properties of fractures are calibrated near wells and scaled in form of equivalent permeabilities over all of the reservoir cells. The fractured reservoir simulations often adopt the "double-porosity" approach proposed for example by Warren J. E. et al. in "The Behavior of Naturally Fractured Reservoirs", SPE Journal (September 1963), 245-255, according to which any elementary volume (cell of the reservoir model) of the fractured reservoir is modelled in form of a set of identical parallelepipedic blocks, referred to as equivalent blocks, delimited by an orthogonal system of continuous uniform fractures oriented in the principal directions of flow. The fluid flow on the reservoir scale occurs through the fractures only, and fluid exchanges take place locally between the fractures and the matrix blocks. The reservoir engineer can for example use the methods applied to the entire reservoir this time in French Patent 2,757,947 corresponding to U.S. Pat. No. 6,023,656, French Patent 2,757,957 corresponding to U.S. Pat. No. 6,064,944 and EP The reservoir engineer chooses a production process, such as, for example, the waterflooding recovery process, for which the optimum implementation scenario remains to be specified for the field considered. The definition of an optimum waterflooding scenario is for example in setting the number and the location (position and spacing) of the injector and producer wells in order to best take account of the impact of the fractures on the progression of the fluids within the reservoir. According to the scenario selected, to the double-medium image of the reservoir and to the formula relating the mass and/or energy exchange flow to the matrix-fracture potential difference, it is then possible to simulate the expected hydrocarbon production by the flow simulator (software) referred to as double-medium simulator. At any time t of the simulated production, from input data E(t) (fixed or simulated-time varying data) and from the formula relating exchange flow (f) to potential difference (ΔΦ), the simulator solves all the equations specific to each cell and each one of the two grids of the model (equations involving the matrix-fracture exchange formula described above), and it thus delivers the solution values to the unknowns S(t) (saturations, pressures, concentrations, temperature, etc.) at time t. This solution provides knowledge of the amounts of oil produced and of the state of the reservoir (pressure distribution, saturations, etc.) at the time being considered. 7--Optimized Reservoir Development (EXPLO) The image of the fluid reservoir and a flow simulator are used to optimize the development of the fluid reservoir. Selecting various scenarios characterized, for example, by various respective sites for the injector and producer wells, and simulating the hydrocarbon production for each one according to stage 6 enables selection of the scenario allowing the production of the fractured reservoir being considered to be optimized according to the technical and economic criteria selected. Then development of the reservoir according to this scenario is selected allowing the reservoir production to be optimized. According to an embodiment, a tree type structure is also used during discretization of the fracture medium. In stage a) of the generation of the fracture medium (FRAC), a Voronoi diagram is constructed on each fracture plane by positioning Voronoi cell centers on the intersection segments. The octree allows to acceleration of the construction of the fracture medium mesh by limiting the number of intersection computations. An octree as described in stage 3b is then constructed, but not necessarily balanced at this stage. The refinement conditions are the number of subdivisions (in order to set the maximum number of leaves) and the number of fractures associated with a leaf. An accelerated computation of the intersections between the 3D fractures is then performed using the octree, because the number of fractures is reduced in each subdivision. Along with the matrix/matrix exchange terms computation, the construction of the octree allows spatially splitting the fractures having no interactions. Only the fractures belonging to the same octree leaf will be subjected to a fracture/fracture transmissivity computation according to the method described above. The costly 3D computation of the fracture/fracture intersections is thus limited to the spatially neighboring fractures. The algorithm provided also has the specific feature of allowing and facilitating parallelization of the computations at several levels: computation of the intersections between fractures, computation of the fracture/fracture exchange terms, and computation of the fracture/matrix exchange terms. The present double medium meshing method allows overcoming the restrictive hypothesis of having as many fracture nodes as matrix nodes while preserving the volumes in place and the flow physics by means of an upwind scheme. Using tree type structures (and in particular the octree) allows acceleration of the construction of the fracture medium mesh (limitation of the number of intersection computations), to simplify the matrix/fracture exchange term computations and to properly estimate the matrix volumes associated with the fracture nodes. This method, applied here to the meshing of a 3D double medium, applies to any dual problem involving high heterogeneities. Patent applications by IFP Energies nouvelles Patent applications in class MODELING BY MATHEMATICAL EXPRESSION Patent applications in all subclasses MODELING BY MATHEMATICAL EXPRESSION User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20130096889","timestamp":"2014-04-24T20:19:20Z","content_type":null,"content_length":"109930","record_id":"<urn:uuid:71ca65bf-4e73-4afd-9ea4-89f1dbc7b170>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] Polynomial interpolation Rob Clewley rob.clewley@gmail.... Mon Apr 21 22:20:45 CDT 2008 On Mon, Apr 21, 2008 at 6:07 AM, Joris De Ridder <Joris.DeRidder@ster.kuleuven.be> wrote: > On 21 Apr 2008, at 08:02, Stéfan van der Walt wrote: > > Talking about derivatives, does anyone know whether > > > > http://en.wikipedia.org/wiki/Automatic_differentiation > > > > is of value? It's been on my TODO list for a while, but I haven't > > gotten round to studying it in detail. It is of great value for several reasons (e.g. see Eric Phipps' thesis at www.math.cornell.edu/~gucken/PDF/phipps.pdf). Our group at Cornell a couple of years ago had been waiting to see if there would be a standard package emerging for us to interface into Python.... > I think the Scientific.Functions.Derivatives subpackage of Konrad > Hinsen has such functionality. For one thing, I strongly dislike the way of interacting with the numeric objects and their derivatives in that package. It's not very Pythonic in its use of classes and duck typing. The other problem is, the situations where it would be of most use need so many computations that it's not an effective approach unless it is done at the level of C code. A couple of years ago we tried interfacing to a new release of ADOL-C through SWIG, but found some extremely strange memory errors that we couldn't sort out. I think those only showed up when we tried to pass in data that had been prepared by SWIG from Python arrays. Anyway, getting a package like that properly interfaced would be the way forward as far as we are concerned. OpenAD looks like another good bet for an open-source library. Then again, the problem of efficiency becomes getting the users functions into C code so that "source code transformation" can be performed. I certainly like the idea of having all in-built functions "knowing" their derivatives, but it's not clear how these python-level representations can be best interfaced to C code, whether the basis for the AD is "source code transformation" or "operator overloading". I think there would need to be a new class that allows "user" functions that know their derivatives but which are defined in a piecewise-fashion, e.g. to include solutions to differential equations (for instance) represented as interpolated polynomials. > Also, to come back to the original > thread subject, Scientific.Functions.Interpolation is also able to > generate Interpolators (don't know the algorithm). It's only linear interpolation, but, on the plus side, does support multi-dimensional meshes, which is a generalization I wholeheartedly endorse. Alas, multivariate polynomials or wavelet-type bases would be If we're going to start thinking "big" for supporting more mathematically natural functionality, I believe we ought to be thinking far enough out to support the basic objects of PDE computations too (or at least a compatible class structure and API), even if it's not fully utilized just yet. Scipy should support scientific computation based around mathematically-oriented fundamental objects and functionality (i.e. to hide the "dumbness" of arrays inside some sugar coating). I think writing and debugging complex mathematical tools will be a lot easier if we raise the bar a little and use somewhat more sophisticated basic objects than arrays (our efforts with Pointsets have helped enormously in writing sub-packages of PyDSTool, in particular for putting PyCont together so quickly). Such an approach will also be a lot easier to introduce to new and less technical scientific users. The field of scientific computation is still weighed down by keeping old Fortran-style APIs up to date for re-use (of course, I'm guilty of this). Python brings a fresh opportunity to break these shackles, at least interpreted in terms of adding an extra level of indirection and duck-typed magic at the heart of scipy. Some imagination is needed here (a la "import antigravity" ;). More information about the SciPy-user mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2008-April/016458.html","timestamp":"2014-04-17T06:47:07Z","content_type":null,"content_length":"6771","record_id":"<urn:uuid:dcbcdc9a-c82a-4782-b9b2-01e9d516be43>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
[igraph] thoughts on mixing matrices? [Top][All Lists] [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] [igraph] thoughts on mixing matrices? From: Bernie Hogan Subject: [igraph] thoughts on mixing matrices? Date: Thu, 27 Aug 2009 23:55:35 +0100 Hi various iGraph users, I'm interested in calculating a mixing matrix for a graph, and then using this matrix as a sort of 'reduced graph' for visualization. I think of how to do it programmatically, but I'm wondering if such a facility is available natively. What is a mixing matrix? It is a graph where the rows and columns are defined by some discrete values that are the properties of the nodes. The values (either raw or normalized) refer to the number of links between nodes having that that property. So a mixing matrix on friendships between men and women would be a 2*2 matrix. The first diagonal would be links between men, the second diagonal, links between women. The off-diagonals are the men->women links and women->men links. I noticed people mention assortativity here a little while ago, and from what I understand, categorical assortativity can be calculated by assessing the share of the diagonals versus the off-diagonals. (Continuous assortitivity is more like a correlation value, if I recall). I have incredibly little experience programming in c, so I'd be hesitant to add this myself. Take care, Bernie Hogan Research Fellow, Oxford Internet Institute University of Oxford [Prev in Thread] Current Thread [Next in Thread] • [igraph] thoughts on mixing matrices?, Bernie Hogan <=
{"url":"http://lists.gnu.org/archive/html/igraph-help/2009-08/msg00100.html","timestamp":"2014-04-16T19:25:38Z","content_type":null,"content_length":"5427","record_id":"<urn:uuid:5836ee7a-d5c7-4ddc-9a25-7b843d15b42f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
Shailesh Chandrasekharan Quantum critical behavior in three dimensional lattice Gross-Neveu models Phys. Rev. D (2013) Fermion Bag Approach to Fermion Sign Problems Eur. Phys. J. A (2013) Fermion Bag Solutions to Sign Problems Proceedings of Science (2012) The generalized fermion bag approach Proceedings of Science (2011) Doctor of Philosophy - Columbia B. Tech - Indian Institute of Technology, Madras, India In Prog. Part. Nucl. Phys. Vol. 53, issue 1, edited by . ; : . In Quantum Monte Carlo: Recent Advances and Common Problems in Condensed Matter Physics and Field Theory edited by M. Compostrini, M.P. Lomardo and F. Paderiva. ; : EDIZIONI ETS. In Computer Simulations in Condensed Matter Physics XIII edited by D.P. Landau, S.P.Lewis and H.-B.Shuttler. ; : Springer. In Continuous Advances in QCD edited by Andrei V. Smilga. ; : World Scientific. In Continuous Advances in QCD edited by Smilga, A.V.. ; : World Scientific. 2003 Outstanding Junior Investigator, Department of Energy, Division of Nuclear Physics
{"url":"http://www.phy.duke.edu/content/shailesh-chandrasekharan","timestamp":"2014-04-20T04:08:41Z","content_type":null,"content_length":"28743","record_id":"<urn:uuid:f6b7d1b2-7b99-4007-94f7-da947e0155dc>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Help Please!! Use Laplace transforms to solve the initial value problem x''+4x'+8x=2e^(-t) ; x(0)=0 and x'(0)=4 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5029a295e4b044f79e8f444f","timestamp":"2014-04-18T23:52:48Z","content_type":null,"content_length":"163387","record_id":"<urn:uuid:75ff6019-e8e5-4655-9d3c-938905b7603e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
++ operator Author ++ operator Ranch Hand int y = ++x + x++; Joined: May if x = 2 05, 2003 can somebody explain ? Posts: 97 SCJP 2 1.4 Ranch Hand Originally posted by Robbie kyodo: Joined: May int y = ++x + x++; 07, 2003 if x = 2 Posts: 270 ++x: increase x by 1 and then evaluate x, so this expression will be of vaule 3 and x is also 3 x++: (remember x is 3 already by ++x!)evaluate x and then increase x by 1. so the expression will be of value 3 and x will be 4. acording the above explanation, both expressions are of value 3 and thus y=3+3=6.... you may try: what's x? Meng Yi Ranch Hand Joined: May int x=2; x += ++x + x++; 05, 2003 x = 2 + 3 + 3 Posts: 97 8 ? is it right ? Ranch Hand Hi Robbie Joined: Apr In java intial evaluation order is left to right. So now evaluate ++x + x++ which gives us 3 + 3; but now makes x = 4. when first ++x is encountered it gives 3 then the value of x which 13, 2003 is now three is now put in x, so now x also gets three, now x is incremented and now x will contain 4. So y will be 3+3=6. The evaluation order for = is right to left. Posts: 1088 int x=2; x += ++x + x++; x = 2 + 3 + 3 8 ? is it right ? [ May 26, 2003: Message edited by: Anupam Sinha ] Ranch Hand Joined: May 07, 2003 Posts: 270 Ranch Hand Joined: May 8 is correct. 07, 2003 there is no such an operator =+. To me it is just the same as = . Posts: 270 Joined: Oct Realize that order of operations and precedence isn't tested on the SCPJ exam. So I'd recommend concentrating more on inner classes and threads and fun stuff like that. In real life 17, 2001 you'll use lots of ( )'s to make the precedence in your arithmatic to make things clear. Posts: 4313 I like... - Jess Blog:KnitClimbJava | Twitter: jsant | Ravelry: wingedsheep Joined: Oct 17, 2001 Robbie- Posts: 4313 I goofed. I was editing / deleting a post of mine in this thread, and I think the UBB got a little mixed up and it accidently deleted YOUR post instead of mine. My apologies!! - Jess I like... Ranch Hand I repost my message again Joined: May 05, 2003 I think I mistype Posts: 97 x =+ ++x + x++ instead of += I typed =+ it still compiles. why ? Ranch Hand Originally posted by Robbie kyodo: Joined: May I repost my message again 07, 2003 Posts: 270 I think I mistype x =+ ++x + x++ instead of += I typed =+ it still compiles. why ? I think for =+, it is not a compound operator, but rather a combination of operators. form maths, we know that +5 is positive five and is equal to 5 and +(-5) is still -5. so op1=+op2 is just the same as op1=op2, op1 and op2 are compatible numeric values here. something strange as y=+-5; is by all means valid in java. it is just the same as y= + (-5); and further more the same as y=-5; hope it helps.........(help myself as well [ May 27, 2003: Message edited by: Yi Meng ] subject: ++ operator
{"url":"http://www.coderanch.com/t/241978/java-programmer-SCJP/certification/operator","timestamp":"2014-04-19T20:23:20Z","content_type":null,"content_length":"38863","record_id":"<urn:uuid:fcd81c0e-4272-4fda-be7b-7674ba001ef8>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
III. -LEAP METHODS A. Method 1 (binomial -leap by Tianhai and Burrage) B. Method 2 (modified binomial -leaping by Peng et al.) C. Method 3 (generalized binomial -leaping) D. Notes on methods 1–3 E. -leaps with delayed reactions A. Simple four-reaction model without delays B. The Hes1-dimer model C. The Her1/Her7 model for five coupled cells
{"url":"http://scitation.aip.org/content/aip/journal/jcp/128/20/10.1063/1.2919124","timestamp":"2014-04-19T05:20:40Z","content_type":null,"content_length":"120305","record_id":"<urn:uuid:7ca69600-45bc-4ee6-9462-10835c0884c4>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Irish Logarithms - Calculating History Ideas for speeding up multiplication on mechanical calculators. The only thing most mechanical calculators can really do is adding and subtracting. Multiplication is achieved by repeatedly adding the multiplicand to itself, as many times as the multiplier is large. To speed up the process the multiplicand is shifted one position for each higher order of the multiplier, so that a multiplication by 345 does not need 345 additions, but only 3+4+5 (Figure 1). Figure 1: Mechanical multiplication. The machines of Leibniz (1673) and Thomas de Colmar (1820) already worked on this principle. But this had the inconvenience that the number of additions, and the duration of the calculation, depended on the value of the multiplier. Direct multiplication Figure 2: Verea's multiplication table In 1878, Ramon Verea, a Spaniard from Cuba who had ended up in New York, proposed to put the tables of multiplication in "hardware" in order to carry out immediately the multiplication of a multiplicand with a one-digit multiplier.^[1] He made a decagonal prism with holes whose "shallowness"^[2] represented the simple products.^[3] (Figure 2) A pin probes the depth of a hole and rotates a counting wheel proportional to the depth of the hole. For each digit in a multiplicand the corresponding prism is turned so that the face representing the value of the digit is looking at the pin. The pins are set at a height that corresponds to a digit of the multiplier. Then the prisms are pushed towards the pins, and thereby move the pins over a distance required by the multiplication tables. When multiplying with a multi-digit multiplier, this process must be repeated for each digit. If the shallowness of the holes would represent directly the simple products, the deepest hole should have been 9×9 = 81 times deeper than the shallowest hole. This required a probing accuracy that, at Verea's time, was not achieved without an inconveniently large and slow mechanism. Therefore Verea choose to represent the simple products by two holes: one hole for the units and one hole for the tens. The holes are sensed by two pins that control two successive orders in the adder. For 9×9 one pin would move one step, and add one to the units-counter, and the other pin would move 8 steps and add 8 to the tens counter. Another reason to use two holes is that the normal tens transfer mechanism in the adder can be used: if the units wheel rotates past 9, the tens wheel should be rotated 1 step further. If a single hole was used, a transfer of 8 might occur, and the usual tens-transfer mechanisms can not do thatt. In 1872, before Verea patented his machine, Edmund Barbour had already received a patent on a multiplier,^[4] in which the simple products are represented as short racks on large cylinders. Figure 3: Barbour's multiplication table Figure 4: Bollée's multiplication table In 1896, the Frenchman Leon Bollée developed a multiplier by a similar principle. Instead of prisms with holes, he used sticks on a block to represent the tables (Figure ). Bollee clearly describes in his patent the operation of his machine: after setting the multiplicand and one digit of the multiplier, first all units of the simple products are added to the counting mechanism, then the possible tens-transfers are performed, then the tens of simple products are added to the counting mechanism, which is shifted one position, and the resulting tens-transfers are performed, and finally the counting mechanism slides back. Bollée only needs one probe per resulting digit, whereas Verea needs two. It is not clear how two probes, the units-probe of one order and the tens-probe of a lower order, could simultaneously operate one and the same counting wheel. Multiplication tables similar to Bollée's design have been successfully employed by Otto Steiger and Hans Egli in the "Millionaire" calculator. There were several other direct multipliers on the market, but most of the mechanical calculators continued multiplying by repeated addition.
{"url":"https://sites.google.com/site/calculatinghistory/home/irish-logarithms-1","timestamp":"2014-04-18T18:48:16Z","content_type":null,"content_length":"31465","record_id":"<urn:uuid:69cdc9f1-8059-411e-be96-b965f18508e6>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
AP CALCULUS AB Test Prep Videos 11 Hours Long--135 Solved Problems Ask a question about this product. This AP Calculus AB video is 11 hour long video containing 135 solved problems from 3 different AP Calculus AB exams (PDF file included). This video prepares a student to be successful in the AB exam and get ready for college. The problems are worked out in easy to follow, step by step fashion. All the problems are presented in a manner that allows the student to develop thinking skills that will last a lifetime. 3 options for purchasing-all with free shipping! Option 1 (DIGITAL DOWNLOAD) :Purchase a link to instantly view the video on our website or download it to your computer. Option 2 (CD SHIPPED) : You can purchase a CD which you can play on any computer. Option 3 (DVD-SET SHIPPED) : You can purchase a DVD set which you can play on your DVD player (computer or television). For MAC users: Please purchase only the DVD version.
{"url":"http://mathtutorondvd.com/ap-calculus-ab","timestamp":"2014-04-20T15:50:42Z","content_type":null,"content_length":"23339","record_id":"<urn:uuid:7fcdb52e-a25e-4071-8349-ccd9b048fe19>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithms Sequential and Parallel: A Unified Approach, Second Edition With multi-core processors replacing traditional processors and the movement to multiprocessor workstations and servers, parallel computing has moved from a specialty area to the core of computer science. In order to provide efficient and cost-effective solutions to problems, algorithms must be designed for multiprocessor systems. Algorithms Sequential and Parallel: A Unified Approach 2/E provides a state-of-the-art approach to an algorithms course. The book considers algorithms, paradigms, and the analysis of solutions to critical problems for sequential and parallel models of computation in a unified fashion. This gives practicing engineers and scientists, undergraduates, and beginning graduate students a background in algorithms for sequential and parallel algorithms within one text. Prerequisites include fundamentals of data structures, discrete mathematics, and calculus. FEATURES: • Discusses practical applications of algorithms (e.g., efficient methods to solve critical problems in computational geometry, image processing, graph theory, and scientific computing) • Provides information updated from the previous edition, including discussions of coarse-grained parallel computing • Mathematical tools are developed in the early chapters • Includes exercises at the end of each chapter that vary in difficulty, from confidence-building problems to research-oriented problems
{"url":"http://my.safaribooksonline.com/book/software-engineering-and-development/algorithms/9781584504122","timestamp":"2014-04-21T07:57:08Z","content_type":null,"content_length":"88467","record_id":"<urn:uuid:92adafa7-67de-426d-9f67-061e1dd6db7e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - [Problem]: Magnitude/direction of magnetic force, electric force and the net force. 1. The problem statement, all variables and given/known data A magnet produces a 0.40 T field between its poles, directed horizontally to the east. A dust particle with charge q = -8.0×10^-18 C is moving vertically downwards with a speed of 0.30 cm/s in this field. Whilst it is in the magnetic field, the dust particle is also in an electric field of strength 1.00×10^-2 V/m pointing to the north. (a) What is the magnitude and direction of the magnetic force on the dust particle? (b) What is the magnitude and direction of the electric force on the dust particle? (c) What is the magnitude of the net force on the dust particle? 2. Relevant equations F=(k|q1q2|) / r^2 3. The attempt at a solution a) Magnitude: F=|q|vB = (-8 x 10^-18 C)(-0.003 m/s)(0.4 T) = 9.6 x 10^-21 N Direction: Since the velocity is south and the force must be perpendicular to the velocity, the force must lie in a plane perpendicular to the north/south axis. b) I know it has to do with "1.00 x 10^-2 V/m" but all I could do with it was: Magnitude: B=E/V --> (1.00 x 10^-2 V/m) / (0.003 m/s) = 3.3(repeating) T Direction: North c) Would the net force be the sum of answers "a" and "b" or would it be: Net force = sqrt(a^2 + b^2) a: being the force in a) b: being the force in b)
{"url":"http://www.physicsforums.com/showpost.php?p=2457932&postcount=1","timestamp":"2014-04-19T17:31:04Z","content_type":null,"content_length":"10002","record_id":"<urn:uuid:97838290-d663-4c60-a232-98e143df317e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Patterning and Symmetry From EscherMath Math Topic: Patterning and Symmetry Grade Levels: Grades 4 - 6 By: Ann Rule, Ph.D. and Anneke Bart, Ph.D. Saint Louis University Lesson plan design adapted from: Wigging, G. & McTighe, J. 1998. Understanding by Design. Alexandria, VA: ASCD. Missouri GLEs │Grade│ GLE │ Description │ │ │MA 4 1.6 │describe geometric and numeric patterns. │ │4 │MA 2 3.6 │predict the results of sliding/translating, flipping/reflecting or turning/rotating around the center point of a polygon. │ │ │MA 2 1.10│Create a figure with multiple lines of symmetry and identify the lines of symmetry. │ │ │MA 4 1.6 │make and describe generalizations about geometric and numeric patterns. │ │5 │MA 2 3.6 │predict, draw and describe the results of sliding/translating, flipping/reflecting or turning/rotating around a center point of a polygon.│ │ │MA 2 1.6 │identify polygons and designs with rotational symmetry. │ │ │MA 4 1.6 │compare various forms of representations to identify patterns. │ │6 │MA 2 3.6 │describe the transformation from a given pre-image using the terms reflection/flip, rotation/turn, and translation/slide. │ │ │MA 2 1.6 │create polygons and designs with rotational symmetry. │ Mathematical Context: • Problem Solving • Reasoning • Communication • Making Connections • Designing and Analyzing Representations Identify Desired Results • Students will understand the connection between patterns in art and patterns in mathematics. • Students will understand the visual effects of patterns. • Students will understand the key components of symmetry. • Students will understand the properties of border patterns and their relationships to mathematics. Essential Questions: • Why is it important to explore patterns in the real world? • What are the relationships between patterns in art and patterns in math? • How can learning about patterns in different pieces of artwork develop a heightened sense of patterns in mathematics? Planned Learning Experiences • Through exploration of border patterns in pieces of artwork, students will be able to identify the properties of patterns. • Through exploration of border patterns in pieces of artwork, students will be able to describe symmetries. Assessment Evidence After exploring border patterns, students will complete the attached worksheet on Border Patterns in Greek Art. Students are expected to accurately complete the worksheet with 90% accuracy. Learning Activities • A field trip to the St. Louis Art Museum to explore Greek Art Procedures (what will students do?) • Students will be directed to chosen pieces of artwork in the museum, where they are given background on the artwork e.g., a Greek Amphora, c. 530 BC, painted by Antimenes, etc. • Students will explore the patterns on the vase looking for lines of symmetry. • On the handout (attached) students will be asked to fill in the appropriate information as they explore the amphora. • Students will design their own border patterns based on vertical lines of symmetry • Students will design their own border patterns based on horizontal lines of symmetry. • Ask students key questions about what they’ve learned about border patterns and how they relate to both mathematics and art. Border Patterns in Greek Art Worksheet Objective: Explore properties of border patterns and describe symmetries There are several border patterns on this beautiful vase. Let’s explore some of their properties: 1. There is a fairly broad pattern at the top of the Amphora: A. Do you see mirror lines in this pattern? If so draw them on the image above. B. Does the pattern have rotational symmetry? (in other words: does it look the same upside-down as it does right-side up?) 2. Towards the bottom of the amphora we see several more border patterns. Let’s look at one of them: A. Does this border have any mirror lines? B. Does this pattern have rotational symmetry? (in other words: does it look the same upside-down as it does right-side up?) 3. Draw a border pattern that has a vertical mirror line, but no horizontal mirror line. 4. Draw a border pattern that has a horizontal mirror line, but no vertical mirror line.
{"url":"http://euler.slu.edu/escher/index.php/Patterning_and_Symmetry","timestamp":"2014-04-20T19:27:21Z","content_type":null,"content_length":"21857","record_id":"<urn:uuid:0de73377-731a-4643-a852-e40c27e779f6>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Strict identity relation and possible worlds Carlos Gonçalves cgon1 at iscte.pt Fri Jan 16 16:20:37 EST 2004 Hi all, Consider a set of possible worlds F and a binary relation E, such that (w,w') are in E if, and only if, they are the same element of F, E could thus be thought of as a strict identity relation between possible worlds. If the set F is such that there are no indescernible worlds, then E holds intuitively for this set, otherwise, if at least two worlds in F are indescernible, then we have that strict identity between worlds (as defined by E) and indescernibility between worlds do not coincide. My questions are, as follows: Are there any arguments that: (1) Put into question E when no indescernible worlds are elements of F ? (2) Put into question the argument that strict identity between worlds (as defined above in terms of E) and indescernibility between possible worlds do not coincide? If you could shed some light on possible arguments, would greatly appreciate it. C. Pedro More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-January/007795.html","timestamp":"2014-04-17T00:53:24Z","content_type":null,"content_length":"3228","record_id":"<urn:uuid:1b7474c3-21b3-4705-8107-5dabd58c8b89>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Boole's inequality Boole's inequality ^en In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the individual events. Boole's inequality is named after George Boole. Formally, for a countable set of events A1, A2, A3, ..., we have In measure-theoretic terms, Boole's inequality follows from the fact that a measure is σ-sub-additive. Wikipedia [ - ]
{"url":"http://www.freebase.com/m/01ylb_","timestamp":"2014-04-17T11:45:52Z","content_type":null,"content_length":"159882","record_id":"<urn:uuid:78c75a75-2a23-4fdd-bc69-b90156689e2b>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
Converting math equations to C# A while ago, I worked on a product where part of the effort involved turning math equations into code. At the time, I wasn't the person who was allocated the role, so my guess is the code was written by simply taking the equations from word and translating them by hand into C#. All well and good, but it got me thinking: is there a way to automate this process so that human error can be eliminated from this, admittedly boring, task? Well, turns out it is possible, and that's what this article is about. Equations, eh? I'd guess that barring any special math packages (such as Matlab), most of us developers get math requirements in Word format. For example, you might get something as simple as this: This equation is easy to program. Here, let me do it: y = a*x*x + b*x + c;. However, sometimes, you end up getting really nasty equations, kind of like the following: Got the above from Wikipedia. Anyways, you should be getting the point by now: the above baby is a bit too painful to program. I mean, I'm sure if you have an infinite budget or access to very cheap labour, you could do it, but I guarantee you'd get errors, since getting it right every time (if you've got a hundred) is difficult. So, my thinking was: hey, there ought to be a way of getting the equation data structured somehow, and then you could restructure it for C#. That's where MathML entered the picture. Okay, so you are probably wondering what this MathML beast is. Basically, it's an XML-like mark-up language for math. If all browsers supported it, you'd be seeing the equations above rendered using the browser's characters instead of bitmaps. But regardless, there's one tool that supports it: Word. Microsoft Word 2007, to be precise. There's a little-known trick to get Word to turn equations into MathML. You basically have to locate the equation options... and choose the MathML option: Okay, now copying our first equation onto the clipboard will result in something like the following: <mml:mi mathvariant="italic">bx</mml:mi> You can probably guess what this all means by looking at the original equation. Hey, we just ripped out the structure of an equation! That's pretty cool, except for one problem: converting it to C#! (Otherwise, it's meaningless.) Syntax tree Keeping data the way we get it is no good. There's lots of extra information (like that italic statement near bx), and there's info missing (like the multiplication sign that ought to be between b and x). So, our take on the problem is turn this XML structure into a more OOP, XML-like structure. In fact, that's what the program does – it turns XML elements into corresponding C# classes. In most cases, XML and C# have a 1-to-1 correspondence, so that an <mi/> element turns into an Mi class. So woo-hoo, without too much effort, we turn XML into a syntax tree. Now, the tree is imperfect, but it's there. Let us instead discuss some of the thorny issues that we have to overcome. Single/multi-letter variables Does 'sin' mean s times i times n, or a variable called 'sin', or the Math.Sin function? When I looked at the equations I had, some of them used multiple letters, some were single-letter. There's no 'one size fits all' solution as to how to treat those. Basically, I made this an option. The times (×) sign If you write ab, it might mean a times b. If that's the case, you need to find all the locations where the multiplication has been omitted. On a funny note, there are also different Unicode symbols used by the times sign in different math editing packages (I was testing with MathML as well as Word). The end result is that finding where the multiplication sign is missing is very difficult. Greek to Roman Some people object to having Greek constants in C# code. Hey, I code in UTF-8, so I can include anything, including Japanese characters and those other funny Unicode symbols. It does mess up IntelliSense because your keyboard probably doesn't have Greek keys - unless you live in Greece, that is. Plus, it's a way to very quickly kill maintainability. So, one feature I had to add is turning Greek letters into Roman descriptions, so that Δ would become Delta and so on. Actually, Delta is a special case because we are so used to attaching it to our variables (e.g., writing ΔV). Consequently, I added a special rule for Δ to be kept attached even in cases where all other variables are single-letter. Correctly treating e, π, and exp Basically, the letter pi (π) can be just a variable, or it can mean Math.PI. Same goes for the letter e – it could be Math.E, and in most cases, it is. Another, more painful substitution is exp to Math.Exp. Support for all three of these had to be added. Power inlining Most people know that x*x is faster than Math.Pow(x, 2.0), especially when dealing with integers. Inlining powers of X and above is an option in the program. I have seen articles (can't find the link) where people claim that you lose precision if you avoid doing it the Math.Pow way. I'm not sure though. Operation reduction I've been alerted to the fact that some expressions output are inefficient as far as their constituent operations go. For example, a*x*x+b*x+c is not as efficient as x*(a*x+b)+c because it has more multiplications. Thus, one of the future goals of my solution is to attempt to optimize these scenarios. It will make them less readable though! There were plenty of other problems in converting from XML to C#, but the main idea stayed the same: correctly implement the Visitor pattern over each possible MathML element, removing unnecessary information and supplying that information which is missing. Let's look at some examples. Okay, I bet you can't wait to see an actual example. Let's start with what we had before: Here's the output we get: I omitted the initialization steps for variables that the program also creates. Let's look at the more complex equation. Here it is, in case you have forgotten: Care to guess what the output of our tool is? p = rho*R*T + (B_0*R*T-A_0-((C_0) / (T*T))+((E_0) / (Math.Pow(T, 4))))*rho*rho + (b*R*T-a-((d) / (T)))*Math.Pow(rho, 3) + alpha*(a+((d) / (t)))*Math.Pow(rho, 6) + ((c*Math.Pow(rho, 3)) / (T*T))*(1+gamma*rho*rho)*Math.Exp(-gamma*rho*rho); I originally had the above output using Greek letters (reminder: C# is okay with them). However, due to coding, I've let my tool change them to Romanized versions, thus demonstrating yet another Okay, let's do another example just to be sure – this time with a square root. Here is the equation: I've turned power inlining off for this one - we don't want the expression with the root being evaluated twice. Here is the output: a = 0.42748 * ((Math.Pow((R*T_c), 2)) / (P_c)) * Math.Pow((1 + m * (1 - Math.Sqrt(T_r))), 2); Is this great or what? If you are ever handed a 100-page document full of formulae, well, you can surprise your client by coding them really quickly. I hope you like the tool. Maybe you'll even find it useful. I have recently redesigned the tool from the ground up using F#, and you can find the latest version here.
{"url":"http://www.codeproject.com/Articles/30615/Converting-math-equations-to-C","timestamp":"2014-04-18T16:07:08Z","content_type":null,"content_length":"141641","record_id":"<urn:uuid:a13a383f-4f4f-4905-8cf5-12ea91c9c121>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Depth of water when the water level is falling the least rapidly October 22nd 2007, 12:38 PM #1 Depth of water when the water level is falling the least rapidly Can someone help me with this problem? Thank you. A spherical tank of radius 10ft is being filled with water. When it is completely full, a plug at its bottom is removed. According to Torricelli's law, the water drains in such a way that dV/dt = -k*sqrt(y), where V is the volume of water in the tank and k is a positive empirical constant. a) Find dy/dt as a function of the depth y b) Find the depth of water when the water level is falling at least rapidly (you will need to compute the derivative of dy/dt with respect to y) Regarding a), I have done this: V=1/3*pi*y^2(3a-y) where a is the radius dV/dt=10*pi*2y(dy/dt) - (1/3)*pi*3y^2(dy/dt) -k*sqrt(y) = (dy/dt)(20*pi*y - pi*y^2) And this is what I came up for dy/dt as a function of the depth y But I am not sure how to proceed with point b) Any help will be apprecited. Maybe I am misunderstanding, but it appears you're using the equation for the volume of a cone........the tank is spherical. dV equals the area of the liquid multiplied by dy. Try to equate the area of the liquid in terms of h. By the diagram, since the tank is a sphere, the surface area of the liquid is a circle with radius The area of a circle is ${\pi}r^{2}$, so we have the area of the surface of the liquid at time t as So, we have $(10{\pi}y-{\pi}y^{2})\frac{dy}{dt}=-k\sqrt{y}$ Now, we have a separated DE. Last edited by galactus; November 24th 2008 at 05:38 AM. V=1/3*pi*y^2(3a-y) is the volume of a spherical tank that is partly filled - this way I have the depth of the sphere in the equation. Oh, I see. At first glance it looked like the cone formula. My bad. I just saw the rest of your reply. I see that I am on the right way. I now wonder how to find the depth of water when the water level is falling at least rapidly. They suggest to compute the derivative of dy/dt with respect to y. Does this mean that I have to take the derivative of the derivative and why it will give me the depth of water when the water level is falling at least rapidly. I just did the b). For those interested I took the derivative of dy/dt and found one critical value. Thank you for the help! Last edited by hasanbalkan; October 22nd 2007 at 05:06 PM. October 22nd 2007, 01:55 PM #2 October 22nd 2007, 03:06 PM #3 October 22nd 2007, 03:08 PM #4 October 22nd 2007, 04:19 PM #5
{"url":"http://mathhelpforum.com/calculus/21068-depth-water-when-water-level-falling-least-rapidly.html","timestamp":"2014-04-18T18:25:32Z","content_type":null,"content_length":"45130","record_id":"<urn:uuid:872f134b-7263-4dfc-a7cd-595cc35d7b54>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00130-ip-10-147-4-33.ec2.internal.warc.gz"}
Transition between Two Regimes Describing Internal Fluctuation of DNA in a Nanochannel We measure the thermal fluctuation of the internal segments of a piece of DNA confined in a nanochannel about 50100 nm wide. This local thermodynamic property is key to accurate measurement of distances in genomic analysis. For DNA in 100 nm channels, we observe a critical length scale 10 m for the mean extension of internal segments, below which the de Gennes' theory describes the fluctuations with no fitting parameters, and above which the fluctuation data falls into Odijk's deflection theory regime. By analyzing the probability distributions of the extensions of the internal segments, we infer that folded structures of length 150250 nm, separated by 10 m exist in the confined DNA during the transition between the two regimes. For 50 nm channels we find that the fluctuation is significantly reduced since the Odijk regime appears earlier. This is critical for genomic analysis. We further propose a more detailed theory based on small fluctuations and incorporating the effects of confinement to explicitly calculate the statistical properties of the internal fluctuations. Our theory is applicable to polymers with heterogeneous mechanical properties confined in non-uniform channels. We show that existing theories for the end-to-end extension/fluctuation of polymers can be used to study the internal fluctuations only when the contour length of the polymer is many times larger than its persistence length. Finally, our results suggest that introducing nicks in the DNA will not change its fluctuation behavior when the nick density is below 1 nick per kbp DNA. Citation: Su T, Das SK, Xiao M, Purohit PK (2011) Transition between Two Regimes Describing Internal Fluctuation of DNA in a Nanochannel. PLoS ONE 6(3): e16890. doi:10.1371/journal.pone.0016890 Editor: Laurent Kreplak, Dalhousie University, Canada Received: October 7, 2010; Accepted: January 5, 2011; Published: March 15, 2011 Copyright: © 2011 Su et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: Mr. Tianxiang Su is supported by the start-up funds of Dr. Prashant K. Purohit. Dr. S. K. Das and Dr. M. Xiao are employees of the commercial company BioNanomatrix. Dr. Prashant K. Purohit acknowledges partial support from grant NSF CMMI-0953548 and the Nano/Bio Interface Center at the University of Pennsylvania through grant NSF NSEC DMR08-32802. Funder's role: The experiments were carried out at the premises of BioNanomatrix. Competing interests: The authors have read the journal's policy and have the following conflicts: Authors S. K. Das and M. Xiao are employees of the commercial company, BioNanomatrix. They declare competing financial interests in the form of “Ownership of company stocks” and “Paid employment”. T. Su and P. K. Purohit have no affiliations to BioNanomatrix. This does not alter the authors' adherence to all the Plos ONE policies on sharing data and materials. Stretching DNA in nanochannels has emerged as an important technique for separating DNA, performing genome mapping, and also studying repressor-DNA interactions, etc [1]–[3]. On the other hand, DNA confined in nanochannels also serves as a simplified model for studying single polymer behavior in concentrated polymeric solutions and melts [4], [5]. For these reasons, mechanical behaviors of DNA inside nanochannels have attracted a long-standing interest. The two most well-known scaling theories in this field are those described by de Gennes [5] and by Odijk [6]. de Gennes' blob theory, which was later generalized by Schaefer and Pincus [7], assumes that the channel width is much greater than the persistence length of the polymer. It models the moderately confined DNA as a chain of spherical blobs inside a cylindrical channel and gives the following expression for the end-to-end extension of the polymer [5], [7], [8]:(1) where are the contour length and effective molecule width of the DNA respectively. The prefactor is found to be close to unity [9]. Odijk's theory, on the other hand, works for DNA under strong confinement in which . In this regime, the polymer is deflected back and forth by the channel walls and the end-to-end extension is predicted to be [6]:(2) where is a constant whose value was determined recently by simulations [10]. Aside from the scaling theories, Wang and Gao [11] showed that the end-to-end extension of a strongly confined polymer in the Odijk regime can be derived analytically by modeling the confinement effect as a quadratic potential . Here is the stiffness of the effective quadratic potential, which depends on the channel width , and is the transverse displacement of the polymer from the axis of the nanochannel. Wang and Gao considered a confined chain under end-to-end applied force and obtained an expression for the total extension as a function of and . We set pN, substitute the relation between and (see Supporting Information) into their expression, and find:(3) which is the same as Eq.2, confirming the scaling theory of Odijk, and at the same time validating the use of quadratic confinement potentials in the strongly confined regime. Both de Gennes' and Odijk's theories have been tested by experiments as well as simulations over the years [10], [12]–[16]. However, most of the studies so far have focused on the properties of the entire DNA, for example, the end-to-end extension , the corresponding end-to-end fluctuation , and also the relaxation time of the entire DNA etc. Local properties of a confined polymer, on the other hand, like the extension and fluctuation of its internal segments, are rarely investigated. In fact, local conformation and alignment of the confined DNA have been probed only recently [17], [18]. It is also not well understood whether the existing theories developed for an entire piece of DNA can be applied locally for its internal segments. These are important issues because, if one considers the case of genome mapping, it is the local fluctuation of the internal segments that determines the resolution of the mapping. In this paper, we measure the longitudinal internal fluctuation of a piece of DNA confined in rectangular channels about 50100 nm wide. We show that neither de Gennes' blob theory nor Odijk's deflection theory can completely describe the measured internal fluctuation versus mean extension profile. A critical length scale of 10 m for the mean extension is observed, below which the internal DNA segments are more ‘blob’-like, and above which Odijk's deflection theory works better. From the histograms of extension of the internal segments, we further infer that there exist folded structures of length 150250 nm separated by 10 m along the backbone of the DNA during the transition between the two regimes. To justify the use of existing theories for studying the internal fluctuation, we focus on the Odijk regime and propose a method to explicitly calculate the internal fluctuation of a strongly confined DNA. We model the confinement effects by quadratic potentials and show that one can use the existing theories for end-to-end extension/fluctuation to describe the internal segments of the DNA when the contour length of the polymer is many times larger than its persistence length. Our model, which views the confined DNA as a discrete wormlike chain, can describe the fluctuations of heterogeneous polymers confined in non-uniform channels. It is also capable of capturing effects, like the influence of nicking sites on the DNA fluctuation profiles, which we will discuss at the end of the paper. Results and Discussion To visualize the internal segments, dye-labeled (Alexa-546) nucleotides are introduced into the backbones of the nicked DNA ( kbp, m), DNA ( kbp, m) and bacterial artificial chromosome (BAC) human DNA clones (MCF7 BAC clone 9I10, fragmented, full length kbp, m) (Fig. 1) [19]. The DNA molecules are then driven by electric field into the nanochannels. With the Alexa-546 labels excited by light, extension of each internal segment is recorded frame-by-frame. Average extension and the root mean square (rms) fluctuation for each internal segmenet are calculated and plotted in the profile. Figure 1. Measurement of the fluctuations of the internal segments of confined DNA. (A) Image of a dye label (Alexa-546) on a DNA backbone (backbone not shown) with ms exposure time. (B) 2D surface plot of the raw image (intensity of the dye vs. the X Y coordinates). (C) Image of one T4 DNA fragment (36 microns) with backbone (red) and internal labels (green). (D) Time series (8 seconds) of the DNA showing the fluctuations of backbone and internal labels. In (D), the red trace is the backbone and the green traces are the trajectories of internal dye labels. In Fig. 2, we first show the result for DNA confined in a 80 nm130 nm channel. The maximum , which is roughly the mean extension of the entire DNA, is about m, in agreement with the measurements of Tegenfeldt et al [12]. The internal fluctuation increases with with a power law. This power law and even the magnitude of the fluctuation can be well captured by de Gennes' theory (discussed below) with no fitting parameters. Figure 2. Internal fluctuation of DNA confined in a 80 nm 130 nm channel. (A) The measured rms fluctuation versus mean extension for the internal segments of the DNA agrees very well with de Genne's theory with no fitting parameters (red curve, Eq.4). (B) A linear profile confirms the power law of of the de Gennes' theory. Note, however, that here we have maximum m. As shown in a subsequent figure (Fig. 4) and in the text, for longer polymer with a maximum m, the data deviates significantly from de Gennes' theory and even the 0.5 power law is lost. The longitudinal fluctuation of the confined DNA in de Gennes' theory can be evaluated using the effective stiffness of the polymer: [12], [20]. Using this expression and Eq.1 to eliminate , we get the relation between and :(4) Therefore, de Gennes' theory predicts a power law for the profile. It is interesting to note that the prefactor in Eq.4 depends only on the channel width , but not on the effective molecule width , nor on the persistence length . This implies that the profile is independent of the ionic strength of the experimental buffer. To compare the theory with the measured internal fluctuation, we plot Eq.4 together with the experimental data in Fig. 2. Surprisingly, the data matches with the theory very well without any fitting parameters. Both the power law and the magnitude of the fluctuation are correctly predicted by Eq.4. de Gennes' theory also gives the distribution of the extension , which we can compare to our measurement. We consider the recently proposed “renormalized” Flory-type free energy for a confined polymer [21] and its corresponding prediction of the longitudinal fluctuation:(5) where , are two constants, are the total number of monomers and the number of monomers inside a blob respectively [21]. Both of the relations can be rewritten in terms of (which is the solution of ) with being a constant. The probability distribution is therefore:(7) Here is a constant determined by the normalization condition. In our experiments, we record the extension of each internal segment frame-by-frame and then calculate the distribution for each segment. Fig. 3 shows the measured for two internal segments and their fitting results to Eq.7 (red). The result again implies that, for DNA confined in a 80 nm130 nm channel, the behavior of the internal segments can be well captured by de Gennes' theory. Moreover, by fitting the distribution to Eq.7, we obtain the constant , which, when plugged back into Eq.6-2, yields: (here nm). Therefore, starting from the “renormalized” Flory-type free energy Eq.5, we recover Eq.4 with the same prefactor. This indicates that the prefactor in Eq.4 is quite accurate although it is derived from a scaling theory. It also explains why Eq.4 matches with the measured profile without any fitting parameters (Fig. 2). It is important to note that, for DNA confined in a 80 nm130 nm channel, the maximum is less than 10 m (Fig. 2). We shall show next that for longer DNA whose maximum is greater than 10 m, the measurement no longer agrees with de Gennes' theory. In particular, the 0.5 power law in the profile is lost. Figure 3. Probability distributions for 2 internal segments of DNA inside a 80 nm 130 nm channel. The experimental data is fitted to Eq.7 (red). The fitting value (Eq.7), when plugged back to Eq.6-2, recovers de Gennes's formula Eq.4. Fig. 4A shows the profile for the internal segments of T4 DNA in a 80 nm130 nm channel. The maximum , which is roughly the mean extension of the entire DNA, is about m, in agreement with the simulation result of Jung et al [14]. Fitting of to the experimental data yields , which is very different from the prediction of de Gennes' theory (Eq.4). Similar results are found for DNA in channels of different sizes: for T4 DNA confined in 60 nm100 nm channels (Fig. 4B) and for DNA in 50 nm70 nm channels (Fig. 4C). In all these cases the maximum is greater than m. We note, however, that in Fig. 4, the experimental data for segments with m still matches with de Gennes' theory (except for the 5070 nm channel case, which we will explain later). It is the data with m that deviates significantly from de Gennes' prediction. In fact, if we plot the fluctuation results for short segments with m for and T4 DNA together, the two profiles are almost identical, satisfying de Gennes' theory (see Supporting Information Fig. S1). Figure 4. Fluctuation of the internal segments of (A) T4 DNA in 80 nm 130 nm, (B) T4 DNA in 60 nm 100 nm and (C) DNA in 50 nm 70 nm channels. For all cases, the maximum mean extension m. For (A) and (B), the data m agrees with de Gennes's theory (red, no fitting parameters). Deviation from de Gennes' theory begins at a critical m, above which the data falls into the black curve predicted by the deflection theories of Odijk [6], Wang and Gao [11]. For tighter channels (C), the transition occurs earlier with most data falling in the deflection regime. To rule out the possibility that the observed difference between DNA and T4 DNA stems from sequence variations, we perform the same experiments on the bacterial artificial chromosome (BAC) human DNA clones (MCF7 BAC clone 9I10), which also has maximum m. As shown in Fig. 5, the results for the BAC DNA are almost identical to those for the T4 DNA. In particular, for small m, both match with de Gennes' prediction without any fitting parameters, while for m, both identically deviate from de Gennes' prediction. This suggests that the deviation from de Gennes' theory for long internal segments truly stems from segment size, not from sequence variations. Figure 5. Internal fluctuation versus mean extension for BAC (red squares) and T4 DNA (black circles) in a 80 nm 130 nm channel. This figure shows that DNAs from two different sources give almost identical results, which suggests that agreement with de Gennes theory for short internal segments, and deviation from de Gennes' theory for long internal segments, are both sequence independent. To better understand the deviation from de Gennes' prediction, we further look i nto the local structures of the confined DNA. Odijk showed recently that even in a nm channel, DNA can fold back on itself, giving rise to a global persist ence length much larger than nm, the intrinsic persistence length of the DNA [18], [22]. Because of this, Odjik argued that the transition from Odijk's regime to de Gennes' regime could be delayed with the increase of the channel size [18]. To check whether such local folded structures exist in the DNA in our experiments, we measure the extension distribution for each single internal segment (see “Materials and Methods” for details). We find that for most internal segments whose mean extension is longer than m, the distribution shows two or more peaks (Fig. 6B–C). From this observation, we infer that there indeed exist some folded structures in those internal segments – one peak in the distribution corresponds to the folded configuration, and the second peak corresponds to the extended configuration (Fig. 6). The existence of folded structures can be also inferred from the typical extension versus time plot as shown in Fig. 6D, where the steps in correspond to different states of the internal segments. Furthermore, we find that in the distribution , the measured distances between any two peaks are always integral multiples of 400500 nm, indicating that the difference in extension of a single folded structure and its extended form is about nm, ten times the persistence length of the DNA. This further implies that each branch of the folded structure is about 150250 nm, if we assume each folded structure has two (loop) or three (hairpin) branches (Fig. 6). Also, by checking the location of the internal segments that show multiple-peak distributions, we find that the folded structures are separated by 10 m, which roughly agrees with the value of above which de Gennes' theory fails to match with the experimental data (Fig. 4). In the following we show that for m the fluctuation data is better described by Odijk's deflection theory. Figure 6. (A) Folded structures in the backbone of confined DNA. Each branch of the structure is about nm, about the width of the channel size. The structures are separated by a distance 10 m. (B, C) Distribution of extension for 2 internal segments that contain the folded structures. In disagreement with de Gennes' prediction, the distributions show 2 peaks, from which we infer the existence of the folded structures. However, the structures are not stable as the two peaks in the distributions are comparable in height. The red curves fitted to the left peaks on the histogram are from de Gennes' theory (Eq.7) and the ones superimposed on the right peaks are from the deflection theory (Eq.10). (D) Extension versus time for a single internal segment that shows two peaks in the distribution . The extension of this particular internal segment seems to fluctuate around two values shown by the dashed lines. This gives rise to the two peaks seen in the probability distribution. To exactly (rather than in a scaling sense) evaluate the fluctuation of DNA in the Odijk deflection regime, we extend the theory recently developed by Wang and Gao [11]. This theory represents the DNA as a strongly confined wormlike chain (fluctuating elastic rod) subjected to an additional end-to-end force and produces the relation between the mean extension and , the stiffness of the effective confinement potential (which is a function of the channel width ):(8) where again, is the thermal energy, is the bending modulus of the polymer, and in a rectangular channel the stiffness of the confinement potential can be expressed as , with being a constant. Using Eq.8, we calculate the effective stiffness of the DNA as , and then evaluate the fluctuation as :(9) Leaving as a free parameter, we fit Eq.9 to the experimental data with m in Fig. 4A–C (black curves) and obtain and respectively. For the BAC DNA confined in 80 nm130 nm channels shown in Fig. 5, we obtain from a similar fit. The fact that all the four sets of experimental data for different channel widths yield the same makes sense because is expected to be a universal constant independent of . Moreover, the constant comes from the expression for the free energy of confined chains in the Odijk regime and it has been estimated by Burkhardt to be [23], which is very close to our fitting results. This strongly suggests that in the large mean extension regime m, the DNA segments are better described by the deflection theory. Furthermore, from Fig. 4A to C, we observe that the length of the error bars decreases with the decrease of the channel size. The reason for this may be that for moderately confined DNA, the local folded structures can form and unravel with comparable rates, as indicated by the similar height of the two peaks in the distribution in Fig. 6B–C. Therefore, the behaviors of the confined polymer is a competition between de Gennes' type and Odijk type regimes and the error bar is large. As the channel size becomes smaller, Odijk's theory begins to dominate, resulting in smaller error bars. By integrating the force-extension relation Eq.8, we obtain the free energy expression in the Odijk (or Wang and Gao) deflection regime (see Supporting Information), which further leads to the distribution for the extension :(10) where , and is the normalization factor. We fit this expression to the right peaks in Fig. 6B–C and find that reasonable parameters (m, nm) give excellent matches with the measured probability distributions in experiments. In fact, we can use this free energy expression to understand the transition from a different point of view. We note that the internal segments are expected to stay in the regime with lower free energy, and that regime transition occurs when the free energies in the two regimes are equal. By comparing the free energies in the two regimes, we draw a phase diagram on the plane in Fig. 7. The result shows that as decreases, the transition length decreases. Theoretically, the phase diagram involves an undetermined constant, which we fit such that transition occurs in the range m when nm. Then the result shows that at nm, the transition length is m, which roughly agrees with our experimental result for DNA in a 50 nm70 nm channel (Fig. 4C). The phase diagram shows that transition from de Gennes' to Odijk's regime can occur when decreases with fixed, or when increases with fixed. Figure 7. (A) Phase diagram showing two regimes on the plane, assuming nm for DNA. Transition from de Gennes' to Odijk's regime can occur when decreases with fixed, or when increases with fixed. (B) DNA with local folded structures as an intermediate state between de Gennes's and Odijk's regimes. In experiments, we observe heterogeneity in the intensity profile of YOYO-1 dye along the backbone of a confined DNA, which suggests the existence of the local folded structures (see Supporting Information Fig. S2). We also measure the end-to-end extension for DNA with different lengths (longer than 10 microns) in a 60 nm100 nm channel and the result agrees with Odijk's theory (Fig. S3). In the above analysis, we have applied the theories (de Gennes, Odijk, Wang and Gao) for the end-to-end extension/fluctuation to evaluate the internal, or local extension/fluctuation of a confined DNA. The assumption behind this is that when the internal segments are much longer than the persistence length of the DNA, the behavior of the segments is not very different from that of the entire DNA (with the same length) because the boundary conditions do not play a significant role [24]–[26]. To verify such an assumption, we explicitly calculate the internal fluctuation in Odijk's regime by extending a theory we developed earlier [26], and then compare our results to the theories developed for an entire piece of DNA. Following the procedure in ref.[26], we model the polymer as a confined discrete segment wormlike chain, or fluctuating elastic rod (Fig. 8). The Hamiltonian consists of 3 terms (Eq.11): (1) bending energy, (2) confinement energy, and (3) potential energy of an end-to-end applied force as shown in Fig. 8.(11) In the bending energy term, is the bending modulus of the DNA and it can vary along the arc length so that the polymer is not necessarily homogeneous in mechanical properties. is the tangent vector along the polymer. For the confinement potential term, we follow Wang and Gao's approach [11] and use an effective quadratic energy characterized by the coefficient , with being the transverse displacement. In general, can be a function of the arc length in case the confinement is not uniform. Also, for 3D chains in rectangular channels, can be different in the two transverse directions. For the potential energy term, we consider the chain subjected to an end-to-end force , which can be set to zero if no force is applied. is the end-to-end extension of the chain. Up to a second order approximation, the Hamiltonian can be written in matrix form as shown in Eq.12, with being the discretized tangent angles and being the stiffness matrix of the chain [26]. Figure 8. Discrete wormlike chain model for confined DNA in a nanochannel. The confined wormlike chain, subjected to and end-to-end applied force in general, has bending energy represented by a spring of stiffness at each node. It has been shown that when there are no constraints on twist (as is the case here), thermodynamic properties of a 3D chain can be easily generated from those of two 2D chains [26]. Therefore, for simplicity, here we describe the theory for 2D chains and plot the results for the corresponding 3D chains. To get the internal fluctuation, we first need to calculate (1) the partition function, and (2) the angle fluctuation . These are evaluated in the “Materials and Methods” section. Finally, for any internal segment between node and node of the discrete chain, the mean extension and the corresponding rms fluctuation can be explicitly calculated as:(13) where is the segment length of the discrete chain. In Fig. 9, we consider DNA in 60 nm60 nm channels and plot versus for all the pairs of internal nodes and see if the profiles match with the theories developed for the entire piece of DNA. Fig. 9(A) shows the result for a chain with contour length m, which is much larger than its persistence length nm. The internal fluctuation profile agrees exactly with Eq.9, which is derived for the end-to-end fluctuations. In particular, all the data collapses into a single curve with power law. As the contour length of the polymer decreases, however, (Fig. 9B–D), the internal fluctuation profile begins to scatter around the curve for the end-to-end fluctuation. This implies that, for short chains, the magnitude of internal fluctuation can be different, even if two internal segments have the same mean extension. The magnitude of the fluctuation depends strongly on where the internal segment is located. In fact, we show in Fig. 10 that the internal segments located at the two boundaries have larger fluctuation because they have more freedom to fluctuate compared to the segments inside the chain. The strong boundary effects on short chains (such as, DNA with contour length 0.6–7 m) have been discussed by several groups recently [24]–[26]. Our results suggest that the accuracy of DNA sizing depends on the DNA contour length. For a short DNA with contour length m confined in a 60 nm60 nm channel, the uncertainty of the measurement will be high. For the experimental results we discussed earlier, the DNA, T4 DNA and BAC DNA all have contour lengths of tens of microns, for which boundary effects can be neglected. Therefore, it is safe to use the formulae for end-to-end extension/fluctuation to estimate the internal properties of the confined DNA in our experiments. Figure 9. Fluctuation versus mean extension of internal segments of the strongly confined DNA in nm channels (Eq.13 and Eq.14). The contour lengths of the DNA are (A) m, (B) m, (C) m and (D) nm. For a long DNA (A and B), data from internal segments of various locations of the chain collapse on the a curve with power law (light green). The result agrees with Eq.9 (blue), which is derived for the end-to-end fluctuation of a confined DNA. For short DNA however (C and D), no power law is found as data from various locations of the chain do not collapse onto a single curve (light green). Therefore, formulae derived for the end-to-end fluctuation of the confined DNA, such as Eq.9 (blue), cannot be used for internal fluctuation. The boundary effect is so significant that the rms fluctuation not only depends on , but also on the location of the internal segments. Figure 10. Fluctuation as a function as the position of an internal segment for a short chain. The contour length of the entire chain is short ( nm), so that the fluctuation not only depends on the length of the internal segment, but also on its position. Here we plot the fluctuation versus position for internal segments with the same size: 50 nm (red) and 10 nm (blue). For the internal segments close to the boundaries, the fluctuation is larger because they have more freedom compared to the segments inside the chain. To measure the internal fluctuation, we have introduced nicks into the DNA so that internal sites along the DNA can be labeled. Since the theory discussed above allows for arbitrary bending modulus as a function of the arc length , we can model the effect of nicking by setting on some nodes of the discrete chain and see whether the nicks have significant effects on the behavior of the DNA. For simplicity, we assume here that the nicks are equally spaced along the chain. Fig. 11 shows that the fluctuation profile does not significantly deviate from the homogeneous chain with uniform when there are less than nicks along a m chain (50 kbp DNA in a 60 nm60 nm channel). In our experiments, the fluorescent tagging is introduced at the nicking endonuclease recognition sequence sites, which have much lower density than 1 nick/kbp in , T4 and BAC DNA. Therefore, the nicks will not significantly affect the DNA internal fluctuation. Figure 11. Fluctuation of a m long chain with persistence length nm confined in a 60 nm 60 nm channel. From bottom to top: (1) : no nicks; (2) : 10 nick in m; (3) : 50 nicks in m; (4) : 100 nicks in m; (5)□ : 200 nicks in m. This figure shows that when the density of nicks is lower than nicks per m, or nick per kbp of DNA, the fluctuation profile is almost the same as that for a chain without nicks. To summarize, in this paper, we have investigated the thermal fluctuations of the internal segments of a piece of confined DNA in a nanochannel. The channel size is on the order of the persistence length of the DNA and we have compared the fluctuation data to several theories in literature. We have found that for channel widths on the order of nm there exists a critical length scale 10 m for the mean extension of an internal segment below which the de Gennes' theory describes the internal fluctuations and above which the data agree better with Odijk's deflection theory. For long DNAs confined in nanochannels we have inferred that there are folded structures whose branches are about 3 times the persistence length of DNA which are separated by segments with mean extension 10m. We surmise that these folded structures are indicative of a transition from the Odijk regime, in which the DNA is relatively straight, to the deGennes regime, in which the DNA is more blob-like. We have also presented a more detailed theory based on small fluctuations and incorporating the effects of confinement. We have shown that one can use the existing theories for end-to-end extension/ fluctuations to study the statistical properties of internal segments only when the contour length of the chain is much larger than the persistence length of the molecule so that boundary effects play no role. Our calculations suggest that introducing nicks into the DNA can change its fluctuation behavior if the density of nicks is greater than about nick per kbp DNA. Materials and Methods Sequence specific labeling and DNA staining In a l reaction native, duplex DNA samples ng/l (, T4 DNA and also MCF7 BAC clone 9I10) are incubated with U of Nb.BbvCI ( U/l) (NEB, Ipswich, MA) in 1 NEB buffer 2 (NEB) for hr at 37C and min at 65C. The nicked DNA samples ( ng/l) are then incubated for min at 50C in 1NEB thermopol buffer with DNA polymerase Vent (exo-) (NEB) at U/l in presence of a mixture of nM dAGC and nM Alexa-546 labeled dUTP. Then, the DNA (4 ng/l) samples are stained with intercalating dye YOYO-1 iodide at dye molecule per base pairs of DNA (Invitrogen Inc, Carlsbad, CA) in presence of M DTT (Promega Inc, Madison, WI). Loading DNA into nanochannels Fabrication of silicon based nanochannel chips has been described elsewhere [27], [28]. The DNA sample is diluted by 2 times using the flow buffer consisting of 1TBE, 3.6% Tween, and 10% Polyvinylpyrrolidone (PVP). Ultrapure distilled water is used for making solutions (Invitrogen Corp., Ultrapure water). The DNA molecules are driven by electric field ( V) at the port of entrance of the chip and allowed to populate there for minutes [29]. Under higher voltage (10 V), the populated molecules are moved to the locos and then through the micro pillar structure of the chip to convert from a compact globular conformation to an open relaxed one. At the nm channel area the molecules adopt a more relaxed linear form with some heterogeneity on the backbone. With one end entering the nanochannel under the electric field, the DNA molecules elongate to a linear conformation with almost homogeneous backbone. Most of the structural heterogeneity progressively disappears as it interacted with the nanochannels, adopting fully confined equilibrium conformation after the field is off (relaxation time s). A buffer consisting 0.5TBE, 1.8% Tween 20, 5% PVP has been used to flow the DNA molecules resulting in a stretch of 65%. Microscopy and image processing The epi-fluorescence imaging is done in Olympus microscope (Model IX-71, Olympus America Inc, Melville, NY) using a 100SAPO objective (Olympus SApo 100X/1.4 oil). YOYO-1, the DNA backbone staining dye (491 nm absorption, 509 nm emission) is excited using nm laser (BCD1, Blue DDD Laser Systems, CVI Melles Griot, Rochester, NY) whereas Alexa-546 (550 nm absorption, 570 nm emission) is excited using nm green laser (Voltex Inc, Colorado Springs, CO). The same filter cube consisting triple band dichroic and dual band pass emission filters (Z488/532/633rpc, z488/543 m respectively) (Custom made, Chroma Technology Corp. Rockingham, VT) is used for detection of YOYO-1 and Alexa-546 emission by alternative laser excitation (using external laser shutters, Thorlabs, Newton, NJ). The emission signal is magnified 1.6 and detected by a back-illuminated, thermoelectric cooled charge coupled device (EMCCD) detector (iXon) (Andor, Ireland). About 200 sequential images of the labeled DNAs confined in nanochannels are recorded at ms exposure time in blue-green alternative laser excitation. Recording and calculations The intensity profile of each Alexa-546 label is fitted by a 2D Gaussian function to determine the position of the label in the channel (Fig. 1B). The position of each internal label is followed frame-by-frame at a time interval of about ms. The probability distribution, the mean value and the corresponding standard deviation of the distance between each pair of internal labels are Partition function and angle fluctuation The partition function for a confined DNA, whose Hamiltonian is expressed in Eq.12, is: , where is the number of segments in the discrete chain. The angle fluctuation or correlation is the Boltzmann weighted average of over all the configurations [26], [30]:(15) Using Eq.15, we can explicitly calculate the mean extension and fluctuation of the internal segments (Eq.13–14). Supporting Information versus profile for the m region. Fluctuation of short internal DNA segments from different sources matches with de Gennes' theory with NO fitting parameters. (A) The backbone intensity images of a confined DNA fragment (34 m) stained with YOYO-1 iodide in a 80 nm130 nm channel. The images are recorded at time interval of s. From the heterogeneity of the intensity profile, we infer that there exist some local structures on the backbone. (B) Images of the time series (8 seconds) of a T4 DNA fragment (32 m). The backbone of the DNA is shown in red and the internal dyes are shown in green. The region with high fluorescence density is the area with local folded structures. The green traces are the trajectories of internal dye labels in the time series. This image shows two internal dyes coming together, which is evidence of formation of local folded structures. Mean end-to-end extension versus contour length of confined DNA in a 60 nm100 nm channel. The fitting result is , which is consistent with the prediction of the Odijk deflection theory: . Author Contributions Conceived and designed the experiments: TS SKD MX PKP. Performed the experiments: SKD MX. Analyzed the data: TS SKD MX PKP. Contributed reagents/materials/analysis tools: TS SKD MX. Wrote the paper: TS SKD MX PKP.
{"url":"http://www.plosone.org/article/info:doi/10.1371/journal.pone.0016890?imageURI=info:doi/10.1371/journal.pone.0016890.g003","timestamp":"2014-04-19T15:27:37Z","content_type":null,"content_length":"301207","record_id":"<urn:uuid:19ee7ea8-d913-4213-bd4a-bce3649661d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Integration by parts, quick question June 17th 2013, 08:18 AM #1 Junior Member Mar 2012 Integration by parts, quick question So i understand this question mostly, i dont understand where they got the (1+4) from? and shouldnt there be a negative in front of the last part on the 2nd line. Re: Integration by parts, quick question At the end of the second line what you have is: $\int e^{2t} \cos t dt = -[2e^{2t})(- \cos t)] + \int 4 e^{2t}(- \cos t) dt$ Notice that the last term is common with the left hand side, so move it to the other side to get: $(1+4) \int e^{2t} \cos t dt = -2e^{2t}( - \cos t)$ And no, the plus sign before the last term of the 2nd line is correct. Starting with $- \int 2 e ^{2t} \sin t dt$ and using integration by parts, let $u = -2e ^{2t}$, $dv = \sin t dt$, $\int u dv = uv - \int v du = -2e^{2t}(-\cos t) - \int (-\cos t) (-4) e^{2t} dt$ $= -2e^{2t} (-\cos t) + \int 4 e^{2t} (-\cos t) dt$ Re: Integration by parts, quick question I agree, that is hard to make sense of- they leave a lot out. The problem is to integrate $\int_{-\pi}^{\pi} e^{2t}cos(t)dt$. They note that "it doesn't matter which function is chosen to differentiate" and then choose to differentiate the $e^{2t}$. That is we let $u= e^{2t}$ and $dv= cos(t)dt$. Then $du= 2e^{2t}dt$ and $v= sin(t)$. $uv- \int v du$, then, is $e^{2t}sin(t)\left]_{-\pi}^\pi- 2\int_{-\pi}^\pi e^{2t}sin(t)dt$ Of course, $sin(\pi)= sin(-\pi)=$ so that first term is 0. That's the first "0" in the second line. What's left is $2\int_{-\pi}^\pi e^{2t}sin(t)dt$. Now, do the integration by parts again, letting $u= e^{2t}$ and $dv= sin(t)dt$ so that $du= 2e^{2t}dt$ and $v= -cos(t)$. $2\int_{-\pi}^\pi e^{2t}sin(t)dt= 2\left( -e^{2t}cos(t)\right]_{-\pi}^\pi- 2\int_{-\pi}^\pi e^{2t}cos(t)dt\right)$ $cos(\pi)= cos(-\pi)= -1$ so the first term is $2((-1)e^{2\pi}-(-1)e^{-2\pi})= 2(e^{-\pi}- e^{\pi})$ The integral, at this point is $\int e^{2t}cos(t)dt= 2(e^{-\pi}- e^{\pi})- 4\int_{-\pi}^\pi e^{2t}cos(t) dt$ Now, here's the point. Because the first integral takes us from "cos(t)" to "sin(t)" and the second integral takes back to "cos(t)", while the derivatives of $e^{2t}$ always gives " $e^{2t}$", we have come right back (that integral on the right) to what we started with (the integral on the left). So combine those by adding $4\int_{-\pi}^\pi e^{2t}cos(t)dt$, giving $5\int_{-\pi}^\pi e^{2t} cox(t)dt$ on the left. That is where the "4+ 1= 5" on the left came from. Last edited by HallsofIvy; June 17th 2013 at 10:29 AM. Re: Integration by parts, quick question same______integrate **************(line one) -(diff________same) *******************(line 2) This is a boss way for noobs like me to remember the order. brackets mean with an integral sign at the front. June 17th 2013, 10:19 AM #2 June 17th 2013, 10:27 AM #3 MHF Contributor Apr 2005 June 17th 2013, 03:07 PM #4 Junior Member Mar 2012
{"url":"http://mathhelpforum.com/calculus/219923-integration-parts-quick-question.html","timestamp":"2014-04-16T08:00:16Z","content_type":null,"content_length":"48677","record_id":"<urn:uuid:ac47ab8b-9075-4033-ab4a-4c603e0bc1a5>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
svm {e1071} svm is used to train a support vector machine. It can be used to carry out general regression and classification (of nu and epsilon-type), as well as density-estimation. A formula interface is ## S3 method for class 'formula': svm((formula, data = NULL, ..., subset, na.action = na.omit, scale = TRUE)) ## S3 method for class 'default': svm((x, y = NULL, scale = TRUE, type = NULL, kernel = "radial", degree = 3, gamma = if (is.vector(x)) 1 else 1 / ncol(x), coef0 = 0, cost = 1, nu = 0.5, class.weights = NULL, cachesize = 40, tolerance = 0.001, epsilon = 0.1, shrinking = TRUE, cross = 0, probability = FALSE, fitted = TRUE, seed = 1L, ..., subset, na.action = na.omit)) a symbolic description of the model to be fit. an optional data frame containing the variables in the model. By default the variables are taken from the environment which ‘svm’ is called from. a data matrix, a vector, or a sparse matrix (object of class Matrix provided by the Matrix package, or of class matrix.csr provided by the SparseM package, or of class simple_triplet_matrix provided by the slam package). a response vector with one label for each row/component of x. Can be either a factor (for classification tasks) or a numeric vector (for regression). A logical vector indicating the variables to be scaled. If scale is of length 1, the value is recycled as many times as needed. Per default, data are scaled internally (both x and y variables) to zero mean and unit variance. The center and scale values are returned and used for later predictions. svm can be used as a classification machine, as a regression machine, or for novelty detection. Depending of whether y is a factor or not, the default setting for type is C-classification or eps- regression, respectively, but may be overwritten by setting an explicit value. Valid options are: □ C-classification □ nu-classification □ one-classification (for novelty detection) □ eps-regression □ nu-regression the kernel used in training and predicting. You might consider changing some of the following parameters, depending on the kernel type. (gamma*u'*v + coef0)^degree radial basis: tanh(gamma*u'*v + coef0) parameter needed for kernel of type polynomial (default: 3) parameter needed for all kernels except linear (default: 1/(data dimension)) parameter needed for kernels of type polynomial and sigmoid (default: 0) cost of constraints violation (default: 1)---it is the ‘C’-constant of the regularization term in the Lagrange formulation. parameter needed for nu-classification, nu-regression, and one-classification a named vector of weights for the different classes, used for asymmetric class sizes. Not all factor levels have to be supplied (default weight: 1). All components have to be named. cache memory in MB (default 40) tolerance of termination criterion (default: 0.001) epsilon in the insensitive-loss function (default: 0.1) option whether to use the shrinking-heuristics (default: TRUE) if a integer value k>0 is specified, a k-fold cross validation on the training data is performed to assess the quality of the model: the accuracy rate for classification and the Mean Squared Error for regression logical indicating whether the fitted values should be computed and included in the model or not (default: TRUE) logical indicating whether the model should allow for probability predictions. integer seed for libsvm (used for cross-validation and probability prediction models). additional parameters for the low level fitting function svm.default An index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.) A function to specify the action to be taken if NAs are found. The default action is na.omit, which leads to rejection of cases with missing values on any required variable. An alternative is na.fail, which causes an error if NA cases are found. (NOTE: If given, this argument must be named.) For multiclass-classification with k levels, k>2, libsvm uses the ‘one-against-one’-approach, in which k(k-1)/2 binary classifiers are trained; the appropriate class is found by a voting scheme. libsvm internally uses a sparse data representation, which is also high-level supported by the package SparseM. If the predictor variables include factors, the formula interface must be used to get a correct model matrix. plot.svm allows a simple graphical visualization of classification models. The probability model for classification fits a logistic distribution using maximum likelihood to the decision values of all binary classifiers, and computes the a-posteriori class probabilities for the multi-class problem using quadratic optimization. The probabilistic regression model assumes (zero-mean) laplace-distributed errors for the predictions, and estimates the scale parameter using maximum likelihood. An object of class "svm" containing the fitted model, including: The resulting support vectors (possibly scaled). The index of the resulting support vectors in the data matrix. Note that this index refers to the preprocessed data (after the possible effect of na.omit and subset) The corresponding coefficients times the training labels. The negative intercept. In case of a probabilistic regression model, the scale parameter of the hypothesized (zero-mean) laplace distribution estimated by maximum likelihood. probA, probB numeric vectors of length k(k-1)/2, k number of classes, containing the parameters of the logistic distributions fitted to the decision values of the binary classifiers (1 / (1 + exp(a x + b))). • Chang, Chih-Chung and Lin, Chih-Jen: LIBSVM: a library for Support Vector Machines • Exact formulations of models, algorithms, etc. can be found in the document: Chang, Chih-Chung and Lin, Chih-Jen: LIBSVM: a library for Support Vector Machines • More implementation details and speed benchmarks can be found on: Rong-En Fan and Pai-Hsune Chen and Chih-Jen Lin: Working Set Selection Using the Second Order Information for Training SVM Data are scaled internally, usually yielding better results. Parameters of SVM-models usually must be tuned to yield sensible results! See Also predict.svm plot.svm tune.svm matrix.csr (in package SparseM) ## classification mode # default with factor response: model <- svm(Species ~ ., data = iris) # alternatively the traditional interface: x <- subset(iris, select = -Species) y <- Species model <- svm(x, y) # test with train data pred <- predict(model, x) # (same as:) pred <- fitted(model) # Check accuracy: table(pred, y) # compute decision values and probabilities: pred <- predict(model, x, decision.values = TRUE) attr(pred, "decision.values")[1:4,] # visualize (classes by color, SV by crosses): col = as.integer(iris[,5]), pch = c("o","+")[1:150 %in% model$index + 1]) ## try regression mode on two dimensions # create data x <- seq(0.1, 5, by = 0.05) y <- log(x) + rnorm(x, sd = 0.2) # estimate model and predict input values m <- svm(x, y) new <- predict(m, x) # visualize plot(x, y) points(x, log(x), col = 2) points(x, new, col = 4) ## density-estimation # create 2-dim. normal with rho=0: X <- data.frame(a = rnorm(1000), b = rnorm(1000)) # traditional way: m <- svm(X, gamma = 0.1) # formula interface: m <- svm(~., data = X, gamma = 0.1) # or: m <- svm(~ a + b, gamma = 0.1) # test: newdata <- data.frame(a = c(0, 4), b = c(0, 4)) predict (m, newdata) # visualize: plot(X, col = 1:1000 %in% m$index + 1, xlim = c(-5,5), ylim=c(-5,5)) points(newdata, pch = "+", col = 2, cex = 5) # weights: (example not particularly sensible) i2 <- iris levels(i2$Species)[3] <- "versicolor" wts <- 100 / table(i2$Species) m <- svm(Species ~ ., data = i2, class.weights = wts) Documentation reproduced from package e1071, version 1.6-2. License: GPL-2
{"url":"http://www.inside-r.org/packages/cran/e1071/docs/svm","timestamp":"2014-04-21T02:05:11Z","content_type":null,"content_length":"42318","record_id":"<urn:uuid:ff218edf-9d00-4759-bd0f-e24efc74a81a>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Rita Sue and Bob too In article <cg3ksb$bg5$(E-Mail Removed)>, "M. Clift" <(E-Mail Removed)> wrote: > Hi All, > Can someone help. I promise I've looked how to do this but can't find a > way... > Ok, to find one name is easy > if 'Bob' in list: > print "They were found" > else: > print "They are not in list" > But, how to I find a sequence in a list of unknown size? i.e. this sequence > in list of other names and replace it with three others? > 'Rita','Sue','Bob' > This is almost a nightly occurrence (my posting questions), but I am > learning : ) You've gotten several other answers, but I'd like to propose yet another def replace_sublist(L, S, T): """Replace each sublist of L equal to S with T. Also returns the resulting list.""" assert(len(S) == len(T)) for p in [ x for x in xrange(len(L)) if L[x] == S[0] and L[x : x + len(S)] == S ]: L[p : p + len(S)] = T return L In short, the list comprehension gives all the offsets in L where a copy of S can be found as a sublist. Now, if it happens that some of these positions overlap (e.g., L = [ 1, 1, 1 ] and S = [ 1, 1 ]), then the results will be strange. But as long as your matches do not overlap, this should work well. Michael J. Fromberger | Lecturer, Dept. of Computer Science | Dartmouth College, Hanover, NH, USA
{"url":"http://www.velocityreviews.com/forums/t334643-rita-sue-and-bob-too.html","timestamp":"2014-04-21T02:40:52Z","content_type":null,"content_length":"61788","record_id":"<urn:uuid:0ba4469b-c0d4-421b-917c-59a1aea4d541>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] Construct sparse matrix from sparse blocks Neilen Marais nmarais@sun.ac... Fri Feb 8 08:15:14 CST 2008 On Thu, 07 Feb 2008 10:45:32 -0500, David Warde-Farley wrote: > If they're stored as a vector of row indices, a vector of column indices > and a vector of values (as in the scipy.sparse.coo_matrix ) then > constructing it should be as straightforward as doing a few array > concatenations (or copies). This format can then be efficiently > converted to CSR or CSC with the tocsr() or tocsc() methods, which is > the format you want it in if you're going to be doing any multiplies, > etc. Thanks for the suggestion. I ended up writing the function at the end of message. If anyone else finds it useful I think it may be a good idea to it somewhere in scipy.sparse? def merge_sparse_blocks(block_mats, format='coo', dtype=N.float64): Merge several sparse matrix blocks into a single sparse matrix Input Params block_mats -- sequence of block matrix offsets and block matrices such that block_mats[i] == ((row_offset, col_offset), format -- Desired sparse format of output matrix dtype -- Desired dtype, defaults to N.float64 Global matrix containing the input blocks at the desired block locations. If csr or csc matrices are requested it is ensured that indices are sorted. The 5x5 matrix A containing a 3x3 upper diagonal block, A_aa and 2x2 diagonal block A_bb: A = [A_aa 0 ] [0 A_bb] A = merge_sparse_blocks( ( ((0,0), A_aa), ((3,3), A_bb)) ) nnz = sum(m.nnz for o,m in block_mats) data = N.empty(nnz, dtype=dtype) row = N.empty(nnz, dtype=N.intc) col = N.empty(nnz, dtype=N.intc) nnz_o = 0 for (row_o, col_o), bm in block_mats: bm = bm.tocoo() data[nnz_o:nnz_o+bm.nnz] = bm.data row[nnz_o:nnz_o+bm.nnz] = bm.row+row_o col[nnz_o:nnz_o+bm.nnz] = bm.col+col_o nnz_o += bm.nnz merged_mat = sparse.coo_matrix((data, (row, col)), dtype=dtype) if format != 'coo': merged_mat = getattr(merged_mat, 'to'+format)() if format == 'csc' or format == 'csr': if not merged_mat.has_sorted_indices: merged_mat.sort_indices return merged_mat > David More information about the SciPy-user mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2008-February/015471.html","timestamp":"2014-04-17T21:58:20Z","content_type":null,"content_length":"4943","record_id":"<urn:uuid:ed579194-2c7b-49da-be0e-c45c34ca1454>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Lemon Grove Math Tutor Find a Lemon Grove Math Tutor ...I'm currently finishing my university transfer requirements for Civil/Structural Engineering at San Diego City College this spring semester. I am also currently employed by San Diego Community College District at City College as a Senior Tutor for the Math Center of City College. This semester will be the 4th semester being employed. 5 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I met weekly with a group of eleven students, gave feedback on their work, and led discussions. I've been tutoring in San Diego for about a year now, and I really enjoy establishing connections with students and watching them grow. There's nothing cooler than bringing someone to their "aha!" moment of the day. 15 Subjects: including prealgebra, reading, English, probability ...I am an avid reader of American History and regularly keep up on historical and political events. I am able to provide assistance on any of these subjects. In tutoring, I focus on providing practical examples and background information that help make the subject matter come to life.B.S. in Political Science J.D. 16 Subjects: including logic, English, writing, economics ...At times, I must go with the flow, making adjustments as students' energy and focus changes. With love, respect , patience, and the ability to improvise, we can achieve not only the academic goals but also life skills! One of the biggest reasons that I was able to graduate from UCSD with a GPA of 3.5 at the age of 46 years old is that I developed outstanding study skills. 28 Subjects: including geometry, SAT math, trigonometry, probability ...I recently graduated from UC Berkeley, where I double majored in pure mathematics and physics. I'm currently working on my PhD in physics at UCSD. (In case you're curious, I'm probably going into some field of theoretical physics :)). During my time as an undergrad, I was able to publish a few ... 16 Subjects: including algebra 1, algebra 2, calculus, geometry
{"url":"http://www.purplemath.com/lemon_grove_math_tutors.php","timestamp":"2014-04-19T15:06:25Z","content_type":null,"content_length":"23953","record_id":"<urn:uuid:afacbd85-608b-476e-afe3-fe2db1df3e10>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. 'Russian Cubes' printed from http://nrich.maths.org/ Why do this problem? This problem invites learners to consider a familiar geometrical setting (the cube) and think more deeply about how you can utilise its properties to analyse a problem, and develop and share their ideas and Possible approach Invite the learners to colour nets of cubes (using just two colours) so that they can be made into different cubes. Ask them how they decide whether two cubes are identical and allow time for the group to investigate. This process could be shortened by using linking squares of two colours. Allow groups to share their findings and encourage others to challenge them so that further reflection can take place. • How do they know they have them all? The surprising result that there are just two, helps with the final part of the problem and here you might wish to encourage the sharing of different approaches to providing a convincing argument. Learners may chose to do this practically (several cubes each partially painted to show the possibilities at each stage) or more theoretically by defining sides and edges and talking about the freedoms and constraints arising at each stage. Key questions How do you know you have all the possibilties? Possible extension Consider using three colours instead of two. How many more possibilities does that open up? Possible support Making nets and creating different cubes may be essential for learners to be able to viusalise and remove redundant examples. Focus on the discussions around what you could do to one cube to make it look the same as another before actually doing it.
{"url":"http://nrich.maths.org/296/note?nomenu=1","timestamp":"2014-04-17T07:14:01Z","content_type":null,"content_length":"5133","record_id":"<urn:uuid:bcc58384-d1ee-4b46-8c7b-21d76bf68598>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: October 1998 [00288] [Date Index] [Thread Index] [Author Index] Re: Plot, Cursor and Spelling Errors questions • To: mathgroup at smc.vnet.net • Subject: [mg14497] Re: Plot, Cursor and Spelling Errors questions • From: Paul Abbott <paul at physics.uwa.edu.au> • Date: Fri, 23 Oct 1998 20:59:15 -0400 • Organization: University of Western Australia • References: <70k5b2$eou@smc.vnet.net> • Sender: owner-wri-mathgroup at wolfram.com Ranko Bojanic wrote: > I still do not understand why anh how your module works. > PrecisionPlot[f_,{x_,xmin_,xmax_},opts___?OptionQ]/; > Head[f]=!=List:= > Module[{g,h}, > g=Evaluate[f/.x->#]&; > h=g[SetPrecision[#,17]]&; > Plot[h[x],{x,xmin,xmax}, opts] > ] Here is another approach: instead of modifying Plot, modify your function instead. E.g., the following plot generates garbage (since LegendreP[n, x] are bounded in the range [-1,1]): In[1]:= Plot[Evaluate[LegendreP[50, x]], {x, 0.95, 1}, PlotRange -> All]; Modify the function to use SetPrecision: In[2]:= p[n_, x_, prec_:40] := LegendreP[n, SetPrecision[x, prec]] Now plot the function: In[3]:= Plot[p[50, x], {x, 0.95, 1}, PlotRange -> All]; > The program I posted is > just the first step in the construction of the polynomial of best > approximation to Exp[x] on [-1,1], of degree 14. If you want a > polynomial of degree 30, set n=31 and the precision 50 istead of 17 > since the magnitude of the error curve is 10^(-42). The PrecisionPlot > module works fine in this case as well. Why restrict attention to polynomial approximations? You can, in general, do much better with rational approximations. Also, you might want to look at the NumericalMath`Approximations` and Calculus`Pade` packages. For example, after loading In[4]:= << "NumericalMath`Approximations`" you can compute the best approximation to Exp[x] on [-1,1] of degree 14 using MiniMaxApproximation. Here we compute the _exact_ difference between the MiniMaxApproximation and Exp[x]: In[5]:= Delta[x_] = Exp[x] - MiniMaxApproximation[Exp[x], {x, {-1, 1}, 14, 0}, WorkingPrecision -> 30][[2,1]] Now we overload this function so that a second argument indicates the In[6]:= Delta[(x_)?NumericQ, prec_] := Delta[SetPrecision[x, prec]] Now we can plot the difference: In[7]:= Plot[Delta[x, 40], {x, -1, 1}]; Paul Abbott Phone: +61-8-9380-2734 Department of Physics Fax: +61-8-9380-1014 The University of Western Australia Nedlands WA 6907 mailto:paul at physics.uwa.edu.au AUSTRALIA God IS a weakly left-handed dice player
{"url":"http://forums.wolfram.com/mathgroup/archive/1998/Oct/msg00288.html","timestamp":"2014-04-19T22:23:21Z","content_type":null,"content_length":"36981","record_id":"<urn:uuid:6ee41582-583a-49eb-adfe-edb4671483a4>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Bayesian semiparametric analysis of gene-environment interactionn unde conditional gene-environment independence (Venue: GH seminar RM2) Seminar Room 2, Newton Institute Gatehouse In case-control studies of gene-environment association with disease, when genetic and environmental exposures can be assumed to be independent in the underlying population, one may exploit the independence in order to derive more efficient estimation techniques than the traditional logistic regression analysis (Chatterjee and Carroll, 2005). However, covariates that stratify the population, such as age,ethnicity and alike, could potentially lead to non-independence. Modeling these stratification effects introduce a large number of parameters in the retrospective likelihood. We provide a novel semiparametric Bayesian approach to model stratification effects under the assumption of gene-environment independence in the control population using a Dirichlet Process Mixture. We illustrate the methods by applying them to data from a population-based case-control study on ovarian cancer conducted in Israel. A simulation study is conducted to compare our method with other popular choices. The results reflect that the semiparametric Bayesian model allows incorporation of key scientific evidence in the form of a prior and offers a flexible, robust alternative when standard parametric modelassumptions for the distribution of the genetic and environmental exposures do not hold. Related Links
{"url":"http://www.newton.ac.uk/programmes/BNR/seminars/2007080709002.html","timestamp":"2014-04-24T12:59:01Z","content_type":null,"content_length":"5201","record_id":"<urn:uuid:0c12bf4d-7857-4351-a089-0b8a86477ba9>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
Belvedere, CA Algebra 1 Tutor Find a Belvedere, CA Algebra 1 Tutor ...After earning my Multiple Subject Teaching Credential from CSU Sacramento, my wife and I moved to Washington, DC, where I began working with an education non-profit. The Literacy Lab allowed me to use my skills and content knowledge to improve their programs when working with small groups of stu... 16 Subjects: including algebra 1, English, algebra 2, special needs ...During the school year, I have been employed as a nanny for multiple families offering homework help in addition to the usual care-giving. And in my previous job, I delivered regular "kid-friendly" composting workshops to libraries across San Francisco. My approach: Learning and especially private tutoring should be a fun, meaningful experience. 24 Subjects: including algebra 1, English, reading, writing ...There will be an emphasis on having fun, so puzzles and games, and real-world problems, will be used as instruction tools. No homework will be assigned. Struggling students will improve their 13 Subjects: including algebra 1, calculus, physics, geometry ...But I still wanted to do something to continue making the world a better place. So I turned to something I had done for my friends in High School, my troops in the field, and my neighborhood kids, TUTORING! I have been doing it professionally now for over ten years. 10 Subjects: including algebra 1, calculus, geometry, precalculus ...I obtained a Master's in Education, as well as my Teaching Credential, from Stanford University (through the Stanford Teacher Education Program). I obtained a Bachelor's of Science from UCLA (majoring in Psychobiology, minoring in Education Studies). Over the past 5 years, I have been teaching h... 11 Subjects: including algebra 1, geometry, biology, elementary math
{"url":"http://www.purplemath.com/Belvedere_CA_algebra_1_tutors.php","timestamp":"2014-04-17T04:10:03Z","content_type":null,"content_length":"24124","record_id":"<urn:uuid:ad7a74eb-2bb0-4681-88d1-789f4b994834>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Cranbury Prealgebra Tutor Find a Cranbury Prealgebra Tutor ...I am well versed in how to approach each of these skills. I take the time to read along with my students when they are assigned books for school, so that I can talk to them about their work. I also read along with students on the books they choose for themselves. 43 Subjects: including prealgebra, reading, English, writing ...My goal is to become a Math professor for students in either middle or high school. I can tutor students with Algebra 1 and 2. My experience is helping my friends and other students pass their math classes. 4 Subjects: including prealgebra, Spanish, algebra 1, algebra 2 ...I have learned calculus both in my high school and university. The topics cover: Algebraic expressions, algebraic equations, inequalities, functions, graphing. Exponential, logarithmic, and trigonometric functions, etc. 10 Subjects: including prealgebra, calculus, Chinese, algebra 2 ...I continue to explore and learn about the history of Theatre. I was born and raised in Saint Petersburg, Russia. I continue to speak Russian with my family and friends, with whom it is 14 Subjects: including prealgebra, calculus, physics, algebra 2 ...Some of my achievements include a high SAT score, passing the AP biology, AP calculus, and AP government exams with high scores, as well as over 200 college credits specializing in science-based courses. Tutoring has always been a passion of mine; the ability to enhance another's knowledge on a ... 26 Subjects: including prealgebra, reading, English, writing
{"url":"http://www.purplemath.com/cranbury_nj_prealgebra_tutors.php","timestamp":"2014-04-19T23:11:46Z","content_type":null,"content_length":"23788","record_id":"<urn:uuid:56f8092b-ef02-41ed-872a-264422f1ca6b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Applicable to this and all other questions-- For any solution that involves two answers, type one answer separated by a comma and then the other answer. Solve the equation x2 = 81 by finding square • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/513e3cf4e4b029b0182bf94a","timestamp":"2014-04-16T04:34:19Z","content_type":null,"content_length":"47042","record_id":"<urn:uuid:e8315632-519f-47ed-93d6-78809c8736f5>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
$\frac{\pi ^{2n}}{(2n+1)!} = \displaystyle \sum_{j_1,...j_n=1 \atop j_i eq j_k}^{\infty} (j_1j_2...j_n)^{-2}$ My friend, and fellow math blogger, Owen, tried to tell me I was dealing with the multi-zeta function the other day, but the very general definition on WolframMathworld left me feeling a little mystified. I could see how the identities I was playing with fit in to it, but what was all of that other stuff (like $\sigma_1,...,\sigma_k$? I guess in my equation they are all 1? Why isn't it dealing with powers of the active variable in the product in the denominator? Why the extra level of subscripts?) So, I asked about it on stackexchange to gain more insight, and, thanks to Marni, I was promptly pointed to this lovely paper: Now, I can see easily that the function is: $A(i_1, i_2, ... , i_k) = \sum_{n_1 > n_2 > \cdots > n_k > 1} \frac{1}{n_1^{i_1}n_2^{i_2} \cdots n_k^{i_k}}$ With $i_1, i_2, ... , i_k = 2$. More on this later. Meaning one may repeat values,
{"url":"http://www.futurebird.com/tag/combinations/","timestamp":"2014-04-18T01:04:04Z","content_type":null,"content_length":"23716","record_id":"<urn:uuid:fc21e97e-0df2-4763-ad62-79fcff4d42ed>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: The electric field between the 2 parallel plates of an oscilloscope is 1.2 x 10^5 V/m. If an electron of energy 2 keV enters at right angles to the field, what will be its deflection if the plates are 1.5 cm long? • one year ago • one year ago Best Response You've already chosen the best response. The deflection angle is required? Best Response You've already chosen the best response. deflection angle is not mentioned in the book Best Response You've already chosen the best response. then what is asked? what you mean find deflection?? Best Response You've already chosen the best response. |dw:1361792988295:dw| Y is the deflection that has to be calculated Best Response You've already chosen the best response. the angle should be 37 Best Response You've already chosen the best response. |dw:1361855358005:dw|for angles 180=x+y+z Best Response You've already chosen the best response. oh 45 is the other number it could be Best Response You've already chosen the best response. in this case, since it is constant, uniform acceleration due to the electric field, by Newton's second law, \(\Sigma \vec F= q \vec E = m \vec a\) where q and m of the electron are known constants. so, \(\vec a=\frac{q \vec E}{m}\), which will be different in direction of the electric field, as expected, now, the original speed of the electron can be calculated from: \(\frac{1} {2}mu^2= 2KeV\), so \(u=\sqrt{ \frac{4KeV}{m}}\) by the equations of motion, \(\vec S_x=\vec u_x t + \frac{1}{2} \vec a_x t^2\), but \(\vec u_x=\vec u\) and \(\vec a_x =0\) and \(\vec S_x =1.5cm \) so, \(t=\frac{1.5}{\sqrt{ \frac{4KeV}{m}}}\) now, for \(\vec S_y=\vec u_y t + \frac{1}{2} \vec a_y t^2\), but this time,\(\vec u_y=0\), \(\vec a_y = \vec a = \frac{q \vec E}{m}\) so, \(\vec S_y= \frac{1}{2} \vec a_y t^2\) sub it all in to get the displacement along the y axis. Do you need further help? Best Response You've already chosen the best response. Cartesian coordinate system + newton law+Gauss's Law+Coulomb's Law=electric field Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/512af3d1e4b02acc415d0fd4","timestamp":"2014-04-21T15:17:21Z","content_type":null,"content_length":"103642","record_id":"<urn:uuid:bf90c1e1-cce6-4bc6-b33c-a408de78ce27>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Excerpted from Beachy/Blair, Abstract Algebra, 2nd Ed. © 1996 Forward to §3.1 | Back to §2.2 | Up | Table of Contents | About this document § 2.3 Permutations Definition 2.3.1. Let S be a set. A function permutation of S if The set of all permutations of S will be denoted by Sym(S). The set of all permutations of the set { 1, 2, ..., n } will be denoted by S[n]. Proposition 2.1.6 shows that the composition of two permutations in Sym(S) is again a permutation. It is obvious that the identity function on S is one-to-one and onto. Proposition 2.1.8 shows that any permutation in Sym(S) has an inverse function that is also one-to-one and onto. We can summarize these important properties as follows: (i) If (ii) 1[S] is in Sym(S); (iii) if ^-1 is in Sym(S). Definition 2.3.2. Let S be a set, and let cycle of length k if there exist elements a[1], a[2], ..., a[k] in S such that [1]) = a[2], [2]) = a[3], . . . , [k-1]) = a[k], [k]) = a[1], and [i] (for i = 1, 2, ..., k). In this case we write [1],a[2],...,a[k]). We can also write [2],a[3],...,a[k],a[1]) or [3],...,a[k],a[1],a[2]), etc. The notation for a cycle of length k can thus be written in k different ways, depending on the starting point. The notation (1) is used for the identity permutation. Definition 2.3.3. Let [1],a[2],...,a[k]) and [1],b[2],...,b[m]) be cycles in Sym(S), for a set S. Then disjoint if a[i] [j] for all i,j. Proposition 2.3.4. Let S be any set. If Theorem 2.3.5. Every permutation in S[n] can be written as a product of disjoint cycles. The cycles that appear in the product are unique. Definition 2.3.6. Let [n]. The least positive integer m such that ^m = (1) is called the order of Proposition 2.3.7. Let [n] have order m. Then for all integers j,k we have ^j = ^k if and only if j Proposition 2.3.8. Let [n] be written as a product of disjoint cycles. Then the order of Definition 2.3.9. A cycle (a[1],a[2]) of length two is called a transposition. Proposition 2.3.10. Any permutation in S[n], where n Theorem 2.3.11. If a permutation is written as a product of transpositions in two ways, then the number of transpositions is either even in both cases or odd in both cases. Definition 2.3.12. A permutation even if it can be written as a product of an even number of transpositions, and odd if it can be written as a product of an odd number of transpositions. § 2.3 Permutations: Solved problems If you are reading another book along with Abstract Algebra, you need to be aware that some authors multiply permutations by reading from left to right, instead of the way we have defined multiplication. Our point of view is that permutations are functions, and we write functions on the left, just as in calculus, so we have to do the computations from right to left. You need to do enough calculations so that you will feel comfortable in working with permutations. Certain sets of permutations provide the last major example that we need before we begin studying groups in Chapter 3. You will need the next definition to work some of the problems. Definition. If G is a nonempty subset of Sym (S), we say that G is a group of permutations if the following conditions hold: (i) If (ii) 1[S] is in G; (iii) if ^-1 is in G. We will see later that this agrees with Definition 3.6.1 in the text. 13. For the permutation ^ -1. Solution 14. For the permutations ^-1, ^-1, ^-1, ^-1. 15. Let [9] . Write ^-1. Solution 16. Compute the order of ^-1. Solution 17. Prove that if [n] is a permutation with order m, then ^-1 has order m, for any permutation [n]. Solution 18. Show that S[10] has elements of order 10, 12, and 14, but not 11 or 13. Solution 19. Let S be a set, and let X be a subset of S. Let G = { Prove that G is a group of permutations. Solution 20. Let G be a group of permutations, with G ^-1 = { ^-1 for some µ in G } is a group of permutations. Solution Solutions to the problems | Forward to §3.1 | Back to §2.2 | Up | Table of Contents
{"url":"http://math.niu.edu/~beachy/abstract_algebra/study_guide/23.html","timestamp":"2014-04-18T10:34:54Z","content_type":null,"content_length":"12842","record_id":"<urn:uuid:b95846a9-8886-4174-a233-d2124f548c4b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help February 10th 2008, 02:25 AM #1 Mar 2007 I'm stuck on the following question: Three small smooth identical spheres each of mass 1kg move on a straight line on a smooth horizontal floor. Initially B lies between A and C. Spheres A and B are projected directly towards each other with speeds 3m/s and 2m/s respectively and sphere C is projected directly away from sphere A with speed 2m/s. The coefficient of restitution between any two spheres is e. Show that B will only collide with C if e>3/5. I've come up with 4 equations by considering the momentum and coefficient of resitution but I have five unknowns, the speed of A and B after 1s collision, speed of B and C after second collision and e. Have I missed something from the question? Help would be greatly appreciated. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/advanced-applied-math/27876-collisions.html","timestamp":"2014-04-20T20:28:02Z","content_type":null,"content_length":"29181","record_id":"<urn:uuid:3a68c1fa-4515-4180-acd8-fb1b04f40f84>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Lecture 1 - It's Full of Stars! (1/12/99) Next Lecture - How Far the Stars Chapter 11 (ZG4) pages 1 - 3 Open cluster M45, the Pleiades, in the constellation Taurus. (Courtesy SEDS) Key Question: How do we find out about the distant Universe? Key Principle: The Electromagnetic Spectrum Key Problem: What does light tell us about where it came from? 1. Stars, Galaxies and the Universe □ How many orders of magnitude is it from the human scale of 1 meter to the edge of our Solar System? To the outermost observable scale of the Universe? □ How do we get our information about the distant universe when we cannot send space probes to directly sample the conditions as we can in our own solar system? □ What assumptions are critical to our ability to relate results of experiments on the Earth to what we see in the Universe? □ What are examples of the ubiquitous nature of clustering on all scales in our Universe? □ What are some of the wide range of conditions (density, temperature, pressure, gravity, energy) that we will find out there? 2. Electromagnetic Waves □ What is an electromagnetic wave? □ What is the relation between the speed, wavelength, and frequency of electromagnetic waves in free space? □ What is the frequency of a 1-meter radio wave? □ What is the frequency of a 1-cm microwave? □ What is a typical wavelength for visible light? □ What is the difference between radio waves and visible light? □ What is the electromagnetic spectrum? □ What is a quantum of light? □ What is Planck's constant (h)? □ What is the energy of a photon of light with a given wavelength? □ What is infrared radiation and why do we associate it with heat? □ Why are ultraviolet rays and X-rays more dangerous to our bodies than visible light and radio waves? □ What is a gamma ray and how are they generally generated? 3. Parallax □ How do your eyes use stereo vision to give depth perception? □ How does triginometric parallax work? □ For what distance does a 1 AU baseline show a parallax of 1 arc second? □ What is stellar parallax? □ Which component of the Earth's velocity is relevant for parallax? □ What is a parsec? □ How do we use parallax to determine distances? □ What is the angle subtended by the Earth's orbit at a distance of 1 parsec? Goals of this course: ○ Understand how we came to our current paradigm for the Universe ○ Construct our own model for the Universe ○ Build intuition about astrophysical equations ○ See how new technology and observational facilities push astronomy forward ○ Discover our place in the larger picture of the Universe ○ Learn how physics, engineering, mathematics, chemistry, geology and biology all interact in astronomy If you missed the first lecture: No worries. I mostly talked about the stuff in the course guide, and some of the topics listed above. Oh, and I showed some cool laserdisk pictures too. You can find photocopies of my lecture notes on reserve in the Physics, Math & Astronomy Library, 3rd Floor, North Wing, DRL. The notes summarized below give extra explanations for some of the key concepts from this lecture: Electromagnetic Waves: The electromagnetic wave, as described by Maxwell's equations, consists of mutually perpendicular harmonic electric and magnetic fields oscillating in phase and propagating in the orthogonal Almost everything we know about the Universe has been brought to us by the light that we observe with our instruments and eyes. Light, radio waves, radar, X-rays and ultraviolet rays are all examples of the same phenomenon of electromagnetic radiation. Light can be thought of as a electromagnetic wave with crests and troughs as in ocean waves (but without the ocean!). The things that are rising and falling in an electromagnetic wave are the electric and magnetic "fields", which rise and fall in step as a function of space and time. If we were to freeze time, and move along the direction of a wave, we would measure electric and magnetic forces that increased then decreased then changed direction and back again, like a sine wave. The distance between crests of the wave is called the wavelength and is measured in meters (or centimeters or nanometers, whatever is convenient). If one were to let time move normally, but measure the electric and magnetic forces as a wave passes by, then we would see the crests and troughs moving past us at the speed of light. Thus, we would see the waves pass us at the frequency of a certain number of cycles per second, say f, given by: f = c / l where l is meant to be the wavelength. We are familiar in using frequency to describe radio waves. When we say our favorite radio station has a frequency of 100 Megacycles (or MegaHertz), we mean that f = 100 x 10^6, that is 10^8, cycles per second. (Remember, mega means 10^6). Note that since the speed of light c = 3 x 10^8 m/s, that a frequency of 100 megacycles corresponds to a wavelength l = c/f = 3 meters. The electric and magnetic forces in an electromagnetic wave are at right angles (perpendicular) to each other always. The electric and magnetic forces from an electromagnetic wave, such as visible light or radio waves, can be measured. A radio wave, with a wavelength of 1 meter for example, can cause electrons in a wire of about that same length to move back and forth due to the oscillating electric force in the wave. These moving electrons are measureable as an electric current in the wire - this is how a radio or TV antenna works to receive signals! The electrons in the metal of the antenna (which is basically just a long wire) are free to move a little in response to the radio wave electric field. The current alternates like a sine wave. The reverse is also true! If you oscillate the current like a sine wave in a long wire, an electromagnetic wave of the same frequency is generated. This is how radio and TV transmitters work. Light as Photons: Light can also be thought of as made up of particles, known as photons. Light is both a wave and a particle at the same time. The reconciliation of this dual nature has been one of the greatest breakthroughs in physics at the turn of the century. Each photon has a specific energy given proportional to the frequency of the wave that it corresponds to. The energy of a photon of frequency f E = h f where the constant h is known as Planck's constant. Planck's constant h = 6.63 × 10^-34 Joules sec. There are many different energy photons, with energies ranging from zero to infinity, each corresponding to waves with wavelengths from infintely long to infintesimally small (zero). This is what is called the electromagnetic spectrum Radio waves, with wavelengths around 1 meter, are fundamentally the same as visible light, with wavelength around 500 nanometers (nm), the same as X-rays, with wavelengths around 1 nm. Because the photon energy is proportional to frequency, the high frequency, short wavelength radiation like UV and X-rays and Gamma Rays can actually do major damage to our bodies if we are exposed to it. It takes many microwaves (wavelength about 1 cm) to cook food in a microwave oven, because the photons each have very low energy. Nuclear explosions release large amounts of deadly Gamma Rays, each of which can do lots of molecular damage when they pass into organic material. Different materials absorb specific frequencies of light. This is due to resonances with atomic and molecular structures as we will see later on. The air in the Earth's atmosphere absorbs the waves except special "windows" or wavelength bands where it is transparent. Both visible light and radio waves fall in these windows, so we can see out from the Earth into the heavens. Stellar Parallax: The small angle formula tells us that the angle (in radians) subtended by an object of diameter D at distance r is: This can be turned around in that if we observe an object shift direction by this angle (in radians) when we move a distance D perpendicular to its direction, then it must be at a distance r. There are 206265 arcseconds (") in a radian, so the angular size of an object of diameter D at distance r is If we use the baseline of the Earth's orbit (2 AU for 6 months apart) and see a shift of theta = 2 * p arcseconds, then r = 206265 AU / p It is customary to define p as the parallax. Thus a parallax p of 1" means that the distance r is 206265 AU. If we define the unit of distance called the parsec to be 1 pc = 206265 AU, then r = 1 pc / p A star 1 parsec away has a parallax of 1 arcsecond. Talk about convenience! Note that 1 pc is about 3.26 light-years. The nearest star is around 1 pc away. Next Lecture --- Astr12 Index --- Astr12 Home smyers@nrao.edu Steven T. Myers
{"url":"http://www.aoc.nrao.edu/~smyers/courses/astro12/L1.html","timestamp":"2014-04-17T09:43:26Z","content_type":null,"content_length":"11706","record_id":"<urn:uuid:9959dc30-0ec3-497c-8ad3-687f71e1e472>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
The Axiom of Choice 1. Zermelo does not, however, actually give the principle an explicit name at this point. ( He does so only in 1908, where he uses the term “postulate of choice”. We note that Zermelo here uses the term “product” , which is the reason why a number of mathematicians (in particular Russell and Ramsey) referred to the axiom as the “multiplicative axiom” (see §4 below). 2. A function f: A → B is surjective if for any element b ∈ B there is a ∈ A such that f(a) = b. A right inverse to f is a function g: B → A such that, for any x ∈ B , f(g(x) = x. 3. Zermelo's formulation reads literally: A set S that can be decomposed into a set of disjoint parts A, B, C, … , each containing at least one element, possess at least one subset S[1] having exactly one element with each of the parts A , B, C, … , considered. 4. The difficulty here is amusingly illustrated by an anecdote due to Bertrand Russell. A millionaire possesses an infinite number of pairs of shoes, and an infinite number of pairs of socks. One day, in a fit of eccentricity, the millionaire summons his valet and asks him to select one shoe from each pair. When the valet, accustomed to receiving precise instructions, asks for details as to how to perform the selection, the millionaire suggests that the left shoe be chosen from each pair. Next day the millionaire proposes to the valet that he select one sock from each pair. When asked as to how this operation is to be carried out, the millionaire is at a loss for a reply, since, unlike shoes, there is no intrinsic way of distinguishing one sock of a pair from the other. In other words, the selection of the socks must be truly arbitrary. 5. In 1923 Hilbert asserted: The essential idea on which the axiom of choice is based constitutes a general logical principle which, even for the first elements of mathematical inference, is indispensable. (Quoted in section 4.8 of Moore 1982.) 6. For a detailed history of the Axiom of Choice, see Moore 1982. 7. Here (x)[x ∈ X] indicates that all the members of X are taken as designated elements of the structure, i.e. as being named by constant symbols in the associated first-order language. 8. For a detailed account of the proof of the independence of the Axiom of Choice, see Bell 2005 or Jech 1973. 9. For the proof of ZL from AC in ZF, see Mendelson 1987 (Ch. 4, sect. 5). For a proof not using ordinals, and so formulable in Zermelo set theory, see Bourbaki 1950 or Lawvere and Rosebrugh 2003 (Appendix B). 10. Thus a nest in S is a chain in S partially ordered by set inclusion. 11. That minimal samplings are transversals requires demonstration. Suppose S is a minimal sampling; then, given X ∈ I, either (1) S ∩ X is finite nonempty or (2) X ⊆ S. In case (1) S ∩ X cannot contain two distinct elements because the removal of one of them from S would yield a sampling smaller than S, violating its minimality. So in this case S ∩ X must be a singleton. In case (2) S cannot contain two distinct elements a, b since, if it did, S′ = [(S – X) ∪ {a}] would be a sampling smaller than S (notice that S′ ∩ X = {a} and the relations of S′ with the members of I − {X} are the same as those of S), again violating the minimality of S. So in this case X, and a fortiori S ∩ X, must be a singleton. 12. Had we elected to follow more closely the intuitive combinatorial derivation of AC as sketched above by using selectors instead of samplings we would have encountered the obstacle that—unlike the set of samplings—the set of selectors is not necessarily reductive. 13. For equivalents of the Axiom of Choice, see Rubin and Rubin 1985; for consequences, see Howard and Rubin 1998. 14. “Weaker” here means that, within the context of the axioms of set theory (minus, of course, AC), the proposition in question is entailed by AC, but not vice-versa. 16. The idea of the proof goes back to Goodman and Myhill 1978, which was itself built on a result of Diaconescu 1975. See below. 17. See Bishop and Bridges 1985. 18. See Supplementary Document. For an analysis of various versions of AC within constructive type theory, see Martin-Löf 2006. 19. Also, of course, use was made of 0 ≠ 1, but this assumption is so weak that it may be consistently added to virtually any axiomatic system. 20. But in weak set theories lacking the axiom of extensionality the derivation of Excluded Middle from AC does not go through: some form of extensionality, or the existence of quotient sets for equivalence relations, needs to be assumed. See Bell 2008. 21. With Zorn's Lemma the situation is otherwise. For despite its equivalence to AC in classical set theory, in intuitionistic set theory Zorn's Lemma does not imply LEM: in fact, it has no “non-constructive” logical consequences whatsoever (Grayson 1975, Bell 1997). Thus Zorn's Lemma is considerably weaker than AC in intuitionistic set theory. Intuitionistic equivalents of Zorn's Lemma are scarce; a few have been formulated in Bell 2003. 22. That is, the set of all R-equivalence classes of members of A. Notes to Supplementary Document: The Axiom of Choice and Type Theory 1. Dependent types were actually first studied in the late 1960s by de Bruijn and his colleagues at the University of Eindhoven in connection with the AUTOMATH project. CDTT has been employed as a basis for various computational devices employed for the verification of mathematical theories and of software and hardware systems in computer science. 2. Analogous constructive versions of set theory, incorporating choice principles, have been developed by Peter Aczel. See Aczel 1978, 1982, 2005 and Aczel & Rathjen 2001. 3. This idea was advanced by Curry and Feys 1958 and later by Howard 1980. As the Curry-Howard correspondence it has come to play an important role in theoretical computer science. 4. For a complete specification of the operations and rules of DCTT, see Chapter 10 of Jacobs 1999 or Gambino & Aczel 2005. 5. See §4. 6. It is interesting to note that Ramsey 1926 also asserted the tautological character of AC but for quite different reasons. Like Zermelo, Ramsey construed—and accepted the truth of — AC as asserting the objective existence of choice functions, given extensionally and so independently of the manner in which they might be described. But the intensional nature of constructive mathematics, and, in particular, of CDTT, decrees that nothing is given completely independently of its description. This leads to a strong construal of the quantifiers which, as we have observed, essentially trivializes AC by rendering the antecedent of the implication constituting it essentially equivalent to the consequent. It is remarkable that AC can be considered tautological both from an extensional and from an intensional point of view. 7. While in classical set theory each singleton in this sense is either empty or a one-element set, in intuitionistic set theory there are many additional “intermediate” sets qualifying as
{"url":"http://plato.stanford.edu/entries/axiom-choice/notes.html","timestamp":"2014-04-16T07:35:51Z","content_type":null,"content_length":"22315","record_id":"<urn:uuid:cd28ce92-74d1-42bb-b449-79d90e593bd4>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Taylor's theorem for complex functions There is an analogue for complex function s of the well-known Taylor theorem for real functions . It roughly states that any complex differentiable is locally equal to a power series . Taylor's theorem is nice because power series are (in particular the convergence of the power series is As usual the complex result is much nicer than the corresponding real one. Contrasting them we note that: 1) For real functions we only get an approximation (with error bounds) while for complex functions we actually have that the power series is equal to the original function. 2) We only require a complex function to be once complex differentiable, while a real function has to be several times differentiable to apply the theorem. Indeed the complex Taylor theorem allows us to deduce that any analytic function is in fact infinitely complex differentiable, while on the other hand even if a real function is infinitely (real) differentiable this does not guarantee the existence of a power series that equals the function (the traditional counterexample being f(x) = exp(-x^ -2). The proof is essentially an application of the Cauchy integral formula, together with some technical fiddling to ensure that everything converges properly. Unfortunately the HTML makes for very unpleasant reading. Taylor's theorem for complex functions: Let f : D → C be an analytic function on a domain D containing 0. Then if R > 0 is such that z ∈ D for all |z| ≤ R then for any |z| < R (sum from k = 0 to ∞) f(z) = Σ z^ka[k] where (integrals round the circle of radius R centred at 0) a[k] = (2πi)^-1∫ f(w)/w^k dw Note the identity (this and subsequent sums from k = 0 to n) (w-z)^-1 = w^-1(Σ (z/w)^k) + (z/w)^n+1/(z-w) which holds for all n ∈ N when |z| < |w|. Hence by the Cauchy integral formula 2π*|f(z) - Σ z^ka[k]| = |∫ (w-z)^-1f(w) dw - Σ (z/w)^kf(w) dw| = |∫ (z/w)^n+1(z-w)^-1f(w) dw| ≤ (|z|/R)^n+1|∫ (z-w)^-1f(w) dw| → 0 as n → ∞ for any |z| < R. So the series converges to f(z). Any analytic function f : D → C is infinitely complex differentiable on D. By Taylor's theorem f can be written locally as a power series, and any power series is infinitely differentiable.
{"url":"http://everything2.com/title/Taylor%2527s+theorem+for+complex+functions?showwidget=showCs1463922","timestamp":"2014-04-17T10:22:38Z","content_type":null,"content_length":"22771","record_id":"<urn:uuid:7895fbc2-a4b3-4de9-8dcb-283c61c5dd34>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
On the shortest route through a network - IEEE/ACM Transactions on Networking , 2004 "... Abstract—The underlying concepts of an exact QoS routing algorithm are explained. We show that these four concepts, namely 1) nonlinear definition of the path length; 2) a-shortest path approach; 3) nondominance; and 4) look-ahead, are fundamental building blocks of a multiconstrained routing algori ..." Cited by 29 (1 self) Add to MetaCart Abstract—The underlying concepts of an exact QoS routing algorithm are explained. We show that these four concepts, namely 1) nonlinear definition of the path length; 2) a-shortest path approach; 3) nondominance; and 4) look-ahead, are fundamental building blocks of a multiconstrained routing algorithm. The main reasons to consider exact multiconstrained routing algorithms are as follows. First, the NP-complete behavior seems only to occur in specially constructed graphs, which are unlikely to occur in realistic communication networks. Second, there exist exact algorithms that are equally complex as heuristics in algorithmic structure and in running time on topologies that do not induce NP-complete behavior. Third, by simply restricting the number of paths explored during the path computation, the computational complexity can be decreased at the expense of possibly loosing exactness. The presented four concepts are incorporated in SAMCRA, a self-adaptive multiple constraints routing algorithm. Index Terms—Look-ahead, path dominance, QoS routing, shortest path. - J. ACM , 1980 "... ASSTRACT. In the all-pair shortest distance problem, one computes the matrix D = (du), where dq is the minimum weighted length of any path from vertex i to vertexj in a directed complete graph with a weight on each edge. In all the known algorithms, a shortest path p, ~ achieving di./is also implici ..." Cited by 22 (1 self) Add to MetaCart ASSTRACT. In the all-pair shortest distance problem, one computes the matrix D = (du), where dq is the minimum weighted length of any path from vertex i to vertexj in a directed complete graph with a weight on each edge. In all the known algorithms, a shortest path p, ~ achieving di./is also implicitly computed. In fact, logs(f (n)) is an information-theoretic lower bound, wheref(n) is the total number of distinct patterns (Po) for n-vertex graphs. As f(n) potentially can be as large as 2&quot;:', it would appear possible that a nontrivial lower bound can be derived this way in the decision tree model. The characterization and enumeration of realizable patterns is studied, and it is shown thatf(n) < C &quot;~. Thus no lower bound greater than Cn 2 can be derived from this approach. It is proved as a corollary that the Triangular polyhedron T ~&quot;~, defined in E ¢~' ~ by d,j> 0 and the triangle inequalities d~j + dik> d,k, has at most C&quot; ' faces of all dimensions, thus resolving an open question in a similar information bound approach to the shortest distance problem. - IN PROCEEDINGS OF THE 24TH INTERNATIONAL SYMPOSIUM ON THEORETICAL ASPECTS OF COMPUTER SCIENCE (STACS’07 , 2007 "... During the last years, several speed-up techniques for Dijkstra’s algorithm have been published that maintain the correctness of the algorithm but reduce its running time for typical instances. They are usually based on a preprocessing that annotates the graph with additional information which can ..." Cited by 13 (7 self) Add to MetaCart During the last years, several speed-up techniques for Dijkstra’s algorithm have been published that maintain the correctness of the algorithm but reduce its running time for typical instances. They are usually based on a preprocessing that annotates the graph with additional information which can be used to prune or guide the search. Timetable information in public transport is a traditional application domain for such techniques. In this paper, we provide a condensed overview of new developments and extensions of classic results. Furthermore, we discuss how combinations of speed-up techniques can be realized to take advantage from different strategies. - J. ALGORITHMS , 1998 "... Given an n-vertex, m-edge directed network G with real costs on the edges and a designated source vertex s, we give a new algorithm to compute shortest paths from s. Our algorithm is a simple deterministic one with O(n² log n) expected running time over a large class of input distributions. This is ..." Cited by 10 (1 self) Add to MetaCart Given an n-vertex, m-edge directed network G with real costs on the edges and a designated source vertex s, we give a new algorithm to compute shortest paths from s. Our algorithm is a simple deterministic one with O(n² log n) expected running time over a large class of input distributions. This is the first strongly polynomial algorithm in over 35 years to improve upon some aspect of the O(nm) running time of the Bellman-Ford algorithm. The result extends to an O(n² log n) expected running time algorithm for finding the minimum mean cycle, an improvement over Karp's O(nm) worst-case time bound when the underlying graph is dense. Both of our time bounds are shown to be achieved with high probability. , 1998 "... Network flow problems form a core area of Combinatorial Optimization. Their significance arises both from their very large number of applications and their theoretical importance. This thesis focuses on efficient exact algorithms for network flow problems in P and on approximation algorithms for NP ..." Cited by 5 (3 self) Add to MetaCart Network flow problems form a core area of Combinatorial Optimization. Their significance arises both from their very large number of applications and their theoretical importance. This thesis focuses on efficient exact algorithms for network flow problems in P and on approximation algorithms for NP -hard variants such as disjoint paths and unsplittable flow. Given an n-vertex , 1967 "... AND ITS APPLICATION TO BIOMEDICAL LITERATURE F. Lunin . . . . . . 47 I ..." , 1994 "... This report describes the methodologies and procedures developed through a contract to the University of Texas at Austin, in collaboration with the University of Maryland, to address these essential needs. Specifically, a simulation-assignment methodology has been developed to describe user's path ..." Add to MetaCart This report describes the methodologies and procedures developed through a contract to the University of Texas at Austin, in collaboration with the University of Maryland, to address these essential needs. Specifically, a simulation-assignment methodology has been developed to describe user's path choices in the network in response to real-time information, and the resulting flow patterns that propagate through the network, yielding information about overall quality of service and effectiveness, as well as localized information pointing to problem spots and opportunities for improvement. This methodology is intended for use off-line for evaluation purposes, or on-line for prediction purpose in support of advanced traffic management functions. In additional, algorithmic procedures have been developed to determine the best paths to which users should be directed so as to optimize overall system performance. Powerful extension
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=850760","timestamp":"2014-04-20T21:08:06Z","content_type":null,"content_length":"28584","record_id":"<urn:uuid:7c0bd20a-4031-4313-a9b0-93019276804c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
When is the category of pro-objects a homotopy category? up vote 12 down vote favorite For a category $C$, there is a category Pro-$C$ whose objects are cofiltered diagrams $I \to C$ and whose morphisms are given by $$ {\rm Hom}(\{x_s\},\{y_t\}) = \varprojlim_t\ \varinjlim_s\ {\rm Hom} (x_s,y_t). $$ Generally, this category is fairly hard to work with. This is especially true because several types of maps of pro-objects are defined in terms of the existence of a representing map of diagrams with certain properties, and it can be very difficult to rectify several distinct properties at once. One way to describe the category of pro-objects is by inverting morphisms. Specifically, we can form a more restricted category of pairs $(I,F)$ of a cofiltered index category $I$ and a diagram $F: I \to C$, with morphisms defined as pairs of a functor and a natural transformation of diagrams. Certain maps of cofiltered diagrams become isomorphisms of pro-objects (the most important ones being reindexing along a final subcategory). Inverting them gives us the pro-category. In some sense, this automatically provides us with a "category with weak equivalences", but it's intrinsically very large and it's not necessarily clear if the "homotopy theory" is tractable. Are there any circumstances under which the category of diagrams in $C$ automatically has the structure of a model category, with weak equivalences being pro-isomorphisms? In these cases, does the category Pro-$C$ have an interesting homotopy theory or are the mapping spaces essentially discrete? Obviously being complete and cocomplete is going to be an obstacle to this kind of structure. Failing that, is there any further possibility of gaining control on the homotopy theory? Having said all this, I've been a little bit vague about what I mean by a "map of diagrams" because I'd be open to the idea of having slightly restricted classes of maps in the definition. at.algebraic-topology ct.category-theory model-categories 2 Another way to think about pro-objects in a small category $C$ is as left exact functors $C\to Set$. I usually find this description more convenient, but I don't see any obvious homotopical structure associated to it. – Marc Hoyois Mar 3 '12 at 3:30 2 Two examples and a question: The category of pro-finite sets is equivalent to the category of compact spaces that are Hausdorff and totally disconnnectd. The bigger category of pro-sets is equivalent to the category of complete uniform spaces that are Hausdorff and totatlly disonnected. When can Pro $C$ be described as a category of "topological" objects in $C$? – Jeff Smith Mar 3 '12 at 21:06 @Jeff It depends what you mean by 'topological'? There are several categories that occur in pro-finite theory that have a topological side, e.g. pseudocompact modules over a pseudocompact ring, but they are not that far from the profinite set case. – Tim Porter Mar 4 '12 at 16:40 @Tyler: A paper by Barnea--Schlank: arxiv.org/pdf/1109.5477v6.pdf might be relevant. You probably can in usual circumstances define a weak fibration category structure on your category with isomorphisms as weak equivalences; their machinery gives you then a model structure on the pro category. – Lennart Meier Dec 9 '13 at 20:13 add comment 1 Answer active oldest votes There are some answers to this way back. There is a lovely answer to several of your questions in D. A. Edwards and H. M. Hastings, 1976, ˇCech and Steenrod homotopy theories with applications to geometric topology , volume 542 of Lecture Notes in Maths , Springer-Verlag. The category of prosimplicial sets has a model category structure that corresponds to a geometrically defined notion of strong shape theory (i.e. a homotopy coherent version of Borsuk's shape theory). Edwards and Hastings extended a result of Chapman and showed this model category theory also to be a form of proper homotopy theory. (There is also some discussion of this in my article: T. Porter, 1995, Proper homotopy theory , in Handbook of Algebraic Topology , 127–167, North-Holland, Amsterdam. ) The story does not end there. Because of the connection with étale homotopy theory (Artin and Mazur), there was a revival of interest in pro-categories in the last few years and there is a good discussion in up vote H. Fausk and D. Isaksen, Model structures on pro-categories , Homology, Homotopy and Applications, 9, (2007), 367 – 398. 5 down vote I suggest that you also look at others of Dan Isaksen's papers on this area as they answer more of the quetions that you have asked. On another point that you mention, the rectification process for properties is reasonably well understood due to what is known as the reindexing lemma (the simplest case is in Artin and Mazur's lecture notes but there are much fuller versions some of which are discussed in another of Isaksen's papers D. C. Isaksen, Completions of pro-spaces , Math. Z., 250, (2005), 113 – 143. ) If you read these papers carefully you will come to the conclusion that certain problems are still not fully understood especially when pro-finite simplicial sets are concerned, and the applications of those beasties are again very important so that is a good area to explore!!! (See also work by Quick (Profinite homotopy theory , Documenta Mathematica, 13, (2008), 585–612.) and Pridham (Pro-algebraic homotopy types , Proc. Lond. Math. Soc. (3), 97, (2008), 273 – 338. ) They show some of the more recent stuff on this with some good applications. There are copies on the ArXiv.) If you only care about having an $\infty$-category rather than a model structure, things are much easier: pro-objects in an $\infty$-category $C$ are simply left exact (accessible) functors $C\to \infty Grpd$, which naturally form an $\infty$-category. The étale homotopy type is also easily defined in this language (see Higher Topos Theory, section 7.1.6). But this doesn't help for putting a nontrivial homotopy structure on pro-objects if the category $C$ did not have one in the first place. – Marc Hoyois Mar 4 '12 at 22:10 Tim, thanks for the references. I haven't looked into Edwards and Hastings, but part of my motivation for asking the question was working with some of the model structures that Fausk and Isaksen work with. My understanding is that most of these references still work with honest pro-objects, rather than having a "weak equivalence" structure on diagrams, correct? – Tyler Lawson Mar 5 '12 at 5:56 @ Tyler I don't know if this helps but I tried out a version of doing pro-homotopy in three papers on coherent pro-homotopy. (The references are on my nLab personal home page: ncatlab.org/ timporter/show/HomePage but I let you look down the list for strong shape.) The idea was to take the diagram categories with a good homotopy structure and paste them together. My method was motivated by some of Vogt's work (his theorem on categories of homotopy coherent categories). Perhaps the methods would now be considered a bit 'crude' but they could be revisited using more modern viewpoints. – Tim Porter Mar 5 '12 at 8:11 @Marc Batanin also did some work on strong shape and infinity category structures and, if you have the time, that would be well worth looking at, again with a view to pushing through a modern view of it. His results might be compared with Lurie's approach. – Tim Porter Mar 5 '12 at 8:15 add comment Not the answer you're looking for? Browse other questions tagged at.algebraic-topology ct.category-theory model-categories or ask your own question.
{"url":"http://mathoverflow.net/questions/90101/when-is-the-category-of-pro-objects-a-homotopy-category?sort=oldest","timestamp":"2014-04-16T22:03:28Z","content_type":null,"content_length":"65671","record_id":"<urn:uuid:6bb07606-572f-4ffa-a995-757e16dc4e99>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
Audio Watermarking Test Statistic I To determine the quality of the watermarking process, you can test the outcome. One metric is the difference between the population means of A and B rather than their values per se. The standardized test statistic is given by the equation in Figure 2(a). Figure 2: Test statistic I. Both sample means can be assumed normally distributed under the Central Limit Theorem, because of the large amount (M>>30) of Fourier coefficients. Therefore, ' is normally distributed and you can make the estimations [][],[][]and AUB of Fourier coefficients, leads to the approximations [] [] Therefore, you can formulate the mutually exclusive propositions: • H[0]: (z)=N(0,1) • H[1]: [m](z)=N([m],1), [m]k/k^2)x1/ with N([m ]can be derived with the approximation in Figure 2(b). With these equations, the threshold can be calculated as T=z[1]-P[I]. According to the symmetry of the normal distributions (z) and [m](z), k can be calculated from [m] T=z[1] P[II ]and the approximation for [m], see above. This is a reasonable assumption because [] and [] are both estimators calculated from the mixed sets A and B; see Figure 2(c). Therefore, you can define the probabilities of correct detection (1-P[II]), rejection (1-P[I]), measure the variation coefficient in Figure 2(d) of the mean of set AUB, and calculate the necessary embedding factor k from the equation in Figure 2(c). But a factor k, ensuring the probabilities of correct detection or rejection according to Figure 2(c), may result in audible distortions. To satisfy both requirements, you can define the probabilities of correct detection (1 P[II]), rejection (1 P[II]), use the effective embedding factor k according to the psychoacoustic model, and calculate the number of Fourier coefficients from Figure 2(c). Test Statistic II According to the equation in Figure 2(a), you can extend the test statistic by normalizing the random variable z from equation >Figure 2(a) to the mean (/2 of the whole set AUB; see Figure 3. Therefore, you expect z to be a random variable with mean 0 in the unmarked and an approximated mean of [m]k in the marked case; again, see Figure 3. Figure 3: Test statistic II. One disadvantage of this kind of metric is the complicated PDF is not suitable for the calculation of the threshold and embedding parameter. I measured [m] for different k and verified the linear relation of equation in Figure 3. Properties of the Algorithm The general approach in quality evaluation is to compare the original signal with the watermarked signal for different k during subjective listener tests. To ensure the quality, the power of the watermarked noise is shifted just below the masking threshold. This results in an average embedding factor that is equivalent to k=0.15 derived without using the psychoacoustic model. Shifting the watermark noise to higher power levels decreases the quality and increases the average embedding factor. The embedding factor k=0.10 was used for robustness tests.
{"url":"http://www.drdobbs.com/security/audio-watermarking/184404839?pgno=3","timestamp":"2014-04-16T10:46:58Z","content_type":null,"content_length":"98386","record_id":"<urn:uuid:32c4878b-1acd-40a5-a361-6407e30b7323>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Recall the basic definition for a derivative This definition gives the instantaneous slope of a line tangent to a curve. We also write the derivative as f’(x) [f prime of x]. From the above equation with just a little algebra you can derive the general formula for polynomials There are a few basic rules that will allow you to apply this to a large number of functions. The product rule states that The quotient rule states that If you have a hard time remembering the order of f(x) and g(x) in the quotient rule you can also treat f(x)/g(x) as the product of f(x) and 1/g(x). This has the form Which is completely equivalent to the quotient rule. Note that we used the polynomial rule here since 1/g(x) = g(x)-1. In general, if you are given a function in the denominator just write as a negative exponent first. This will make taking the derivative much easier. where we treated the function 1/f(x) as f(x)-1 and therefore n = -1 and nf(x)n-1= (-1)f(x)-2. In this example, we have used the chain rule. The chain rule applies when one function is “buried” inside another, e.g. g(f(x)). First, take the derivative with respect to g(x) treating the whole of f(x) as the variable, then take the derivative with respect to f(x). In this example, we take the derivative of a Gaussian function with respect to x. Note that here g(f(x)) = ef(x) and f(x) = -ax2. The derivative of an exponential is the exponential itself times the derivative of the exponent. Functions of more than one variable If we have a function of more than one variable, f(x,y) we can take the derivative with respect to either one. These are called partial derivatives with respect to x or y (or whatever the variable The partial derivative with respect to x is The partial derivative with respect to y is The total derivative is We say the total derivative is an exact differential is the second cross derivatives are equal If these cross derivatives are not equal the total derivative is not an exact differential. In physical chemistry this is important because State functions are exact differentials Path functions are inexact differentials A state function has the same magnitude regardless of the path taken. The integral has the same magnitude regardless of the path taken if the total derivative of x is exact. If the total derivative is not exact then For example, in thermodynamics we show that the internal energy is a state function, but the work and the heat are path functions.
{"url":"http://chemwiki.ucdavis.edu/Under_Construction/VV%3A_Mathematical_Concepts/Differentiation","timestamp":"2014-04-17T07:03:00Z","content_type":null,"content_length":"43560","record_id":"<urn:uuid:d8ecb223-3be0-4440-ae7a-e62d7f294dcc>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Collatz Sequence Paths To generate the values of the Collatz sequence, start with a number ; if it is even, halve it, but if it is odd, triple it and add 1. Repeat the process. For example, if the sequence is 3, 10, 5, 16, 8, 4, 2 and finally 1. The Collatz probem (or problem) states that for any starting number the sequence eventually reaches 1. This is true at least for the many numbers that have been tried. Despite work since the 1930s, no proof for the general case is known. This Demonstration follows the path to the value 1 for the first 100 integers.
{"url":"http://demonstrations.wolfram.com/CollatzSequencePaths/","timestamp":"2014-04-21T15:06:35Z","content_type":null,"content_length":"41395","record_id":"<urn:uuid:36145bd8-31b5-4d6b-bbc7-392a2b4edea9>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Stuck finding the derivative of x^2 using difference quotient July 26th 2012, 03:23 AM #1 Jul 2012 Stuck finding the derivative of x^2 using difference quotient Hey id be greatful to anyone who can help me with this which i guess is easy but im too dumb to get... Whilst finding the derivative of x^2 (using the difference quotient) it is at this step that i get confused and need help on... f'(x)= (x+dx)^2-(x)^2/dx how do they get from the above equation to...... any explanation of this would be great, cheers Re: Stuck finding the derivative of x^2 using difference quotient Hey id be greatful to anyone who can help me with this which i guess is easy but im too dumb to get... Whilst finding the derivative of x^2 (using the difference quotient) it is at this step that i get confused and need help on... f'(x)= (x+dx)^2-(x)^2/dx how do they get from the above equation to...... any explanation of this would be great, cheers Expand the brackets. Re: Stuck finding the derivative of x^2 using difference quotient $(a+ b)^2= a^2+ 2ab+ b^2$ so that $(x+ dx)^2= x^2+ 2(x)(dx)+ (dx)^2$. Last edited by HallsofIvy; July 26th 2012 at 09:53 AM. July 26th 2012, 03:47 AM #2 July 26th 2012, 05:22 AM #3 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/calculus/201376-stuck-finding-derivative-x-2-using-difference-quotient.html","timestamp":"2014-04-16T17:56:59Z","content_type":null,"content_length":"39230","record_id":"<urn:uuid:6bb23bae-39f9-4257-9c86-b56849694e7e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00074-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Feasibility and (in)determinateness of the standard numbers Torkel Franzen torkel at sm.luth.se Thu Apr 9 06:09:00 EDT 1998 Vladimir Sazonov says: >It is this unqualified "in principle" which is unclear. (In >which "principle"?) The "principle" is rather like the "sake" in "for clarity's sake". The phrase "in principle" means "disregarding all questions of >As I have replied to another participant of FOM, each concrete >(i.e. feasible) formal proof in a concrete formal system is >rather clear (fixed, concrete, unambiguous). Maybe so, but a "representation of my beliefs and intuitions by a formal system" requires a formal system in my sense, not a feasible formal system (a formal system with attention restricted to feasible formal proofs). For example, the formal system must settle every equation of the form s+t=r, not only "feasible equations". >Yes. But I actually said somewhat different: that we usually >introduce a formal system to *formalize* some unclear intuition >(limits, continuity, infinity, feasibility, etc.) Yes, but any formal system that I introduce to formalize my intuition of "0,0+1,0+1+1, and so on" will itself be based on that intuition. >But we hardly need the full power of PA to understand what is a >formal language, a formal proof rule, how to use it, etc. Indeed not, but we do need the "full power" of "0,0+1,0+1+1, and so on". Torkel Franzen More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-April/001815.html","timestamp":"2014-04-16T19:28:31Z","content_type":null,"content_length":"4096","record_id":"<urn:uuid:31755710-93aa-46de-a984-464f76874b76>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
a Closed set in the Complex Field This is elementary but surely this set is closed | c – i | ≥ | c | with c being in ℂ I am trying to picture the set. Is it outside the disc centered at (0,1) with radius equal to modulus c (whatever that is) ? |c- i| is the distance from point c to i. |c| is the distance from c to 0. Saying that |c- i|= |c| (as pwsnafu suggested "find the boundary first") is the same as saying that those two distance are equal for all c. Geometrically, that is the perpendicular bisector of the segment from 0 to i, Im(c)= 1/2 or c= x+ (1/2)i for any real x. The set [itex]|c- i|\ge |c|[/itex] is set of points on ther side of that line closer to 0 than to i. Corrected thanks to oay.
{"url":"http://www.physicsforums.com/showthread.php?p=4256768","timestamp":"2014-04-17T21:37:08Z","content_type":null,"content_length":"34618","record_id":"<urn:uuid:46eaf4d9-f602-4a5b-a569-724e52effad4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Why inclusive disjunction? Timothy Y. Chow tchow at alum.mit.edu Fri Jan 12 09:47:42 EST 2007 Arguments about natural language, and about the choice of OR vs. XOR *in isolation*, are kind of fun but are doomed to be inconclusive, and I don't think they address the right question anyway. A better question is, given that we're interested in the AND operation, should we pair it with OR or with XOR? The answer is, we pair AND with OR if we're interested in the duality between them---de Morgan's laws; series vs. parallel circuits; the analogy with intersection/union, meet/join, for all/there exists, etc.; the fact that each is distributive over the other; and so on. We pair AND with XOR if we're interested in arithmetic/algebra/geometry over F_2, where AND is multiplication and XOR is addition. More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2007-January/011275.html","timestamp":"2014-04-19T14:54:30Z","content_type":null,"content_length":"3093","record_id":"<urn:uuid:497d96bf-16d4-4e93-856c-06d40e0e9f24>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Librarian Evaluation Survey Librarian Evaluation Survey 1. Course/Instructor Ratings Please answer the following questions with regard to the course and instructor. Completion of this survey is voluntary and confidential. You are free to leave some or all questions unanswered. Excellent Very Good Good Fair Poor Very Poor 1) This instruction as a whole was 2) The content of this instruction was 3) The instructor's effectiveness in teaching the subject matter was 4) Lesson organization was 5) Clarity of instructor's voice was 6) Explanations by instructor were 7) Instructor's use of examples and illustrations was 8) Quality of questions or problems raised by the instructor was 9) Student confidence in instructor's knowledge was 10) Instructor's enthusiasm was 11) Encouragement given students to express themselves was 12) Answers to student questions were 13) Amount you learned in the lesson was 14) Relevance and usefulness of lesson was Thank you for completing this survey.
{"url":"http://www.surveymonkey.com/s/B3FZWSG","timestamp":"2014-04-17T05:08:36Z","content_type":null,"content_length":"51323","record_id":"<urn:uuid:f790f136-b1d3-4ddf-ae48-2ccfd1adb611>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] Optimization Working only with a Specific Expression of the input Parameters Brandon Nuttall bnuttall@uky.... Fri Mar 2 07:50:53 CST 2007 I'm just an amateur, but it seems to me like the array data in myvar1 are likely integers. When you raise the data to a power of type float (i.e. 2.0) all the members of the array are automatically converted to real (float) types. Easiest and fastest thing I know to do would be: myvar1 = myvar1*1.0 Or, and probably preferred (assuming you are using the numpy array type and have imported it): myvar1 = numpy.array(myvar1,dtype=float) At 08:25 AM 3/2/2007, you wrote: >Dear All, >I was trying to fit some data using the leastsq package in >scipy.optimize. The function I would like to use to fit my data is: > where A1, mu1 and myvar1 are fitting parameters. >For some reason, I used to get an error message from scipy.optimize >telling me that I was not working with an array of floats. >I suppose that this is due to the fact that the optimizer also tries >solving for negative values of mu1 and myvar1, for which the log >function (x is always positive) does not exist. >In fact, if I use the fitting function: >Where mu1 and myvar1 appear squared, then the problem does not exist >any longer and the results are absolutely ok. >Can anyone enlighten me here and confirm this is what is really going on? >Kind Regards >SciPy-user mailing list Brandon C. Nuttall BNUTTALL@UKY.EDU Kentucky Geological Survey (859) 257-5500 University of Kentucky (859) 257-1147 (fax) 228 Mining & Mineral Resources Bldg http://www.uky.edu/KGS/home.htm Lexington, Kentucky 40506-0107 More information about the SciPy-user mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2007-March/011195.html","timestamp":"2014-04-16T10:34:15Z","content_type":null,"content_length":"5265","record_id":"<urn:uuid:03aead99-faa0-4593-b673-77f3eec86a88>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
How to get to Caen from Paris? Unfollow Follow How to get to Caen from Paris? by akashg Apr 27, 2010 at 8:32 AM Dear friends i am reaching Paris at around 12:30 pm by flight.please suggest a reasonable and cheapest transport with specific details on how to get to Caen from Paris,with luggage.I need this info soon as I will be coming on May 4.i did find that there are trains from Paris St. Lazerre to Caen.how much does it cost?Can I get a booking on the spot.Also what is the local means of transport in Caen? Quote & Answer Re: How to get to Caen from Paris? The distance on the rails is 239 km, the price on a regular train should be about 30 Euro. You should go indeed to Paris St Lazare station. Train #3343 departs 13:45, arrives 15:51 Train #3311 departs 15:10, arrives 16:57 Train #3349 departs 16:45, arrives 18:51 Train #3315 departs 17:10, arrives 18:57 Train #3351 departs 17:45, arrives 19:51 Train #3317 departs 18:10, arrives 19:57 Train #3353 departs 18:45, arrives 20:51 Train #3361 departs 19:59, arrives 22:17 Train #3323 departs 20:45, arrives 22:45 These trains are "local" (but very fast) trains. Do not use other options (TGV) because it takes longer, costs more and you will need to change trains. Be the first to rate this answer! Re: How to get to Caen from Paris? (In French, but easy to work out) to find trains times, details and fares. Paris > Caen takes around 2 hours (varies according to departure, lots of them during the day departing from Paris St Lazare), one-way fare costs 31.20 euro. Yes, you can buy a ticket at the station. In Caen, there are buses and taxis. Which is most appropriate for you depends on where you are going. Buses will, of course, be much cheaper than taxis. Be the first to rate this answer!
{"url":"http://forum.virtualtourist.com/Caen-132312-7-3516157/How-to-get-to-Caen-from-Paris.html","timestamp":"2014-04-20T03:37:38Z","content_type":null,"content_length":"65681","record_id":"<urn:uuid:ddff5d4c-1e90-4d47-a21a-91f2aee998f9>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
2. Geometric Primitive Types OpenGL requires you to specify the Geometric Primitive Type of the Vertices you wish to draw. This is usually expected when you begin drawing in either Immediate Mode (GL.Begin), GL.DrawArrays or Fig. 1: In the above graphic all valid Geometric Primitive Types are shown, their winding is Clockwise (irrelevant for Points and Lines). This is important, because drawing a set of Vertices as Triangles, which are internally set up to be used with Quads, will result only in garbage being displayed. Examine Figure 1, you will see that v3 in a Quad is used to finish the shape, while Triangles uses v3 to start the next shape. The next drawn Triangle will be v3, v4, v5 which isn't something that belongs to any surface, if the Vertices were originally intended to be drawn as Quads. However Points and Lines are an Exception here. You can draw every other Geometric Primitive Type as Points, in order to visualize the Vertices of the Object. Some more possibilities are: • QuadStrip, TriangleStrip and LineStrip can be interchanged, if the source data isn't a LineStrip. • Quads can be drawn as Lines with the restriction that there are no lines between v1, v2 and v3, v0. • Polygon can be drawn as LineLoop • TriangleFan can be drawn as Polygon or LineLoop The smallest common denominator for all filled surfaces (i.e. no Points or Lines) is the Triangle. This Geometric Primitive Type has the special attribute of always being planar and is currently the best way to describe a 3D Object to GPU hardware. While OpenGL allows to draw Quads or Polygons aswell, it is quite easy to run into lighting problems if the surface is not perfectly planar. Internally, OpenGL breaks Quads and Polygons into Triangles, in order to rasterize them. 1. Points Specifies 1 Point per Vertex v, thus this is usually only used with GL.DrawArrays(). n Points = Vertex * (1n); 2. Lines Two Vertices form a Line. n Lines = Vertex * (2n); 3. LineStrip The first Vertex issued begins the LineStrip, every consecutive issued Vertex marks a joint in the Line. n Line Segments in the Strip = Vertex * (1+1n) 4. LineLoop Same as LineStrip, but the very first and last issued Vertex are automatically connected by an extra Line segment. n Line Segments in the Loop = Vertex * (1n); 5. Polygon Note that the first and the last Vertex will be connected automatically, just like LineLoop. Polygon with n Edges = Vertex * (1n); Note: This primitive type should really be avoided whenever possible, basically the Polygon will be split to Triangles in the end anyways. Like Quads, polygons must be planar or be displayed incorrectly. Another Problem is that there is only 1 single Polygon in a begin-end block, which leads to multiple draw calls when drawing a mesh, or using the Extensions GL.MultiDrawElements or 6. Quads Quads are especially useful to work in 2D with bitmap Images, since those are typically rectangular aswell. Care has to be taken that the surface is planar, otherwise the split into Triangles will become visible. n Quads = Vertex * (4n); 7. QuadStrip Like the Triangle-strip, the QuadStrip is a more compact representation of a sequence of connected Quads. n Quads in Quadstrip = Vertex * (2+2n); 8. Triangles This way to represent a mesh offers the most control over how the Triangles are sorted, a Triangle always consists of 3 Vertex. n Triangles = Vertex * (3n); Note: It might look like an inefficient brute force approach at first, but it has it's advantages over TriangleStrip. Most of all, since you are not required to supply Triangles in sequenced strips, it is possible to arrange Triangles in a way that makes good use of the Vertex Caches. If the Triangle you currently want to draw shares an edge with one of the Triangles that have been recently drawn, you get 2 Vertices, that are stored in the Vertex Cache, almost for free. This is basically the same what stripification does, but you are not restricted to a certain Direction and forced to insert degenerated Triangles. 9. TriangleStrip The idea behind this way of drawing is that if you want to represent a solid and closed Object, most neighbour Triangles will share 2 Vertices (an edge). You start by defining the initial Triangle (3 Vertices) and after that every new Triangle will only require a single new Vertex for a new Triangle. n Triangles in Strip = Vertex * (2+1n); Note: While this primitive type is very useful for storing huge meshes (2+1n Vertices per strip as opposed to 3n for BeginMode.Triangles), the big disadvantage of TriangleStrip is that there is no command to tell OpenGL that you wish to start a new strip while inside the glBegin/glEnd block. Ofcourse you can glEnd(); and start a new strip, but that costs API calls. A workaround to avoid exiting the begin/end block is to create 2 or more degenerate Triangles (you can imagine them as Lines) at the end of a strip and then start the next one, but this also comes at the cost of processing Triangles that will inevitably be culled and aren't visible. Especially when optimizing an Object to be in a Vertex Cache friendly layout, it is essential to start new strips in order to reuse Vertices from previous draws. 10. TriangleFan A fan is defined by a center Vertex, which will be reused for all Triangles in the Fan, followed by border Vertices. It is very useful to represent convex n-gons consisting of more than 4 vertices and disc shapes, like the caps of a cylinder. When looking at the graphic, Triangle- and Quad-strips might look quite appealing due to their low memory usage. They are beneficial for certain tasks, but Triangles are the best primitive type to represent an arbitrary mesh, because it's not restricting locality and allows further optimizations. It's just not realistic that you can have all your 3D Objects in Quads and OpenGL will split them internally into Triangles anyway. 3 ushort per Triangle isn't much memory, and still allows to index 64k unique Vertex in a mesh, the number of Triangles can be much higher. Don't hardwire BeginMode.Triangles into your programs though, for example Quads are very commonly used in orthographic drawing of UI Elements such as Buttons, Text or Sprites. Should TriangleStrip get an core/ARB command to start a new strip within the begin/end block (only nVidia driver has such an Extension to restart the primitive) this might change, but currently the smaller data structure of the strip does not make up for the performance gains a Triangle List gets from Vertex Cache optimization. Ofcourse you can experiment with the GL.MultiDraw Extension mentioned above, but using it will break using other Extensions such as DirectX 10 instancing.
{"url":"http://www.opentk.com/doc/chapter/2/opengl/geometry/primitives","timestamp":"2014-04-16T07:35:07Z","content_type":null,"content_length":"17104","record_id":"<urn:uuid:b43bc9af-a388-4f49-947d-ccb7a380af85>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics please check Posted by Mary on Wednesday, March 28, 2007 at 1:25pm. A bowling ball encounters a 0.76 m vertical rise on the way back to the ball rack, as the drawing illustrates. Ignore frictional losses and assume that the mass of the ball is distributed uniformly. If the translational speed of the ball is 3.60 m/s at the bottom of the rise, find the translational speed at the top. KE = 1/2(2/5)MR^2(V/R)^2= 1/5MV^2 KE = 1/5mv^2 - mgh the mass as stated in the question is uniformly so it can be deleted from the above formula. KE = 1/5v^2 - gh KE = 1/5(3.60)^2 - (9.81 x 0.76) KE = 2.592 - 7.4566 KE(final)= -4.8636 Am I right so far? If I am how do I calculate the final speed? You have only treated the rotational part of the kinetic energy. You also must include the (1/2)MV^2 "translational" part. The total KE is (7/10)M V^2 You also have a problem with signs. The kinetic energy cannot be negative. The sum of potential and kinetic energies is constant. So the decrease in total KE as it goes to the top of the ramp equals the increase in potential energy, MgH. (7/10)[Vo^2 - V1^2) = g H Vo is the initial velocity. Solve for V1 I would like to say I appreciate your help with my homework! I did the problem according to the formula you provided and the answer is incorrect. Can you please tell me where I went wrong. I may have misinterpret the formula. This is what I came up with: 7/10 (V0^2 - V1^2)= gh 7/10((3.6)^2 - V1^2)= gh 7/10 (12.96 - V1^2)= 9.81 X 0.76 9.072 - V1^2= 7.4556 V1^2= 9.072 - 7.4556 V1^2= 1.6164 V1= square root 1.6164 V1= 1.2714 DrWLS is not on right now but one error I see you you didn't multiply V1 by 0.7. I've marked it below. 7/10 (V0^2 - V1^2)= gh 7/10((3.6)^2 - V1^2)= gh 7/10 (12.96 - V1^2)= 9.81 X 0.76 9.072 - V1^2= 7.4556 From the previous step, the V1^2 must be 0.7V1^2 V1^2= 9.072 - 7.4556 V1^2= 1.6164 V1= square root 1.6164 V1= 1.2714 Thank you so much!!!!! Related Questions Physics - A bowling ball encounters a 0.76 m vertical rise on the way back to ... Physics - A bowling ball encounters a 0.76m vertical rise on the way back to ... physics - A bowling ball encounters a 0.76m vertical rise on the way back to ... Physics - Equate the increase in potential energy at the higher elevation, M g H... physics - After you pick up a spare, your bowling ball rolls without slipping ... Physics - After you pick up a spare, your bowling ball rolls without slipping ... Physics - You decide to make a simple pendulum by attaching a bowling ball to a ... Physics for engineers - A bowling ball weighing 71.3 is attached to the ceiling ... physics - A bowling ball weighing 71.8 N is attached to the ceiling by a rope of... physics - In order to convert a tough split in bowling, it is necessary to ...
{"url":"http://www.jiskha.com/display.cgi?id=1175102744","timestamp":"2014-04-16T19:50:53Z","content_type":null,"content_length":"10165","record_id":"<urn:uuid:188776ab-bc86-4dd3-bd59-cfbe6486aa48>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Kocay's Lemma, Whitney's Theorem, and some Polynomial Invariant Reconstruction Problems Given a graph $G$, an incidence matrix ${\cal N}(G)$ is defined on the set of distinct isomorphism types of induced subgraphs of $G$. It is proved that Ulam's conjecture is true if and only if the $ {\cal N}$-matrix is a complete graph invariant. Several invariants of a graph are then shown to be reconstructible from its ${\cal N}$-matrix. The invariants include the characteristic polynomial, the rank polynomial, the number of spanning trees and the number of hamiltonian cycles in a graph. These results are stronger than the original results of Tutte in the sense that actual subgraphs are not used. It is also proved that the characteristic polynomial of a graph with minimum degree 1 can be computed from the characteristic polynomials of all its induced proper subgraphs. The ideas in Kocay's lemma play a crucial role in most proofs. Kocay's lemma is used to prove Whitney's subgraph expansion theorem in a simple manner. The reconstructibility of the characteristic polynomial is then demonstrated as a direct consequence of Whitney's theorem as formulated here. Full Text:
{"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v12i1r63/0","timestamp":"2014-04-17T15:30:06Z","content_type":null,"content_length":"15788","record_id":"<urn:uuid:0436e3df-5d2c-415f-a2e3-51d6df81fd29>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
Deformations and moduli of semistable sheaves in mixed characteristic up vote 4 down vote favorite Let $X$ be a projective scheme over an algebraically closed field $k$. There is the coarse moduli space $M_X$ parametrizing semistable sheaves on $X$ with fixed reduced Hilbert polynomial $p$. Now, the deformation theory of $M_X$ at a stable sheaf $E$ is (in some sense) understood: Let $Def_E$ be the deformation functor, that is, it assigns to each local Artinian $k$-algebra $A$ with residue field $k$, the isomorphism classes of sheaves $\mathcal E$ on $X \otimes_k A$, flat over $A$ and with $\mathcal E \otimes_A k \cong E$. Then, $Def_E$ is pro-represented by the completion of the local ring $\mathcal O_{M_X, E}$. Now, the moduli space $M_X$ also exists in mixed characteristic, e.g. for $X$ a smooth projective surface over a discrete valuation ring $R$ of characteristic $(0,p > 0)$. If $E$ is a stable sheaf on the special fiber $X_\kappa$ of $X$ ($\kappa$ - residue field of $R$), then I think the deformation functor to consider here should live on the category of Artinian rings $A$ with residue field $\ kappa$, which are not necessarily $\kappa$-algebras. For example, if one wants to consider $Def_E(R_n)$ where $R_n = R/\mathfrak m_R^n$ is the truncated ring $R$ for some $n \geq 2$. Is there a similar description of $Def_E$ in mixed characteristic (or in this particular situation)? My motivation to ask this (rather vague, sorry for that) question comes from the question: when two semistable families (or deformations) $\mathcal E, \mathcal E'$ on $X \otimes R_n$ define the same point in $M_X(R_n)$? In the situation, where $X$ is defined over a field, the answer (if we replace $R_n$ for example with $k[t]/t^n$) is given by the above description of $Def_E$: they are just ag.algebraic-geometry reference-request moduli-spaces add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged ag.algebraic-geometry reference-request moduli-spaces or ask your own question.
{"url":"http://mathoverflow.net/questions/144973/deformations-and-moduli-of-semistable-sheaves-in-mixed-characteristic","timestamp":"2014-04-20T11:13:12Z","content_type":null,"content_length":"47572","record_id":"<urn:uuid:45fc6ca1-79ae-49a0-b201-29f4e53b46e4>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Industrial Problems at ESGI 35 Problem 1: Danfoss Modeling thermostatic radiator valves. The problem is to model the flow through a typical Danfoss thermostatic radiator valve. Danfoss is able to employ Computational Fluid Dynamics (CFD) in calculations of the capacity of valves, but an experienced engineer can often by rules of thumb "guess" the capacity with a precision similar to the one achieved by the expensive and time consuming CFD calculations. So CFD is only used in case of entirely new designs or where a very detailed knowledge of the flow is required. Even though rules of thumb are useful for those, who have developed them, Danfoss wants an objective and general method, which can be used to calculate the capacity of valves. One proposed solution is to identify the significant parts of the interior geometry, quantify the influence and model the valves as a sum of resistors in series. The model should be able to predict the capacity with a precision of 10% in the interesting range of capacities. Information about other industrial problems Return to front page June 4 1999, Marit Hvalsøe Schou, e-mail: marit@mip.sdu.dk
{"url":"http://miis.maths.ox.ac.uk/past/ESGI/35/problem1_Danfoss.html","timestamp":"2014-04-19T12:04:17Z","content_type":null,"content_length":"2188","record_id":"<urn:uuid:b2057ca6-26d1-4f52-8556-d3f48ac4476c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Waverley Math Tutor Hi! I am currently a graduate student in an MD/PhD program, with an undergraduate education in Computer Science. I worked as a software engineer before switching careers and going to graduate 17 Subjects: including algebra 1, geometry, prealgebra, precalculus ...I have also tested very well on vocabulary on standardized testing. Another thing that helps my vocabulary is that I have studied both Latin and ancient Greek which form the roots for a significant amount of vocabulary. Due to having an excellent teacher for grammar early in my education, I hav... 28 Subjects: including algebra 1, algebra 2, ACT Math, SAT math ...I also have experience in Pascal from my high school years. I have an undergraduate degree from Harvard University in Computer Science. Computer Science is the art/science of creating computer 19 Subjects: including discrete math, algebra 1, algebra 2, calculus I am a recent graduate of MIT with 6 years of experience working with a wide range of students in grades 8-12 to improve SAT scores in the Reading, Math, and Writing sections. I have successfully tutored students at all skill levels and have achieved measurable success in all cases. I am a patient... 18 Subjects: including calculus, precalculus, SAT math, trigonometry ...I live in Cambridge and I'm available evenings and weekends. If you want more information, do not hesitate to contact me.I've taken organic chemistry I and organic chemistry II at Stevens Institute of Technology successfully. In addition, I've taken general chem I and II as well as labs for both general chem and organic chem. 31 Subjects: including differential equations, linear algebra, chemical engineering, mechanical engineering
{"url":"http://www.purplemath.com/waverley_ma_math_tutors.php","timestamp":"2014-04-21T12:33:36Z","content_type":null,"content_length":"23666","record_id":"<urn:uuid:153ba686-7a53-4556-90ad-67ced4cebef0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
The Discriminant of Quadratic Equations Date: 08/06/98 at 10:18:07 From: Quincy Subject: Quadratic equation Will you explain why b^2 - 4ac = a triangle? Date: 08/06/98 at 11:41:58 From: Doctor Santu Subject: Re: Quadratic equation Hello, Quincy! This is all about how to solve equations such as: 40 x^2 + 5 x + 63 = 0 right? People learned how to solve lots and lots of equations like this and then, a few hundred years ago, somebody asked what do the 40, the 5, and the 63 have to do with it? Can we find rules about what the answer is for every possible combination of the three "coefficients"? [In our example, 40 is the x^2-coefficient, 5 is the x-coefficient, and 63 is the constant term.] So one day, they set out to solve the a x^2 + b x + c = 0. But the answers they got depended (obviously) on a, b and c, the coefficients. There were several different ways the problem could turn out, and, most interestingly, exactly which way it went was decided by this famous b^2 - 4ac. So they wanted to call that formula by a name, and the name that was decided was "discriminant," because it discriminated between the various solutions. But, as you know, in math we like to call things by single letters, like a, b, f, x and so on. So we use the Greek letter Delta as its symbol. Delta looks exactly like a triangle, but it is a Greek letter, not a geometric shape (just as the letter o looks like a circle to people who don't know what it is). The expression ax^2 + bx + c is an interesting one. With some careful math, it can be split into two parts: a * [varying part that's never negative] - [fixed part] The fixed part is Delta/[4a], so you see the connection! If Delta is negative, the whole combination has absolutely no chance of being zero, so the equation ax^2 + bx + c = 0 cannot be solved. To see this, put the whole thing over a common denominator, and you get: 4a^2 * [varying part, never negative] - Delta If Delta is negative, -Delta will be positive, and 4a^2 is certainly positive, so the numerator is going to be at least as big as -Delta, so it can never be zero, regardless of the denominator. If Delta is positive, there will be two different values for x at which the whole expression becomes zero. This is the so-called generic case. To see this, I have to tell you that the "varying part" is just (x + b/(2a))^2, which takes any desired positive value exactly twice. If Delta is zero, the only value of x that will make the expression zero is x = -b/(2a), so you have one solution. A much nicer way of understanding the solution is to think of it in terms of graphs. You must know what the graph of the expression: ax^2 + bx + c looks like, in which case the three cases become obvious. Look in your textbook. There should be diagrams showing you the situation. There will be a u-shaped curve, called a parabola. There will be a horizontal line that's the x-axis. Each point on the x-axis stands for a possible x-value. The corresponding value of ax^2 + bx + c is just how high the point on the parabola is from the x-axis. When the curve actually crosses the x-axis, the height is zero, of course, so the value is zero. The question then becomes does this graph cross the x-axis? At how many points? The a, b, and c determine the position of the parabola. The "a" determines whether the parabola is arch-shaped (a is negative) or u-shaped (a is positive). Delta determines whether the parabola crosses the x-axis (Delta is positive), just touches the x-axis (Delta is zero) or misses the x-axis completely (Delta is negative). Thanks for writing, and write back if you want more information - Doctor Santu, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/53137.html","timestamp":"2014-04-16T18:58:58Z","content_type":null,"content_length":"9017","record_id":"<urn:uuid:59f4a423-34f8-4da6-89d0-c8b7645845c6>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: I need help in writing a java code for this. A sales person fixed salary is $50,000. A sales person will also receive a commission as an incentive. commission is a percentage of the sales person annual sale. The current commission is 10% of the total sales The total annual compensation is the fixed salary plus the commission earned. Need help in java code for this please. • one year ago • one year ago Best Response You've already chosen the best response. TAC= FS+CE What specifically do you need to write? Best Response You've already chosen the best response. you could create a small method that takes in Total sales earned Public int totalSalary( int totalSale) { int percent= .10 * totalSale; intTotalSalary= fixedSalary+percent; return TotalSalary; } Best Response You've already chosen the best response. The code using netbean Best Response You've already chosen the best response. Would it be perfect if I just do a small method like u have Best Response You've already chosen the best response. I would type this in under the public void section correct. would I have to figure it out at the exit part Best Response You've already chosen the best response. i dont know what netbean is, sorry Best Response You've already chosen the best response. it is a java program in helping to write code instead of using command Best Response You've already chosen the best response. it through java Oracle Best Response You've already chosen the best response. is this all I would need for this code liliy Best Response You've already chosen the best response. ya. i think so. Best Response You've already chosen the best response. okay I thought I would have to figure out the total for the exit portion Best Response You've already chosen the best response. You need to write a method for the salary of the salesman, which is dependent on the total sales. So you would write a method that takes an int as argument for total sales, and returns the total salary as another int (Since we can probably assume the commission gets rounded to the nearest dollar): public int calculateTAC(int totalSales) { int ce = totalSales/10; int tac = 50000 + ce; return tac; } So basically what liliy already said, except his piece of code was missing the method definition around it Best Response You've already chosen the best response. Nevermind my bad, I didn't see the public int on the first line :/ Best Response You've already chosen the best response. this is all I would need Best Response You've already chosen the best response. so just enter this liliy Best Response You've already chosen the best response. Thank you Best Response You've already chosen the best response. is this what I would need to put in meepi Best Response You've already chosen the best response. Best Response You've already chosen the best response. There is more that you need, but you should know if your program outputs correctly or not ;). Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/51227829e4b03d9dd0c5a218","timestamp":"2014-04-21T02:21:29Z","content_type":null,"content_length":"70945","record_id":"<urn:uuid:47149fb2-9d54-4b36-ad8e-d6fcd42f2138>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US7786991 - Applications of interval arithmetic for reduction of number of computations in ray tracing problems This is a Continuation application of application Ser. No. 11/024,527, filed Dec. 28, 2004, now U.S. Pat. No. 7,348,975 entitled “Applications of Interval Arithmetic for Reduction of Number of Computations in Ray Tracing Problems”, (Publication No. US 2006-0139340 A1). Implementations of the claimed invention generally may relate to ray tracing and, more particularly, to interval arithmetic for ray tracing. Ray tracing is a well know method used in modeling of a variety of physical phenomena related to wave propagation in various media. For example it is used for computing an illumination solution in photorealistic computer graphics, for complex environment channel modeling in wireless communication, aureal rendering in advanced audio applications, etc. A ray is a half line of infinite length originating at a point in space described by a position vector which travels from said point along a direction vector. Ray tracing is used in computer graphics to determine visibility by directing one or more rays from a vantage point described by the ray's position vector along a line of sight described by the ray's direction vector. To determine the nearest visible surface along that line of sight requires that the ray be effectively tested for intersection against all the geometry within the virtual scene and retain the nearest intersection. When working with real values, data is often approximated by floating-point (FP) numbers with limited precision. FP representations are not uniform through the number space, and usually a desired real value (i.e. ⅓) is approximated by a value that is less than or greater than the desired value. The error introduced is often asymmetrical—the difference between the exact value and the closest lower FP approximation may be much greater or less than the difference to the closest higher FP approximation. Such numerical errors may be propagated and accumulate though all the computations, sometimes creating serious problems. One way to handle such numerical inaccuracies is to use intervals instead of FP approximations. In this case, any real number is represented by 2 FP values: one is less than the real one, and another is greater than the real one. The bound values are preserved throughout all computations, yielding an interval, which covers the exact solution. Usually, applications using interval arithmetic are limited to certain classes of workloads (such as quality control, economics or quantum mechanics) where the additional costs of such interval computations significantly outweigh the implications of dealing with inexact FP numbers for any final values. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate one or more implementations consistent with the principles of the invention and, together with the description, explain such implementations. The drawings are not necessarily to scale, the emphasis instead being placed upon illustrating the principles of the invention. In the drawings FIG. 1 illustrates an example of multiple rays traced through a cell from a common origin, executed during traversal of a binary tree. FIG. 2 illustrates an example of interval implementation of traversing multiple rays through a binary tree. FIG. 3 is a flow chart illustrating a process of traversing multiple rays through a binary tree using an interval technique. The following detailed description refers to the accompanying drawings. The same reference numbers may be used in different drawings to identify the same or similar elements. In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular structures, architectures, interfaces, techniques, etc. in order to provide a thorough understanding of the various aspects of the claimed invention. However, it will be apparent to those skilled in the art having the benefit of the present disclosure that the various aspects of the invention claimed may be practiced in other examples that depart from these specific details. In certain instances, descriptions of well known devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail. Embodiments provide for ray tracing traversal that relies on selected geometrical properties of the application to reduce the number of floating point (or other data type operations such as integer, fixed point) operations required during each traversal step. The interval traversal algorithm does not depend on the number of rays in the group. Multi-level traversal schemes may be implemented, starting with a large number of rays in a group and then reducing it as needed to maintain group coherency. Additional rays may be generated during traversal to improve anti-aliasing properties of the resulting image in areas of high geometrical complexity. The interval traversal algorithm groups parallel geometrical queries, extracts selected common geometrical characteristics pertinent for the whole group, and then executes a query using only these characteristics (and not the whole group). Ray tracing is based on massively parallel geometrical queries, executed against some spatially ordered geometrical database. The interval traversal algorithm may be extended to cover other types of applications, where it may be possible to find and trace certain group properties against a specialized database. One skilled in the art will recognize that embodiments of the invention are not limited to floating point implementation. Rather, the embodiments of the invention may be implemented using various data types, including but not limited to integer, fixed point and so forth. FIG. 1 illustrates an example 100 of multiple rays 102 traced through a cell 104 from a common origin 106 for traversal of a binary tree. One cell 104 is split into two subspaces including a nearest cell C0 108 and farthest cell C1 110 by split plane P0. Rays 102 may be shot through pixels on the screen into a database representing all of the objects in a scene. These objects (suitably sub-divided) and data may represent empty space, and may be stored in a hierarchical spatial partitioning structure. Shooting a ray 102 involves tracing the path the ray 102 may take through this structure. Opportunities for parallelism exist but are limited, as each ray may take a different path through the database, and since the data structure is hierarchical, there is a sequential dependency as the ray goes from one level to the next. The database may represent the distribution of objects and empty space as a collection of axis aligned spatial intervals. A collection of rays may be directly tested against any arbitrary level of the database hierarchy (i.e. not necessary starting at the top). The bundles of rays may be subdivided proceeding down the structure. This results in improved numerical fidelity and simplifies the process of tracing rays. In particular, the number of operations required per ray is reduced, resulting in an improvement in overall application performance. Furthermore, hardware may be designed to directly implement such interval arithmetic, allowing additional performance improvement. Shooting of rays is not particular to graphics, similar technology may also used to track the propagation of waves of various kinds, calculating radar cross sections for military purposes etc. In a ray tracing environment, it may be required to shoot lots of rays. One way to accomplish this is to determine the intersection of all rays against all of the polygons that define all of the geometrical objects in the scene. Another way to accomplish this is to partition all of these polygons into an axis aligned partitioning structure. One implementation of this is to split the entire scene up into a uniform grid of cubes, while replicating polygons that straddle the cube boundaries. A ray may be shot and the cubes the ray passes through predicted. The ray is only tested against the contents of each of these cubes, ignoring the rest. Due to the relative efficiency of using such a representation versus testing every ray against every polygon, the term “acceleration structure” may be used to describe any such data structure designed to reduce the total number of ray-polygon intersection tests. The above uniform grid of cubes has the advantage that the trajectory of a ray through the cubes may be calculated easily, and the relevant data accessed directly. The detail in the scene may not be distributed evenly though. For example, a huge amount of polygons may end up in one cube, and very little detail in the others. Another acceleration structure construct is commonly referred to as a kd-tree. In this acceleration structure, some cost function may be used to recursively split the scene by axis-aligned planes. Initially, the scene may be split in two by such a plane, each half may then be split again along some other plane, and so forth. This results in a hierarchical organization of the structure. Each level of the acceleration structure may be recursively traversed to determine where the next level of the structure can be found. Cost functions are carefully chosen in the construction phase of these structures to achieve optimum performance while traversing these trees later when shooting the various rays needed for visualization. The leaf nodes of a kd-tree represent a small axis aligned cell wherein there is some number of polygons. At the next level up the tree, each node represents an axis aligned box which is completely filled by two of the leaf nodes (a “split-plane” splits the larger volume into the two leaf cells). At the next level, each node represents an axis aligned box completely filled by two of the lower level nodes using a similar split-plane and so on. The tree is not required to be balanced, that is any internal node may be split into leaf node and another internal node. At any given level, a ray may be intersected against the bounding box to determine whether: (1) the ray completely misses the box, (2) the ray hits the box and passes through the “left” sub-node—i.e. to the “left” of the split-plane, (3) the ray hits the box and passes through the “right” sub-box, or (4) the ray hits and passes through both of the sub-boxes. In the first case (1), the further processing of the lower level nodes is no longer necessary, as the ray “misses” the entire lower part of the tree. Embodiments of the invention are applicable to many acceleration structures, including those that use separation planes to determine which objects have to be tested for a particular ray. These acceleration structures include but are not limited to grids, bounding boxes and kd-trees. Referring to FIG. 1, For any given block of rays, the traversal algorithms determine if the rays 102: (1) pass through subspace 108 to the left of the split-plane 104; (2) pass through the subspace 110 to the right of the split-plane 104; or (3) passes through both sub-spaces 108 and 110. During full traversal of a binary tree, for each ray 102, the cell entry and exit points are known. These are the distances represented by oa, ob, oc, od, and oA, oB, oC, oD which are known from previous computations. The intersection points with the split-plane P[0 ]are calculated. They are represented as distances oα, oβ, oχ, and oδ. Entry and exit distances are compared with the plane intersection. For example, referring to FIG. 1, rays oa and ob will go only through the left cell 108, while rays oc and od go through both cells 108 and 110. The process is repeated for each subsequent cell that the rays pass through. If the algorithm requires ray traversal of both cells 108 and 110, then all information, pertinent to the farthest cell such as 110, is stored in a stack-like structure. It includes, in particular, the distances to the entry points oχ and oδ and the exit points oC and oD. The nearest cell 108 is recursively traversed first by executing all of the steps of the current process with entry points a, b, c, and d and exit points A, B, χ, and δ. Once all cells within the nearest one have been traversed, the farthest cell data 110 is retrieved from the stack and the whole process is repeated. If some cell contains primitive objects (such as triangles), the remaining rays which pass through this cell are tested against these objects. For example, ray/triangle intersection tests are In some cases, for each ray, a primitive object has been discovered such that the distance to it is less than the distance to the current cell. In this case, subsequent traversal steps are not necessary. If ray tracing is being used for rendering purposes, this refinement may be used if such a primitive object is opaque. FIG. 2 illustrates an example 200 of multiple rays 202 traced through a cell 204 from a common origin 206 for interval traversal of a binary tree. FIG. 2 shows the next traversal step after the cell is split. In particular, FIG. 1 shows the case where one cell is split into two subspaces C[0 ] 108 and C[1 ] 110 by split-plane P[0]. In FIG. 2, C[0 ] 208 is further split into C[00 ] 210 and the union of C[2 ] 212 and C[3 ] 214. No ray 202 intersects the nearer sub-space C[00 ] 210. The interval traversal algorithm is built upon the calculation and maintenance of one single interval for a group of rays, which includes minimum and maximum distances for all the rays in the bunch from a selected point (camera position) to a particular cell. Instead of representing individual rays as 3D vectors pointing in particular directions, a collection of rays may be represented as a single 3D vector of intervals pointing approximately in some particular direction. Typically, the more coherent these rays are, the tighter the intervals may be. For each coordinate x, y, and z, this interval may be defined as minimum and maximum coordinate value among all rays. Similarly, the individual cells of the acceleration structure may be represented as intervals in x, y and z. Cells at any level of a hierarchical acceleration structure may be represented as such an interval. Upon traversing deeper into the acceleration structure, the vector of intervals representing one group of rays may be sub-divided into multiple groups of rays for efficiency. Higher degrees of ray coherency are typically found deeper in the acceleration structure. FIG. 3 is a flow chart illustrating a process 300 of interval traversal of a binary tree. Although process 300 may be described with regard to FIG. 2 for ease of explanation, the claimed invention is not limited in this regard. Acts 302, 304, and 306 of the FIG. 3 are executed once per traversal step, while acts 308-316 are executed for each traversed cell. In act 302, a group of rays is generated and some common characteristics of the group of rays are computed. For those rays generated from a particular common point of origin such as camera position o through a screen with pixels p[xy], the following are computed for each coordinate axis: □ In act 304, the minimum and maximum distance values among all projections of direction vectors op[xy ]on any given axis are computed. By definition, for every ray in the group, the x, y, and z coordinates of op[xy ]vector will be inside an appropriate interval. At the beginning of the top cell traversal (act 304), the minimum and maximum distances oa[1 ]and oA[1 ]are determined. These may be designated as interval [oa[1], oA[1]]. This interval is maintained and potentially modified (narrowed) during the remaining traversal process. By definition, for any ray in the group, the distance to the nearest cell entry point is not less than oa[1 ]and the distance to the farthest cell exit point is less or equal to oA[1]. In act 306, inverse direction intervals are defined. In act 308, the minimum and maximum distances to the split plane od[min ]and od[max ]may be computed using inverse direction intervals defined in act 306. As shown in FIG. 2, sub-cells C[2 ] 212 and C[3 ] 214 are split by plane P[2]. It is determined whether both sub-cells C[2 ] 212 and C[3 ] 214 are traversed. In particular, this is determined by evaluating the following two conditions, which if satisfied, result in traversing only one sub-cell: □ In act 310, if the minimum distance to the cell (oa[1]) is more than maximum distance to the plane (oA[2]), the [oa[1], oA[1]] interval is modified and only the right sub-cell is traversed (act 312). □ In act 314, if the maximum distance to the cell (oA[1]) is less than minimum distance to the plane (oa[3]), the [oa1, oA1] interval is modified and only the left sub-cell is traversed (act If neither of these conditions are true, both sub-cells have to be traversed (act 318) and appropriate intervals have to be modified. As shown in FIG. 2, during C[2 ]traversal, the interval will be [oa[1], oA[2]]. For the cell C[3], it will be [oa[3], oA[1]]. One skilled in the art will recognize that different implementations of the interval traversal embodiments described herein are possible. For example, embodiments described can be extended to ray groups which do not have a common origin. Although process 300 may be implemented on modern vector or SIMD (Single Instruction Multiple Data) machines, the claimed invention is not limited in this Certainly, different implementations of the interval traversal algorithm are possible. One, provided above, is used only for presentation purposes, as well as particular cases featured on the supplied figures. It is also possible to extend the ideas, outlined here, to a more general case of ray bunches, which do not have common origin. The following observation helps to understand the differences between the full and the interval traversal algorithms. The full algorithm basically implements simultaneous bounding box clipping of a particular group of rays. For any given cell, reached in the acceleration structure, the entry and exit points for all rays are known. The interval algorithm shown in FIG. 3 represents a lazy distributed box clipping, yielding guaranteed minimum and maximum clipping distances for the whole group of rays. Embodiments of the invention may sharply reduce the number of floating point or other data type operations required during each traversal step. Unlike the full traversal algorithm, the interval traversal algorithm does not depend on the number of rays in the group. Multi-level traversal schemes may be implemented starting with a large number of rays in a group and then reducing it as needed to maintain group coherency. The interval traversal algorithm, if implemented or supported in hardware, may enable a sharp reduction of power, consumed by the device, as well as increasing overall performance. Ray tracing is based on massively parallel geometrical queries, executed against some spatially ordered geometrical database. The interval traversal algorithm consists of grouping such queries, extracting certain common geometrical characteristics, pertinent for the whole group, and then executing a query using only these characteristics (and not the whole group). As such, the interval traversal approach may be extended to cover other types of applications, where it may be possible to find and trace certain group properties against a specialized database. Although systems are illustrated as including discrete components, these components may be implemented in hardware, software/firmware, or some combination thereof. When implemented in hardware, some components of systems may be combined in a certain chip or device. Although several exemplary implementations have been discussed, the claimed invention should not be limited to those explicitly mentioned, but instead should encompass any device or interface including more than one processor capable of processing, transmitting, outputting, or storing information. Processes may be implemented, for example, in software that may be executed by processors or another portion of local system. The foregoing description of one or more implementations consistent with the principles of the invention provides illustration and description, but is not intended to be exhaustive or to limit the scope of the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various implementations of the No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Variations and modifications may be made to the above-described implementation(s) of the claimed invention without departing substantially from the spirit and principles of the invention. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
{"url":"http://www.google.co.uk/patents/US7786991","timestamp":"2014-04-21T12:48:32Z","content_type":null,"content_length":"77298","record_id":"<urn:uuid:016dd89a-b5c8-4a03-b890-3349b7989b9f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
MATLAB and Simulink resources for Arduino, LEGO, and Raspberry Pi Size refers to the number of nodes in a parse tree. Generally speaking, you can think of size as code length. This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
{"url":"http://www.mathworks.com/matlabcentral/cody/problems/943-mirror-matrix/solutions","timestamp":"2014-04-17T10:01:02Z","content_type":null,"content_length":"132681","record_id":"<urn:uuid:ea3fd536-1c6e-45ea-aafc-0fb16fb32ec7>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
Matlab: Data from Best Fit Lines into Legend Hey all, I am plotting some data using matlab and have a question. Basically, this is what I'm trying to do. I am calling a bunch of data from Excel, plotting this data in MATLAB (it gives me two lines), and then calculating the polyfit values for each line so I can find the equation y = mx +b for each one. I have no problem Doing all of this, but what I want to do next is display the polyfit data in the legend of the graph. I'm confused if this is possible with the legend function. So in other words where it says 'Best Fit of Reflection 1' and 'Best Fit of Reflection 2' I want to somehow place the data calculated by polyfit in the form y = mx +b. Any suggestions? Thanks in
{"url":"http://www.physicsforums.com/showthread.php?p=3813561","timestamp":"2014-04-20T01:00:46Z","content_type":null,"content_length":"31206","record_id":"<urn:uuid:c9044c98-0fa1-4d9c-a207-ecfafcefbf74>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00425-ip-10-147-4-33.ec2.internal.warc.gz"}