url
stringlengths
6
1.61k
fetch_time
int64
1,368,856,904B
1,726,893,854B
content_mime_type
stringclasses
3 values
warc_filename
stringlengths
108
138
warc_record_offset
int32
9.6k
1.74B
warc_record_length
int32
664
793k
text
stringlengths
45
1.04M
token_count
int32
22
711k
char_count
int32
45
1.04M
metadata
stringlengths
439
443
score
float64
2.52
5.09
int_score
int64
3
5
crawl
stringclasses
93 values
snapshot_type
stringclasses
2 values
language
stringclasses
1 value
language_score
float64
0.06
1
https://www.scribd.com/document/12404294/MPHG2009-Chapter-5-Light
1,500,559,134,000,000,000
text/html
crawl-data/CC-MAIN-2017-30/segments/1500549423183.57/warc/CC-MAIN-20170720121902-20170720141902-00715.warc.gz
838,757,218
35,722
# JPN Pahang Physics Module Form 4 Chapter 5 Light CHAPTER 5: LIGHT In each of the following sentences, fill in the bracket the appropriate word or words given below. solid, liquid, gas, vacuum, electromagnetic wave, energy 1. Light is a form of ( ). 2. It travels in the form of ( ) 3. In can travel through ( ) 4. It travels fastest in the medium of ( ) 5. Light of different colours travels at the same speed in the medium of ( Light allows us to see objects. Light can be reflected or refracted. 5.1 UNDERSTANDING REFLECTION OF LIGHT Plane mirror and reflection: In the boxes provided for the diagram below, write the name of each of the parts shown. ) i r Plane mirror Laws of Reflection: State the laws of reflection. (i) ……………………………………………………………………………………………. ………………………………………………………………………………………..…… . i r Plane mirror (ii) …………………………………………………………………………………………….. …………………………………………………………………………………………….. 1 JPN Pahang Physics Module Form 4 Chapter 5 Light Exercise 1. The diagram below shows how the relationship between incident angle and reflected angle can be investigated. Fill in the values of the angles of reflection, r in the table below mirror i r ON OFF ON OFF mirror i r Laser pen r Laser pen i 10 20 30 40 50 Exercise 2: Original direction Mirror Based on the diagram on the left, calculate the angle, θ. Hence determine the angle of deviation, d. θ 50o d normal Image formed by a plane mirror: Using the law of reflection, complete the ray diagram to determine the position of the image. A B object i1 r1 Eye What can you say about the line joining object and image? ……………………………………… What can you say about the distances AB and BC? ……………………………………………….. 2 JPN Pahang Physics Module Form 4 Chapter 5 Light Differences between real and virtual image: Real image Can be caught on a screen Formed by the meeting of real rays. Virtual image Cannot be caught on a screen Form at a position where rays appear to be originating. Characteristics of image formed by plane mirror: Observe the pictures below as well as using previous knowledge, list the characteristics. i) ii) object mirror iii) image iv) Exercise 1: Complete the ray diagram below consisting of 2 rays originating from the object, reflected and entering the eye such that the eye sees the image. Mirror Eye object Exercise 2: Ahmad is moving with speed 2 m s-1 towards a plane mirror. Ahmad and his image will approach each other at A. B. C. D. 1 m s-1 2 m s-1 3 m s-1 4 m s-1 3 JPN Pahang Physics Module Form 4 Chapter 5 Light Exercise 3: Four point objects A, B, C and D are placed in front of a plane mirror MN as shown. Between their images, which can be seen by the eye? M N Eye A B C D Curved Mirrors: Concave mirror Convex mirror C r P C r P Terminology: Refer to the diagrams above and give the names for the following: C = r = P = PC = 4 JPN Pahang Physics Module Form 4 Chapter 5 Light Effect of curved mirrors on incident rays: a) Incident rays parallel to the principal axis: Concave mirror Convex mirror P F C C F P f r Study the diagrams above and fill in the blanks for the following sentences. f r Rays parallel to the principal axis converge at the ……………………, F F is positioned at the ………………….. between C and P FP is named the ………………………… which is denoted by f. Hence write an equation giving the relationship between r and f. b) Incident rays parallel to each other but not parallel to the principal axis: Concave mirror Convex mirror Focal plane Focal plane F C P P F C f r Study the diagrams above and fill in the blanks in the following sentences. Parallel rays converge at a point called …………………………… f r The ray passing through C is reflected back along the line of the…………………….ray. The distance between the focal plane and the mirror is the ………………………….,f. 5 JPN Pahang Physics Module Form 4 Chapter 5 Light Image formed by curved mirror (ray diagram method) Principle of drawing ray diagrams: a. Rays parallel to the principal axis are reflected through the principal focus, F. Example: C F P P F C Concave mirror Convex mirror Exercise 1: Complete the ray diagrams below: C F P P F C Concave mirror Convex mirror b) Rays passing through the principal focus are reflected parallel to the principal axis. Example: C F P P F C Concave mirror Convex mirror 6 JPN Pahang Physics Module Form 4 Chapter 5 Light Exercise 2: Complete the ray diagrams below: C F P P F C Concave mirror Convex mirror c) Rays passing through the center of curvature are reflected directly back. C F P P F C Concave mirror Exercise 3: Complete the ray diagrams below: Convex mirror C F P P F C Concave mirror Convex mirror Image formed by concave mirror: Using the principles of construction of ray diagram, complete the ray diagrams for each of the cases shown below: u = object distance; v = image distance ; f = focal length ; r = radius of curvature Case 1: u > 2f 7 JPN Pahang Physics Module Form 4 Chapter 5 Light Concave mirror object C F F Hence state the characteristics of image formed: i) Case 2: u = 2f or u = r Concave mirror object C F F ii) iii) Characteristics of image formed: i) ii) iii) Case 3: f < u < 2f Concave mirror object C F F Characteristics of image formed: i) Case 4: u=f ii) iii) 8 JPN Pahang Physics Module Form 4 Chapter 5 Light Concave mirror object C F F Characteristics of image formed: i) Case 5: u<f Concave mirror object F F C Characteristics of image formed: i) ii) iii) Image formed by convex mirror: (using construction of ray diagram). u = object distance ; v = image distance ; f = focal length ; r = radius of curvature object C F Concave mirror F Characteristics of image formed: i) ii) iii) 9 JPN Pahang Physics Module Form 4 Chapter 5 Light Uses of curved mirrors: Newton’s Telescope: Fill in the boxes the type of mirror used Lens Eye Curved mirror lamp OFF ON Where should the lamp be placed to achieve the above result? 5.2 UNDERSTANDING REFRACTION OF LIGHT air water 10 JPN Pahang Physics Module Form 4 Chapter 5 Light What is the phenomenon which causes the bending of light in the picture above? ………………………………………………………………………………………………………… Why did this bending of light occur? (think in terms of velocity of light) ………………………………………………………………………………………………………… Refraction of light: Fill in each of the boxes the name of the part shown i Air Glass r r Air i Direction of refraction: normal Less dense medium Denser medium denser medium Less dense medium normal Draw on the diagrams above the approximate directions the refracted rays. When light travels from a less dense medium to a denser medium, the ray is refracted (toward/away from) the normal at point of incidence. When light travels from a more dense medium to a less dense medium, the ray is refracted (toward/away from) the normal at point of incidence. 11 JPN Pahang Physics Module Form 4 Chapter 5 Light Snell’s law: Snell’s law states that …………………………………………………… What is the name and symbol of the constant? ………………………….. Exercise 1: Referring to the diagram on the right, Calculate the refractive index of liquid-X. Air Liquid-X 30o 60o Exercise 2: Referring to the diagram on the right, Calculate the refractive index of liquid-Y. Air Liquid-Y 30o 45o Exercise 3: Eye Air water 12 JPN Pahang Physics Module Form 4 Chapter 5 Light On the diagram to the right, draw two rays which originate from the fish to show how a person observing from above the surface of the water is able to see the image of the fish at an apparent depth less than the actual depth of the fish. object Exercise 4: An equation that gives the relationship between apparent depth, real depth and the refractive index of water for the diagram above is real depth n= apparent depth If the fish is at an actual depth of 4 m and the refractive index of water is 1.33, what is the apparent depth of the image? 5.3 UNDERSTANDING TOTAL INTERNAL REFLECTION OF LIGHT Critical angle and total internal reflection: Figures a, b and c show rays being directed from liquid-Y which is denser than air towards the air at different angles of incident,θ. Air Liquid-Y Air Liquid-Y 90o θ<C Figure a C Figure b 13 JPN Pahang Physics Module Form 4 Chapter 5 Light Among the figures a, b and c, only Figure a has a complete ray diagram. (i) Complete the ray diagrams for Figure b and Figure c. The angle, C is called ……………………. The phenomenon which occurs in Figure c yang is called ……………………………………. (iv) State 2 conditions which must be satisfied in order for the phenomenon you mentioned in (iii) to occur. ……………………………………………………………………………………… ……………………………………………………………………………………… Air Liquid-Y (ii) (iii) θ>C Figure c Exercise 1: Air Referring to figure d and using Snell’s law, write an equation that gives the relationship between the critical angle, C, the refracted angle and the refractive index of liquid-Y Liquid-Y C 90o Figure d Exercise 2: Referring to Figure e, determine the refractive index of liquid-Z Air Liquid-Z 30o 90o Exercise 3: Figure e Explain why a pencil partially immersed in water looks bent.(Use a ray diagram). Eye 14 JPN Pahang Physics Module Form 4 Chapter 5 Light Exercise 4: Complete the path of the ray in the diagram below and explain how a mirage is formed. object Layer of cool air Eye Layer of hot air ground Exercise 5: Completing the ray diagram below, to show how a periscope works: (critical angle of glass = 42o) Object Glass prism 15 Eye JPN Pahang Physics Module Form 4 Chapter 5 Light 16 JPN Pahang Physics Module Form 4 Chapter 5 Light 5.4 UNDERSTANDING LENSES Thin Lenses : Types of lenses : Name the types of lenses shown below. (i) a. b. c. (ii) a. b. c. Formation of a convex lens and terminology: name the parts shown Formation of a concave lens and terminology: name the parts shown 17 JPN Pahang Physics Module Form 4 Chapter 5 Light Refraction of rays parallel to the principal axis of a convex lens: Draw in the following diagrams the paths of the rays after passing through the lens. Write in the boxed provided, the name of the point or line shown. i) F ii) F iii) F iv) F 18 JPN Pahang Physics Module Form 4 Chapter 5 Light Principles of constructing ray diagrams: Complete the path of each ray after passing through the lens i) ii) iii) F F F F F F iv) F F v) F F vi) F F vii) viii) F F F F Exercise 1: State the meaning of each of the following terms: i) Focal length , f : ii) Object distance, u : iii) Image distance, v : Exercise 2: Describe how you would estimate the focal length of a convex lens in the school lab. 19 JPN Pahang Physics Module Form 4 Chapter 5 Light Characteristics of image formed by a convex lens : (Construction of ray diagram method) Construct ray diagrams for each of the following cases and state the characteristics of the image formed. i) Case 1 : u > 2f where u = object distance ; and f = focal length of lens. Lens object 2F F F Characteristics of image: ii) Case 2 : u = 2f Lens object 2F F F Characteristics of image: iii) Case 3 : 2f > u > f Lens object 2F F F Characteristics of image: iv) Case 4 : u = f 20 JPN Pahang Physics Module Form 4 Chapter 5 Light Lens object 2F F F Characteristics of image: v) Case 5 : u < f Lens object F 2F F Characteristics of image: Exercise: In each of the following statements below, fill in the space provide one of the following conditions. ( u > 2f / 2f = u / 2f > u > f / u > f / u < f ) i) To obtain a real image, the object must be placed at a distance u such that … ……… ii) To obtain a virtual image, the object must be placed at a distance u such that ……… 21 JPN Pahang Physics Module Form 4 Chapter 5 Light Characteristics of image formed by concave lens : (by construction of ray diagrams ) Construct a ray diagram for each of the following and state the characteristics of the image formed i) Lens object 2F F F Characteristics of image: ii) Lens object 2F F F Characteristics of image : Note: Image formed by a concave lens is always diminished, virtual and on the same side of the lens as the object. Power of a lens (p) The power of the lens is given by: Power of lens = 1 focal length Sign convention (for focal length) and the S.I. unit for power of a lens. • The focal length of a convex lens is (positive/negative) • The focal length of a concave lens is (positive/negative) • The S.I. unit for the power of a lens is…....…and its symbol is… … • When calculating the power of a lens, the unit of the focal length must be in (m/cm) Exercise 1 : A concave lens has a focal length of 10 cm. What is its power? 22 JPN Pahang Physics Module Form 4 Chapter 5 Light Exercise 2 : The power of a lens is + 5 D. State whether it is a convex lens or a concave lens and calculate its focal length. Linear Magnification (m) : Definition: Linear magnification = height of image height of object hi h0 m= Based of the definition above and the ray diagram below, derive an expression for the relationship between linear magnification, m, the object distance, u and the image distance, v. B ho Lens v O C hi u D A Lens formula : The relationship between the object distance, u, image distance, v, and the focal length, f, of a lens is given by 1 1 1 + = u v f • This lens formula is valid for both convex and concave lenses. When using the lens formula, the ‘real is positive sign convention’ must be followed. 23 JPN Pahang Physics Module Form 4 Chapter 5 Light The rules stated in this sign convention are: 1) 2) 3) Application of the lens formula: Exercise 1. An object is placed 10 cm in front of a converging lens of focal length 15 cm. Calculate the image distance and state the characteristics of the image formed. Exercise 2 : An object is placed 30 cm in front of a converging lens of focal length 25 cm. a) Find the position of the image, and state whether the image is real or virtual. b) Calculate the linear magnification of the image. Latihan 3 : An object is placed 30 cm in front of a diverging lens of focal length 20 cm. Calculate the image distance and state whether the image is real or virtual. 24 JPN Pahang Physics Module Form 4 Chapter 5 Light Lenses and optical instruments : 1. Magnifying glass (simple microscope ): A lens acts as a magnifying glass when the object is placed as in case 5 on page 23. i) A magnifying glass consists of a (converging / diverging) lens. ii) The object must be placed at a distance (more than f / same as f / less than f / between f and 2f / more than 2f) in order for the lens to act as a magnifying glass. iii) The characteristics of the image formed by a magnifying glass are yang (real / virtual) ; (inverted / upright) ; (magnified /diminished) ; (on the same side as the object / on the opposite side of the object). iv) Greater magnification can be obtained by using a lens which has (long / short) focal length. Complete the ray diagram below to show how a magnifying glass produces an image of the object. Lens object F 2F F Exercise 1 : A magnifying glass produces an image with linear magnification = 4. If the power of the lens is +10 D, find the object distance and image distance. 25 JPN Pahang Physics Module Form 4 Chapter 5 Light Exercise 2: Which of the following lenses with their powers given below makes the magnifying glass with the highest power of magnification? A. – 5 D B. –25 D C. +5 D D. +25 D. 2. Simple camera : The diagram below shows the structure of a simple camera. In the boxes provided, write the names of the parts shown. Focusing screw Film drum Diaphragm adjustment ring For each of the parts you have named, state its function. 3. Slide projector : The diagram below shows the structure of a simple camera. In the boxes provided, write the names of the parts shown Screen Light source Complete the ray diagram above to explain how the slide projector works. 26 JPN Pahang Physics Module Form 4 Chapter 5 Light 4. Astronomical telescope : Making of the astronomical telescope. • • • The astronomical telescope consists of 2 (converging / diverging) lenses. The objective lens has focal length, fo and the eye lens has focal length, fe where ( fo < fe / fo > fe ). The lenses are arranged such that the distance between the objective lens and the eye lens is (fo – fe / fo + fe / fo x fe / fo/fe). Parallel rays from distant Objective lens object Fo Fe Eye lens Complete the ray diagram above to show how the astronomical telescope works. Characteristics of image formed by an astronomical telescope: • • • The first image formed by the objective lens is (virtual/real ; upright/inverted ; diminished/magnified). The final image is (virtual/real ; upright/inverted ; diminished/magnified). The final image is located at ( Fo / Fe / infinity). Magnifying Power (M) : M= f f 0 e Exercise: An astronomical telescope with high power of magnification can be built using eye lens of (long / short) focal length and objective lens of (long / short) focal length. 27 JPN Pahang Physics Module Form 4 Chapter 5 Light 5. The compound microscope : Structure of the compound microscope: • • • • • A compound microscope consists of 2 (converging / diverging) lenses The focal length of the eye lens is (long / short) and the focal length of the objective lens is (long / short). The objective lens is arranged such that the object distance, u is (u = fo / fo < u < 2 fo / u =2fo). The eye lens is used as a (magnifying / diverging / projector) lens. The total length, s, between both lenses is ( s = fo + fe ; s > fo+fe ) L0 Object Fo Fe Le Eye Complete the ray diagram above to show how the compound microscope works. Characteristics of image formed by compound microscope: • • The first image formed by the objective lens is (real/virtual ; diminished/magnified ; upright/inverted ). The final image is (real/virtual ; diminished/magnified ; upright/inverted ). Exercise 1 (a) : A compound microscope consists of two lenses of focal lengths 2 cm and 10 cm. Between them, which is more suitable as the eye lens? Explain your answer. (b): How would you arrange the lenses in (a) to make an astronomical telescope? 28 JPN Pahang Physics Module Form 4 Chapter 5 Light Reinforcement: Part A: 1. Between the following statements about reflection of light, which is not true? A. All light energy incident on a plane mirror is reflected. B. The angle of incidence is always the same as the angle of reflection. C. The incident ray, the reflected ray and the normal to the point of incidence, all lie on the same plane. D. The speed of the reflected ray is the same as the speed of the incident ray. 2. A boy stands in front of a plane mirror. He observes the image of some letterings printed on his shirt. The letterings on his shirt is as shown in Figure 1. Figure 1 Between the following images, which is the image observed by the boy? A B C D 3. Figure 2 shows an object, O placed in front of a plane mirror. Between the positions A, B, C and D, which is the position of the image? A Plane mirror O Figure 2 4. A student is moving with a velocity of 2 m s-1 towards a plane mirror. The distance between the student and his image will move towards each other at the rate A. 2 m s-1 B. 3 m s-1 C. 4 m s-1 D. 5 m s-1 E. 6 m s-1 B C D 5. The table below shows the characteristics of the images formed by a concave mirror for various positions of the object. All symbols used have the usual meanings. Which of them is not true? 29 JPN Pahang Physics Module Form 4 Chapter 5 Light A B C D Position of object u > 2f f < u < 2f u=f u<f Characteristics of image Diminished, inverted, real Magnified, inverted, real Same size, inverted, real Maginfied, upright, virtual 6. Which of the following ray diagram is correct? A 50o 50o C F C F B C Plane mirror Convex mirror Concave mirror 7. The depth of a swimming pool appears to be less than its actual depth. The light phenomenon which causes this is A. B. C. D. Reflection Refraction Diffraction Interference 8. The critical angle in glass is 42o. What is the refractive index of glass? A. 1.2 B. 1.3 C. 1.4 D. 1.5 E. 1.6 9. Which of the following are the characteristics of an image formed by a magnifying glass? A. B. C. D. Magnified, virtual, inverted Diminished, real, upright Magnified, virtual, upright Diminished, virtual, inverted 30 JPN Pahang Physics Module Form 4 Chapter 5 Light 10. A student is given three convex lenses of focal lengths 2 cm, 10 cm and 50 cm. He wishes to construct a powerful astronomical telescope. Which of the following arrangements should he choose? Focal length of objective lens / cm 50 10 2 50 Focal length of eye lens / cm 2 10 50 10 A B C D Part B 1. Eye air water Figure 3 Figure 3 shows the eye of a person looking at a fish. a) Sketch a ray diagram consisting of 2 rays originating from the eye of the fish to show why the image of the fish is seen closer to the surface. b) The fish is at a depth of 2 m. If the refractive index of water is 1.33, calculate the apparent depth of the fish. 31 JPN Pahang Physics Module Form 4 Chapter 5 Light 2. 1 1 1 + = , derive an equation that gives the relationship u v f between liner magnification, m and the image distance, v. Hence sketch the graph of m against v on the axes provided below. a) Starting with the lens formula, m 0 v (b) State the value of m at the point of intersection of the graph with the vertical axis. (c) Describe how you would determine the focal length of the lens using the graph. 32 JPN Pahang Physics Module Form 4 Chapter 5 Light Part C 1. A student used a slide projector to project a picture onto the screen. Figure 1a and 1b show the relative positions of the slide, projector lens and the screen. It is observed that when the screen is moved further away (Figure 1b), the lens of the projector has to be moved nearer to the slide to obtain a sharp image. Projector lens Screen Slide image Figure 1a Projector lens Screen Slide image Figure 1b Based on your observations and knowledge of lenses; a) make one suitable inference. b) state an appropriate hypothesis that could be investigated. c) describe how you would design an experiment to test your hypothesis using a convex lens, filament bulb and other apparatus. In your description, state clearly the following: (i) aim of the experiment 33 JPN Pahang Physics Module Form 4 Chapter 5 Light (ii) variables in the experiment (iii) List of apparatus and materials (iv) Arrangement of the apparatus (v) The procedure of the experiment, which includes the method of controlling the manipulated variable and the method of measuring the responding variable (vi) The way you tabulate the data 34 JPN Pahang Physics Module Form 4 Chapter 5 Light (vii) The way you would analyse the data 2. A student carried out an experiment to investigate the relationship between object distance, u, and image distance, v, for a convex lens. The student used various values of u and recorded the corresponding values of v. The student then plotted the graph of uv against u + v as shown in Figure 2. uv/ cm2 500 450 400 350 300 250 200 150 100 50 10 20 Figure 2 30 40 50 u + v / cm 35 JPN Pahang Physics Module Form 4 Chapter 5 Light a) Based on the graph in Figure 2, (i) state the relationship between uv and u + v ………………………………………………………………………………………… [1 mark] 2 (ii) determine the value of u + v when the value of uv = 400 cm . Show on the graph how you obtained the value of u + v. From the value of u + v obtained, calculate the image distance, v when u = 20 cm. [3 marks] (iii) calculate the gradient of the graph. Show clearly on the graph how you obtained the values needed for the calculation. [3 marks] b) Given that the relationship between u, v and focal length, f of the convex lens used, is represented by the equation 1 + 1 = 1 u v f Derive an equation which gives the relationship between uv and (u + v ). [2 marks] c) Using the equation derived in (b), and the value of gradient calculated in (a)(iii), determine the focal length of the lens used in the experiment. [2 marks] d) State one precaution taken to ensure the accuracy of the experiment. …………………………………………………………………………………… [1 mark] 36
6,210
24,281
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.5625
4
CC-MAIN-2017-30
latest
en
0.757981
http://perfectastronomy.com/16/
1,516,795,108,000,000,000
text/html
crawl-data/CC-MAIN-2018-05/segments/1516084894125.99/warc/CC-MAIN-20180124105939-20180124125939-00670.warc.gz
256,782,949
18,296
Astronomy compels the soul to look upwards and leads us from this world to another. Welcome! Astronomy is an infinitely captivating subject and the oldest of the natural sciences. It is one of the few areas of science that amateurs can assist professionals and directly contribute to science. Astronomy is the scientific study of the contents of entire Universe, stars, planets, comets, asteroids, galaxies, and space and time, as well as its history. If you’re starting out in Astronomy and looking at the sky at night, the contents of this site will guide you through what you can see at night as well as equipment advice, observation tips and tutorials. The astronomy articles cover the types of objects you can see and illustrate some basic astronomy concepts which are important to learn. You can also find several astronomy DIY projects which you can build yourself. 22nd September 2008Cosmology In this article, we will have a look at some of the important physics concepts needed to understand how the universe works. 22nd April 2008Astronomy The apparent magnitude and absolute magnitude are two ways of comparing an object's brightness. In this example, we look at the relationship between absolute magnitude, apparent magnitude and luminosity. If nobody hates you, you're doing something wrong.House 22nd April 2008Solar Physics In astronomy, luminosity is the total amount of energy radiated by a star, galaxy, or another astronomical object per unit time. It is related to the brightness, which is the luminosity of an object in a given spectral region. In SI units luminosity is measured in joules per second or watts. 21st April 2008Solar Physics Flux is a term used to describe the brightness of a star and is a measure of the total energy from an object per unit area over time. Flux calculations are used to calculate luminosity, a more meaningful representation of a star's brightness. 21st April 2008Astronomy The visual brightness of stars, planets and other astronomical objects is based on the visual magnitude scale. We look at this scale and how astronomers use it to measure relative brightnesses of objects in the night's sky. 17th April 2008Astronomy There are several different systems for measuring Time. Civil time is the system we are all familiar with, however, astronomers use a different system - sidereal time which is measured with reference to background stars in the sky, as opposed to the Sun. 17th April 2008Astronomy You may have heard the terms arc-minute and arc-second mentioned on the TV, magazines or other websites. These are the units of measurement for angular size used in modern astronomy. Angular size is used to describe the dimensions of an object as it appears in the sky. 17th April 2008Astronomy In Astronomy, parallax is a method used to determine the distance to the closest stars. This technique for measuring astronomical distances is very important because it is a geometric method and independent of the object being observed.
619
2,995
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.671875
3
CC-MAIN-2018-05
longest
en
0.94745
https://gigadom.in/tag/page-rank/
1,638,618,255,000,000,000
text/html
crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00076.warc.gz
358,564,674
57,397
# TWS-5: Google’s Page Rank: Predicting the movements of a random web walker Internet history can be divided into 2 epochs. The epoch before the Google search and that after. Prior to Google there were many unsuccessful attempts to organize the Web, which  a miniscule fraction of what we have today, through Web portals. So we had Yahoo, Excite, Alta-vista, Lycos etc. trying to categorize the pages of the Web into News, Sports, and Finance etc. Navigating through them was an exercise an frustration but one had to live with this for quite some time. ( The material for this post is taken from Mining Massive Datasets lecture from Coursera – Lecture by Prof. Jure Leskovec, Stanford University) The Google Search powered by the Page Rank algorithm arrived at a time when the internet was exploding. This was precisely what ‘the doctor ordered’ as navigating the web became synonymous with the Web search. This post takes a look at the Page Rank algorithm behind Google Search. The Web can be viewed as a large directed graph with out-links from Web pages to other pages (links from a page to external Web pages) and in-links into Web pages from other pages. For the Google search, Google uses Web crawlers to index the pages of the Web and probably creates an inverted index of keywords to documents that contain them. It then uses the Page Rank algorithm to determine the relevance and importance of the Web page How does Google identify the importance of a Web page? The importance of a Web page is determined by the number of in-links to the page. Each in-link is considered a vote for this page. Also the in-link from an important page is higher than another in-link from a less important page. So for example an in-link from New York Times will be much larger than an in-link from the National Enquirer for example In the figure above it can B has a highest Page Rank because it has the highest number of in-links. In addition the out-link from B to C increases the Page Rank of C. A) Flow formulation: The Flow formulation for Page Rank is based on the following • Each Web page’s vote (in-link) is proportional to the importance of the source page • If a page ‘j’ with page rank rj has n out-links each link gets rj/n votes • Page ‘j’s own importance is the sum of all the votes on its in-links Where rj = ri/3 + rk/4 as seen from the above figure According to the Flow equation for Page rank, the rank rj for a page j is rj = ∑ ri/d I -> j In other words the rank rj is the sum of the the in-links from all pages ri divided by its out-degree. The flow equations for the above simple view of a Web links can be expressed as based on the rank ri of each node divided by its out-degree. So ry and ra have an out-degree of 2 and hence they are ry/2 and ra/2 per out-link ry = ry/2 + ra/2 ra = rm + ry/2 rm = ra/2 B) The Matrix formulation In the Matirx formulation for Page Rank an Adjacent matrix Mji is defined as follows If a page I has di out-links If page I has an out-link to page j then I -> j                   Mji = 1/di else Mji =0 The Rank vector ri is the importance of page i It is also assumed that  ∑ri = 1 The Flow formulation for the above was shown to be ry = ry/2 + ra/2 ra = rm + ry/2 rm = ra/2 The Matrix formulation is However when we a billions of Web pages with several hundred thousand in-links and out-links the Page rank is iteratively calculated To start the page rank of ra=ry=rm = 1/3 so that the sum ∑ri =1 This is then iterated Using the r = M x r to arrive at values that converge ry            ½     ½     0                             1/3 ra    =     ½     0      1          x                   1/3 r m         0     ½      0                               1/3 This will eventually converge at ry=2/5 ra=2/5 and rm =1/5 The ability to rank Web pages on the order of importance was a real breakthrough for Google The Page Rank also implies the probability that a Web surfer who randomly clicks the ou-links of a the Web pages will land on after some time. It is the probability of a random walk of the Web when clicking the Web links on pages at random. While Google does a great job in crawling and serving pages it is rumored that more than 75% of the Web is inaccessible to Web search engines. This is known as the “Dark Net‘ or “Dark Web” much like the dark matter of the universe # Thinking Web Scale (TWS-3): Map-Reduce – Bring compute to data In the last decade and a half, there has arisen a class of problem that are becoming very critical in the computing domain. These problems deal with computing in a highly distributed environments. A key characteristic of this domain is the need to grow elastically with increasing workloads while tolerating failures without missing a beat.  In short I would like to refer to this as ‘Web Scale Computing’ where the number of servers exceeds several 100’s and the data size is of the order of few hundred terabytes to several Exabytes. There are several features that are unique to large scale distributed systems 1. The servers used are not specialized machines but regular commodity, off-the-shelf servers 2. Failures are not the exception but the norm. The design must be resilient to failures 3. There is no global clock. Each individual server has its own internal clock with its own skew and drift rates. Algorithms exist that can create a notion of a global clock 4. Operations happen at these machines concurrently. The order of the operations, things like causality and concurrency, can be evaluated through special algorithms like Lamport or Vector clocks 5. The distributed system must be able to handle failures where servers crash, disk fails or there is a network problem. For this reason data is replicated across servers, so that if one server fails the data can still be obtained from copies residing on other servers. 6. Since data is replicated there are associated issues of consistency. Algorithms exist that ensure that the replicated data is either ‘strongly’ consistent or ‘eventually’ consistent. Trade-offs are often considered when choosing one of the consistency mechanisms 7. Leaders are elected democratically.  Then there are dictators who get elected through ‘bully’ing. In some ways distributed systems behave like a murmuration of starlings (or a school of fish),  where a leader is elected on the fly (pun unintended) and the starlings or fishes change direction based on a few (typically 6) closest neighbors. This series of posts, Thinking Web Scale (TWS) ,  will be about Web Scale problems and the algorithms designed to address this.  I would like to keep these posts more essay-like and less pedantic. In the early days,  computing used to be done in a single monolithic machines with its own CPU, RAM and a disk., This situation was fine for a long time,  as technology promptly kept its date with Moore’s Law which stated that the “ computing power  and memory capacity’ will  double every 18 months. However this situation changed drastically as the data generated from machines grew exponentially – whether it was the call detail records, records from retail stores, click streams, tweets, and status updates of social networks of today These massive amounts of data cannot be handled by a single machine. We need to ‘divide’ and ‘conquer this data for processing. Hence there is a need for a hundreds of servers each handling a slice of the data. The first post is about the fairly recent computing paradigm “Map-Reduce”.  Map- Reduce is a product of Google Research and was developed to solve their need to calculate create an Inverted Index of Web pages, to compute the Page Rank etc. The algorithm was initially described in a white paper published by Google on the Map-Reduce algorithm. The Page Rank algorithm now powers Google’s search which now almost indispensable in our daily lives. The Map-Reduce assumes that these servers are not perfect, failure-proof machines. Rather Map-Reduce folds into its design the assumption that the servers are regular, commodity servers performing a part of the task. The hundreds of terabytes of data is split into 16MB to 64MB chunks and distributed into a file system known as ‘Distributed File System (DFS)’.  There are several implementations of the Distributed File System. Each chunk is replicated across servers. One of the servers is designated as the “Master’. This “Master’ allocates tasks to ‘worker’ nodes. A Master Node also keeps track of the location of the chunks and their replicas. When the Map or Reduce has to process data, the process is started on the server in which the chunk of data resides. The data is not transferred to the application from another server. The Compute is brought to the data and not the other way around. In other words the process is started on the server where the data, intermediate results reside The reason for this is that it is more expensive to transmit data. Besides the latencies associated with data transfer can become significant with increasing distances Map-Reduce had its genesis from a Lisp Construct of the same name Where one could apply a common operation over a list of elements and then reduce the resulting list of elements with a reduce operation The Map-Reduce was originally created by Google solve Page Rank problem Now Map-Reduce is used across a wide variety of problems. The main components of Map-Reduce are the following 1. Mapper: Convert all d ∈ D to (key (d), value (d)) 2. Shuffle: Moves all (k, v) and (k’, v’) with k = k’ to same machine. 3. Reducer: Transforms {(k, v1), (k, v2) . . .} to an output D’ k = f(v1, v2, . . .). … 4. Combiner: If one machine has multiple (k, v1), (k, v2) with same k then it can perform part of Reduce before Shuffle A schematic of the Map-Reduce is included below\ Map Reduce is usually a perfect fit for problems that have an inherent property of parallelism. To these class of problems the map-reduce paradigm can be applied in simultaneously to a large sets of data.  The “Hello World” equivalent of Map-Reduce is the Word count problem. Here we simultaneously count the occurrences of words in millions of documents The map operation scans the documents in parallel and outputs a key-value pair. The key is the word and the value is the number of occurrences of the word. E.g. In this case ‘map’ will scan each word and emit the word and the value 1 for the key-value pair So, if the document contained “All men are equal. Some men are more equal than others” Map would output (all,1),  (men,1), (are,1), (equal,1), (some,1), (men,1), (are,1),  (equal,1), (than,1), (others,1) The Reduce phase will take the above output and give sum all key value pairs with the same key (all,1),  (men,2), (are,2),(equal,2), (than,1), (others,1) So we get to count all the words in the document In the Map-Reduce the Master node assigns tasks to Worker nodes which process the data on the individual chunks Map-Reduce also makes short work of dealing with large matrices and can crunch matrix operations like matrix addition, subtraction, multiplication etc. Matrix-Vector multiplication As an example if we consider a Matrix-Vector multiplication (taken from the book Mining Massive Data Sets by Jure Leskovec, Anand Rajaraman et al For a n x n matrix if we have M with the value mij in the ith row and jth column. If we need to multiply this with a vector vj, then the matrix-vector product of M x vj is given by xi Here the product of mij x vj   can be performed by the map function and the summation can be performed by a reduce operation. The obvious question is, what if the vector vj or the matrix mij did not fit into memory. In such a situation the vector and matrix are divided into equal sized slices and performed acorss machines. The application would have to work on the data to consolidate the partial results. Fortunately, several problems in Machine Learning, Computer Vision, Regression and Analytics which require large matrix operations. Map-Reduce can be used very effectively in matrix manipulation operations. Computation of Page Rank itself involves such matrix operations which was one of the triggers for the Map-Reduce paradigm. Handling failures:  As mentioned earlier the Map-Reduce implementation must be resilient to failures where failures are the norm and not the exception. To handle this the ‘master’ node periodically checks the health of the ‘worker’ nodes by pinging them. If the ping response does not arrive, the master marks the worker as ‘failed’ and restarts the task allocated to worker to generate the output on a server that is accessible. Stragglers: Executing a job in parallel brings forth the famous saying ‘A chain is as strong as the weakest link’. So if there is one node which is straggler and is delayed in computation due to disk errors, the Master Node starts a backup worker and monitors the progress. When either the straggler or the backup complete, the master kills the other process. Mining Social Networks, Sentiment Analysis of Twitterverse also utilize Map-Reduce. However, Map-Reduce is not a panacea for all of the industry’s computing problems (see To Hadoop, or not to Hadoop) But the Map-Reduce is a very critical paradigm in the distributed computing domain as it is able to handle mountains of data, can handle multiple simultaneous failures, and is blazingly fast. To see all posts click ‘Index of Posts
3,041
13,425
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.890625
3
CC-MAIN-2021-49
latest
en
0.938896
https://collegeassignmentshelp.com/jonathan-macintosh-is-a-highly-successful-upstate-2/
1,632,859,948,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780060882.17/warc/CC-MAIN-20210928184203-20210928214203-00087.warc.gz
230,856,339
10,709
# Jonathan macintosh is a highly successful upstate E6-25 Estimating Cost Behavior; High-Low Method E6-25 Jonathan Macintosh is a highly successful upstate New York orchardman who has formed his own company to produce and package applesauce. Apples can be stored for several months in cold storage, so applesauce production is relatively uniform throughout the year. The recently hired controller for the firm is about to apply the high-low method in estimating the company’s energy cost behavior. The following costs were incurred during the past 12 months: Month Pints of Applesauce Produced Energy Cost January 35,000 \$23,400 February 21,000 22,100 March 22,000 22,000 April 24,000 22,450 May 30,000 22,900 June 32,000 23,350 July 40,000 28,000 August 30,000 22,800 September 30,000 23,000 October 28,000 22,700 November 41,000 24,100 December 39,000 24,950 Required: 1. Use the high-low method to estimate the company’s energy cost behavior and express it in equation form. 2. Predict the energy cost for a month in which 26,000 pints of applesauce are produced.
282
1,074
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.671875
3
CC-MAIN-2021-39
latest
en
0.965402
https://www.customproducttraining.com/calculate-your-own-odds-of-succeeding-the-particular-lotto-mega-millions-powerball/
1,701,835,218,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100583.13/warc/CC-MAIN-20231206031946-20231206061946-00434.warc.gz
785,449,496
11,659
# Calculate Your own Odds Of Succeeding the particular Lotto Mega Millions Powerball Do you know exactly how to calculate chances regarding winning the lottery, including the Florida Lottery? You may calculate each set connected with possibilities for each different lottery game you participate in. With the assistance associated with a small hand organised finance calculator as well as with often the free online car loan calculator on the computer, you just multiply the particular numbers together and even add one particular division approach when “the order” of your chosen statistics is not required for a new particular lottery game. Precisely what you “need to be able to know” is the number regarding overall balls that this winning statistics are attracted from….. is it fifty nine, 56, 38, 49, or 39? If there is definitely a secondary drawing with regard to the single extra golf ball, such as the “red ball” together with Powerball as well as the Mega Millions’ “gold ball” you need to help know just how many balls will be in this collection such as well. Are there forty-nine or 39? It does not matter if it is the Lakewood ranch, Ohio, Tx, PA or perhaps NJ Lotto. This strategy or perhaps formula gives you the genuine odds. Florida Lottery is definitely 6/53. New York Lottery is 6/59. The Ohio Lotto, Ma Lottery, Wisconsin Lotto, along with the State regarding Washington Lottery carry a 6/49 lotto numbers rate. Illinois Lotto carries a new 6/52. When you have this data correctly in front of you including your calculator inside hand, you can begin doing the job the formulas. Anyone need to select several regular balls and something more ball correctly matched up to help the winning drawn quantities to win the multi-million dollar jackpot that more individuals dream about succeeding someday. Within the first example there are 56 projectiles in the first class and fouthy-six balls inside secondary group. In order to get the Lottery jackpot you need to go with all these balls (5 & 1) exactly, although not really necessarily in buy. The California Lottery’s Super Lotto Additionally is 47/27. The big carol is definitely spinning with the original part of the pulling. You have a 1/56 chance to match your quantity to this 1st ball. With one baseball removed after the first number has been attracted, an individual now have a 1/55 chance of matching one more one of your statistics to the second ball drawn. With each drawn number a ball is eliminated lowering the number regarding remaining tennis balls by simply a new total of one. The odds of you correctly corresponding the number on the third ball to turn out to be drawn is now 1/54 from the total amount of golf balls remaining around the drum. While using next ball removed from often the drum and sitting with all the other two winning amounts, your odds of correctly matching the fourth baseball is reduced to 1/53. As you can see every time a ball is unveiled in the drum the probabilities are diminished by one particular. You started with a new 1/56 chance, in that case having each new receiving range this is minimized for you to 1/55, 1/54, 1/53, in addition to with the fifth baseball you have got the odds of 1/52 appropriately matching this specific fifth receiving number. This specific is the first area of the formula of how to help calculate your odds regarding winning the lottery, including the Sarasota Lottery. Today take these five prospects representing the five receiving numbers (1/56, 1/55, 1/54, 1/53, and 1/52). Typically the “1” on top involving the small fraction represents your own one and only possibility to correctly match the used number. Now you have your finance calculator and increase in numbers all top rated numbers (1x1x1x1x1) equal one particular (1). So next you multiply all the bottom part numbers (56x55x54x53x52). Properly inserted and multiplied you discover the total is 458, 377, 920. The innovative small fraction becomes 1/458, 377, 920. This is a 458 , 000, 000 to a single chance to win. If you were required to pick the numbers so as just just like they are drawn, after that these would be the particular odds versus you in order to win this Pick 5/56 ball lottery game. Luckily or even unfortunately, you will be certainly not required to pick this quantities in the actual order they can be drawn. This second step with the mixture will decrease the odds, which in turn makes it possible for you to match these five winning amounts in any order. Around this phase you may multiply the number regarding tennis balls drawn — a few (1x2x3x4x5). With calculator throughout hand the thing is that the total equals 120. For you to give you the right to choose the several matching numbers in different order, you create these kinds of chances by dividing 120/417, 451, 320. You undoubtedly will need a calculator for this specific a single. 120/458, 377, 920 reduces your odds of succeeding this lotto to one-half, 819, 816. These can be over 3. a few mil to one odds in opposition to you of winning this Pick 5/56 ball lottery game. If this have been often the Mega Millions Lottery, you should add the “gold ball” to these 5 winning drawn balls inside order to win the particular Multi-Million Dollar Jackpot. The only gold ball is computed as a 1/46 possibility of corresponding the idea correctly, and since you happen to be pulling just one number it really must be an exact match. Yet again, you should only have that “1” possibility to do it correctly. Now you need to be able to flourish 3, 819, 816 simply by 46. Grab togel online and do this repr�sentation. Your current final prospects against anyone winning the particular Mega Hundreds of thousands Jackpot are usually calculated to be 175, 711, 536 or clearly explained 175 million, 711 1, 000, 5 hundred thirty six thirty-six to one (175, 711, 536 to 1). You know how to analyze the odds involving being successful the Mega Millions Lottery. The Powerball Lottery information are based on a new 1/59 for the primary five bright balls and 1/39 for your “red” electric power ball. In your first set of multipliers is 59x58x57x56x55. This class totals 600, 766, 320. Now partition 600, 766, 360 by means of 120 (1x2x3x4x5). Your new full will be 5, 006, 386. Generally there is a 1/39 likelihood to find the “red” ball. 39 x a few, 006, 386 gives a person the real odds regarding winning the Powerball Goldmine, such as 195, 249, 054 to at least one. Another 5 +1 Lottery that would seem in order to be all around the United States is the “Hot Lotto” which has a 39/19 count. It can be played around 15 several States. DC Lottery, Delaware Lottery, Florida Lottery, Iowa Lottery, Kansas Lottery, Maine Lottery, Mn Lottery, Montana Lottery, Brand-new Hampshire Lotto, New Mexico Lottery, North Dakota Lottery, Oklahoma Lottery, South Dakota Lottery, Vermont Lottery, in addition to the West Va Lotto. The final odds associated with being successful the minimum \$1 Million Lottery jackpot is 15, 939, 383 to a single. A Pick 6/52 baseball Lottery game formula appears like this: (1/52, 1/51, 1/50, 1/49, 1/48, 1/47) for a total regarding 13, 658, 134, 500 divided by 720 (1x2x3x4x5x6) for your odds of 1/20, 358, 520. Your likelihood to win this 6/52 Lottery is over 16. 5 million to one particular for you to win, such like the Illinois Parte. This Hoosier Lottery the fact that uses Indiana State’s play name, bears a 6/48. The state of michigan Lotto is 6/47, Arizona Lottery and Missouri Lotto are 6/44, Maryland Lotto is 6/43, and Colorado Lottery is 6/42. Compare this particular to the Sarasota Lottery. A new Pick 5/39 soccer ball Lotto game method appears to be like this kind of: (1/39, 1/38, 1/37, 1/36, 1/35) for a total connected with 69, 090, 840 broken down simply by 120 (1x2x3x4x5) for your possibilities of 1/575, 757 regarding winning the Jackpot for example the Illinois Little Lotto. Some other States with the identical 5/39 lottery numbers include things like the NC Lottery, Atlanta in addition to Florida Lottery Dream your five, and Tennessee Lottery’s Pick 5. Virginia Lottery’s Income 5 carries the 5/34 range. Now, basically that better to decide on a Lottery game along with lower odds against anyone?
1,970
8,253
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.796875
3
CC-MAIN-2023-50
latest
en
0.948448
https://thvinhtuy.edu.vn/4-1-antiderivatives-and-indefinite-integration-fuqifyo5/
1,709,264,138,000,000,000
text/html
crawl-data/CC-MAIN-2024-10/segments/1707947474948.91/warc/CC-MAIN-20240301030138-20240301060138-00528.warc.gz
554,870,722
11,677
# 4.1 Antiderivatives and Indefinite Integration Indefinite Integration of a Quotient Using Substitution (Ln) Indefinite Integration of a Quotient Using Substitution (Ln) 1 4.1 Antiderivatives and Indefinite Integration Learning Goals: We are learning about: Antiderivatives Notation for Antiderivatives Basic Integration Rules Initial Conditions and Particular Solutions Why are we learning this? Helps us find useful formulas! (Example 8) 2 Antiderivatives: Find a function F whose derivative is: Now find another function! 3 Antiderivatives 4 Example 1: Solving a Differential Equation 5 Notation for Antidertivative: Antidifferentiation/Indefinite Integration Antiderivative/Indefinite Integral Need to find the y that makes the equation true. Once we know the y what are we doing to it? 6 Basic Integration Rules 7 Example 2: Applying The Basic Integration Rules Example 3: Rewriting Before Integrating 8 Example 5: Rewriting Before Integrating – We have no Quotient Rule Same applies with derivatives 9 Example 6: Rewriting Before Integration Using Trig Identities 10 Example 7: Finding a Particular Solution 11 Remember we need to rewrite the integrand to fit the basic integration rules: How does this show that integration is limited compared to differentiation? 12 Example 8 – Great Example. Read Together. 4 Example 8 – Great Example!! Read Together. 4.1 Hmwr: 1-13odd, 17, 19, 23, 25, 29-45odd, 51, 57, 67 Similar presentations
373
1,453
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.578125
4
CC-MAIN-2024-10
latest
en
0.806582
https://mcqslearn.com/cost-accounting/quizzes/quiz-questions-and-answers.php?page=88
1,713,837,511,000,000,000
text/html
crawl-data/CC-MAIN-2024-18/segments/1712296818452.78/warc/CC-MAIN-20240423002028-20240423032028-00534.warc.gz
349,995,102
20,790
BBA Finance Online Courses MBA Cost Accounting Certification Exam Tests MBA Cost Accounting Practice Test 88 # Target Costing and Target Pricing Multiple Choice Questions (MCQ) PDF - 88 Books: Apps: The Book Target Costing and Target Pricing MCQ Questions, target costing and target pricing quiz answers PDF download, e-Book Ch. 12-88 to learn free accounting online courses. Solve Cost Management and Pricing Decisions Test PDF, target costing and target pricing Multiple Choice Questions (MCQ Quiz) for online college degrees. The Target Costing and Target Pricing MCQ Quiz App Download: Free certification app for target costing and target pricing, value engineering, insurance and lock in costs, customer response time and on time performance, strategic decisions, estimating cost functions test prep for master's degree in business administration. The MCQ Quiz: Target price is subtracted from per unit target operating income to calculate; "Target Costing & Target Pricing" App APK Download (Free) with answers total cost per unit, total current full cost, target operating income per unit and target cost per unit for online masters in accounting degree. Study cost management and pricing decisions questions and answers, Apple Book to download free sample for online business administration degree classes. ## Target Costing & Target Pricing Questions and Answers : Quiz 88 MCQ 436: The target price is subtracted from per unit target operating income to calculate 1. total current full cost 2. total cost per unit 3. target operating income per unit 4. target cost per unit MCQ 437: The selection of target price, understanding customer requirements, improving product designs and use of cross functional teams are considered as aspects of 1. target pricing 2. target costing 3. value engineering 4. all of above MCQ 438: The time between a customer's order placement till the customer receives its delivery is known as 2. manufacturing cycle time 3. customer response time 4. system process time MCQ 439: The experimentation and generation of ideas related to new product or services are included in 2. research and development 3. value development 4. service provider MCQ 440: The description in mathematical form to represent changes in cost, with level of activity related to that cost is classified as 1. cost function 2. revenue function 3. unit function 4. relative function ### Target Costing & Target Pricing Learning App & Free Study Apps Download Target Costing & Target Pricing MCQ App to learn Target Costing & Target Pricing MCQs, Cost Accounting Learning App, and Marketing Management MCQs Apps. The "Target Costing & Target Pricing MCQ" App to download free Android & iOS Apps includes complete analytics with interactive assessments. Download App Store & Play Store learning Apps & enjoy 100% functionality with subscriptions! Target Costing & Target Pricing App (Android & iOS) Cost Accounting App (Android & iOS) Marketing Management App (Android & iOS) Business Statistics App (Android & iOS)
623
3,045
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.796875
3
CC-MAIN-2024-18
latest
en
0.849436
http://www.physicsforums.com/showthread.php?t=396692
1,394,429,784,000,000,000
text/html
crawl-data/CC-MAIN-2014-10/segments/1394010653177/warc/CC-MAIN-20140305091053-00007-ip-10-183-142-35.ec2.internal.warc.gz
476,488,992
12,213
# quantum waves have mass? by Hoku Tags: mass, quantum, waves P: 166 I don't usually consider "waves" to have mass. They're just energy that moves THROUGH mass. Light waves, sound waves, ocean waves... They are all massless energy. But I'm thinking about quantum wave/particle dualities. Electrons have mass and I'm having some trouble accepting how waves can have mass. Any insights or ideas for this seemingly trivial road-block? P: 1,937 Matterwaves, as they are called, aren't physical waves like sound, nor are they EM waves like light...they are probability waves. The probability wave itself does not have any "mass", it just tells you the probabilities of finding the particles at specific places. The particle has mass, not the wave. The wave simply describes the particle. P: 166 How funny! You're more famous than I though, Matterwave. So, you're saying that it's not just a "wave/partice" duality when it comes to electrons, it's also a "mass/massless" duality? So when an electron hits the film in the double slit experiment, it records mass when it's a particle and no mass when it's a wave? P: 842 ## quantum waves have mass? Err not exactly. The wave/particle duality is a bit of a different subject where the electron( or particle ) sometimes acts like a wave and sometimes acts like a particle. The wave equation was developed when we started looking at the idea that all matter is made up of waves, however there are still particle like aspects. So now we have these waves with no medium that are peculiar in that the entire wave seems to collapse when it is measured at any one position. The wave equations give scientists all the information they need to know about a particle even though there is no physical explanation for it. In some ways it is like a computer program trying to figure out what makes its bits flip. P: 1,937 Hmm, I think we should be very careful when discussing wave/particle duality and what exactly this means. Wave-particle duality generally means that the electron acts like a particle for some experiments and like a wave for others. This means, in simplified terms, if I am looking for particle-properties such as mass, the electron will act as a particle. If I am looking for wave-like properties such as diffraction, the electron will act like a wave. If you try to measure the mass of the electron, you get a definite mass because you are looking for a particle-tied property and the electron will act as a particle for this test! You will never measure the electron to have zero mass, because the electron will NOT act like a wave for a mass measurement. I'm trying to keep this discussion close to High-school level, so at higher levels of understanding, the picture is more complicated. I will digress a little bit into the more complicated picture, but if you don't understand it at this point, don't worry. You will, once you study QM at a deeper level. The wave is, as I mentioned, a probability wave. The particle is described by a wave-function, and this wave-function is NOT physical in any sense. You can't make any measurements on this wave-function. So it doesn't make sense to try to measure the "mass" of this wave-function. All you can do is measure many electrons prepared in the same state to try to get a feel for the probability distribution of the electrons. The double slit experiment, for example if you release one electron at a time, each detection event is particle-like. You see 1 electron at one detector, and then 1 electron at another detector. You never detect some sort of "wave". Where the wave characteristics come in is when you get many detections, you will see a diffraction pattern IN your detections which would not arise classically for particles. It is therefore easier to describe the sum total diffraction phenomenon in terms of waves. P: 842 Quote by Matterwave It is therefore easier to describe the sum total diffraction phenomenon in terms of waves. However a single particle must be described by a wave equation in a tunneling equation. P: 1,937 Sure, but when you make measurements in tunneling, you always measure the electron as either having tunneled or not having tunneled. You never measure an electron as "half-tunneled" or some such. Only when you get many electrons together, can you find that X% of them tunneled and 100-X% of them didn't. There is a definite disconnect between how electrons are described and how electrons are measured. They may be described by a wave-function, but they will inevitably be measured as electron particles. P: 842 Quote by Matterwave Sure, but when you make measurements in tunneling, you always measure the electron as either having tunneled or not having tunneled. You never measure an electron as "half-tunneled" or some such. Only when you get many electrons together, can you find that X% of them tunneled and 100-X% of them didn't. There is a definite disconnect between how electrons are described and how electrons are measured. They may be described by a wave-function, but they will inevitably be measured as electron particles. Another example of the wave-particle duality. I wouldn't expect you would need many electrons though. You could setup an experiment so that the potential allowed for a 90% chance of tunneling even though V > p_e. Do one experiment and if the single electron tunnels then you just proved wave like nature without any statistics. If it happens not to tunnel then wait 5 years, well you get the point. P: 1,937 ? First of all, how does 1 data point make any proof? Second of all, you are STILL detecting the electron AS AN electron. Perhaps I am not understanding your scenario correctly... P: 842 Just imagine you did an experiment once and got "lucky" and the electron tunneled. The wave equation describes the result and the "wave" is not based of a statistical collection of results. P: 166 This is interesting, and you haven't lost me at all. You're definitely bringing to light some simple, missing pieces in laymans literature. Let me pick through some of this for clarification. Are you saying that a single electron will never display a "wave" pattern, even if we try to measure it as such? Are you also saying that an electron cannot tunnel unless it is in a group? P: 842 Quote by Hoku ? Are you also saying that an electron cannot tunnel unless it is in a group? No, he is saying that the wave nature described in QM is based off a statistical collection of results. We can never say exactly what single particle will do, just what many of them will do, though in the end if 5 million electrons tunneled, they each tunneled. As for your first question, I think that a single electron can display wave nature when measured only once. MatterWave might have more to say on that though. P: 842 Quote by Matterwave You never measure an electron as "half-tunneled" or some such. You can find the electron within the area of space where V > p_e though. The probability wave is a gaussian here. Not sure thats what you meant though. P: 1,937 So you are saying, by virtue of the electron having tunneled, it must have wave-characteristics because particle characteristics cannot account for tunneling? Perhaps I should make my point clearer. When you make a measurement, e.g. a measurement of mass, you invariable are measuring the particle. Even in the tunneling example, you are measuring the position of the particle. You are not measuring a wave like you could measure a physical wave (like waves on a pond). With waves on a pond, I could measure the amplitude by using a ruler, for example; however, I can make no such measurements on the electron's wave-function (or the absolute square of the wave-function). I can only get an idea for the amplitude, if I get many measurements of identically prepared electrons. With your tunneling example, with 1 electron, I certainly can't measure the amplitude, or the wavelength, or any other wave-characteristic of the wave-function. All that I may be able to tell is that something weird is happening in that the electron moved through a region it shouldn't be able to move through. Perhaps I am wrong. And indeed, it is often hard to reconcile high-school level explanations with "real" explanations. But in any case, this is a digression from the OP's discussion. I think the main point as far as the OP is concerned is that if I measure the mass of an electron I will always measure a mass because the electron behaves particle-like for such a measurement. P: 1,937 Quote by LostConjugate You can find the electron within the area of space where V > p_e though. The probability wave is a gaussian here. Not sure thats what you meant though. Actually you can never find the electron in a classically forbidden region (V>E). You can only find it on one side or the other. The rough proof is that if you tried to measure the electron in the classically forbidden region, you must necessarily boost the energy of the electron such that that region is now classically allowed. You can't tell that originally the electron didn't have enough energy. This is intricately intertwined with the HUP. I don't have a very good proof, but there are rigorous proofs available, and you can search for them. =) P: 842 I agree with everything you say MatterWave, but not sure how it contradicts the fact that there are wavelike-characteristics with emphasis on "like". I am not saying it disproves the particle theory, just that there is wavelike nature describing the experiment. Just trying to say that wave equations are not all based off statistical results, otherwise it would be a boring subject and talk of a single particle borrowing energy, or interfering with itself would no longer be a subject. P: 842 Quote by Matterwave The rough proof is that if you tried to measure the electron in the classically forbidden region, you must necessarily boost the energy of the electron such that that region is now classically allowed. You can't tell that originally the electron didn't have enough energy. This is intricately intertwined with the HUP. ) Oh. Didn't know that. Understandable. P: 166 I appreciate your input, LostConjugate. I hope my continued questioning doesn't bother you. I do have a few layman's books on the subject and I feel pretty comfortable with things like probability distributions. I know that, in a non-controlled environment, particles will do (essentially) whatever they want without us being able to track exact locations or states (uncertainty principle). I also know that, in more natural environments, many electrons are required to "fulfill" a probability distribution. Without enough electrons, it's a free-for-all. I think even WITH enough particles it's still a free-for-all, but we only care how they perform as a group. I think I'm most interested in controlled experiments, like the double-slit one. What I'm picking up here is that the wavelike properties of electrons are described from the group that collectively display the wave pattern. Is that right? But a single photon CAN display both wave and particle dualities. Is this right? Related Discussions Astrophysics 10 Introductory Physics Homework 1 Special & General Relativity 2 Introductory Physics Homework 2 Quantum Physics 1
2,430
11,273
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.625
3
CC-MAIN-2014-10
longest
en
0.965596
https://avatest.org/2022/08/23/structural-mechanics%E4%BB%A3%E8%80%83cive514-tensor-invariants/
1,721,123,608,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514742.26/warc/CC-MAIN-20240716080920-20240716110920-00819.warc.gz
101,491,900
19,891
Posted on Categories:Structural Mechanics, 物理代写, 结构力学 # 物理代写|结构力学代写Structural Mechanics代考|CIVE514 Tensor invariants avatest™ ## avatest™帮您通过考试 avatest™的各个学科专家已帮了学生顺利通过达上千场考试。我们保证您快速准时完成各时长和类型的考试,包括in class、take home、online、proctor。写手整理各样的资源来或按照您学校的资料教您,创造模拟试题,提供所有的问题例子,以保证您在真实考试中取得的通过率是85%以上。如果您有即将到来的每周、季考、期中或期末考试,我们都能帮助您! •最快12小时交付 •200+ 英语母语导师 •70分以下全额退款 ## 物理代写|结构力学代写Structural Mechanics代考|Tensor invariants Tensor invariants. In subsequent chapters we will have occasions to wonder whether there are properties of the tensor components that do not depend upon the choice of basis. These properties will be called tensor invariants. The identities of Eqn. (43) will be useful in proving the invariance of these properties. The argument will go something like this: Let $f\left(T_{i j}\right)$ be a function of the components of the tensor $\mathbf{T}$. Under a change of basis, we can write this function in the form $f\left(Q_{i k} Q_{j i} T_{k 1}\right)$. If the function has the property that $$f\left(Q_{i k} Q_{j l} T_{k l}\right)=f\left(T_{i j}\right)$$ then the function $f$ is a tensor invariant. Since it does not depend upon the coordinate system, we can say that it is an intrinsic function of the tensor $T$, and write $f(\mathrm{~T})$. Three fundamental tensor invariants are given by $$f_{1}(\mathbf{T}) \equiv T_{i i} \quad f_{2}(\mathbf{T}) \equiv T_{i j} T_{j i} \quad f_{3}(\mathbf{T}) \equiv T_{i j} T_{j k} T_{k i}$$ ## 物理代写|结构力学代写Structural Mechanics代考|Eigenvalues and eigenvectors of symmetric tensors Eigenvalues and eigenvectors of symmetric tensors. A tensor has properties independent of any basis used to characterize its components. As we have just seen, the components themselves have mysterious properties called invariants that are independent of the basis that defines them. It seems reasonable to expect that we might be able to find a representation of a tensor that is canonical. Indeed, this canonical form is the spectral representation of the tensor that can be built from its eigenvalues and eigenvectors. In this section we shall build the mathematics behind the spectral representation of tensors. Recall that the action of a tensor is to stretch and rotate a vector. Let us consider a symmetric tensor $\mathbf{T}$ acting on a unit vector $\mathbf{n}^{\dagger}$ If the action of the tensor is simply to stretch the vector but not to rotate it then we can express it as $$\mathbf{T n}=\mu \mathbf{n}$$ where $\mu$ is the amount of the stretch. This equation, by itself, begs the question of existence of such a vector $\mathbf{n}$. Is there any vector that has the special property that action by $\mathrm{T}$ is identical to multiplication by a scalar? Is it possible that more than one vector has this property? Equation (47) is called an eigenvalue problem. Eigenvalue problems show up all over the place in mathematical physics and engineering. The tensor in three dimensional space is a great context in which to explore the eigenvalue problem because the computations are quite manageable (as opposed to, say, solving the vibration eigenvalue problem of structural dynamics on a structure with a million degrees of freedom). ## 物理代写|结构力学代写Structural Mechanics代考| Tensor invariants (43)的晅等式侍有助于证明这些性质的不变性。争论将变成䢒样: 让 $f\left(T_{i j}\right)$ 是张量分量的函数 $\mathbf{T}$. 在基数栾化的情况下,我们可 以将此函数编写为 $f\left(Q_{i k} Q_{j i} T_{k 1}\right)$. 如果函数具有以下属性: $$f\left(Q_{i k} Q_{j l} T_{k l}\right)=f\left(T_{i j}\right)$$ $$f_{1}(\mathbf{T}) \equiv T_{i i} \quad f_{2}(\mathbf{T}) \equiv T_{i j} T_{j i} \quad f_{3}(\mathbf{T}) \equiv T_{i j} T_{j k} T_{k i}$$ ## 物理代写|结构力学代写Structural Mechanics代考| Eigenvalues and eigenvectors of symmetric tensors $$\mathbf{T n}=\mu \mathbf{n}$$ ## MATLAB代写 MATLAB 是一种用于技术计算的高性能语言。它将计算、可视化和编程集成在一个易于使用的环境中,其中问题和解决方案以熟悉的数学符号表示。典型用途包括:数学和计算算法开发建模、仿真和原型制作数据分析、探索和可视化科学和工程图形应用程序开发,包括图形用户界面构建MATLAB 是一个交互式系统,其基本数据元素是一个不需要维度的数组。这使您可以解决许多技术计算问题,尤其是那些具有矩阵和向量公式的问题,而只需用 C 或 Fortran 等标量非交互式语言编写程序所需的时间的一小部分。MATLAB 名称代表矩阵实验室。MATLAB 最初的编写目的是提供对由 LINPACK 和 EISPACK 项目开发的矩阵软件的轻松访问,这两个项目共同代表了矩阵计算软件的最新技术。MATLAB 经过多年的发展,得到了许多用户的投入。在大学环境中,它是数学、工程和科学入门和高级课程的标准教学工具。在工业领域,MATLAB 是高效研究、开发和分析的首选工具。MATLAB 具有一系列称为工具箱的特定于应用程序的解决方案。对于大多数 MATLAB 用户来说非常重要,工具箱允许您学习应用专业技术。工具箱是 MATLAB 函数(M 文件)的综合集合,可扩展 MATLAB 环境以解决特定类别的问题。可用工具箱的领域包括信号处理、控制系统、神经网络、模糊逻辑、小波、仿真等。
1,533
4,320
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.59375
4
CC-MAIN-2024-30
latest
en
0.654437
https://www.backyardchickens.com/threads/temp-of-air-vs-temp-of-egg.955873/
1,506,082,975,000,000,000
text/html
crawl-data/CC-MAIN-2017-39/segments/1505818688940.72/warc/CC-MAIN-20170922112142-20170922132142-00026.warc.gz
777,886,844
17,740
# Temp of air vs. temp of egg Discussion in 'Incubating & Hatching Eggs' started by JulesFlock, Feb 28, 2015. 1. ### JulesFlockOut Of The Brooder 68 1 41 Jan 2, 2015 Hi everyone, I am on Day 1 of my first hatch. I'm so excited to see how many and who hatches. I set far more eggs than I ever planned but such is life and chicken math. My incubator is holding at a perfect 99 degrees and the thermostat has held that round the clock so far. My humidity is between 28 - 35% so I am happy with that as well. So far, knock on wood, so good. My only question is...I have a Brinsea thermometer w/ a probe in a water wiggler and that temp is holding around 98.6-98.8. It does not hit 99 but I'm thinking that is ok. Just as long as the temp in the incubator is 99, is that right?? Thank you so much! 2. ### RidgerunnerChicken Obsessed 20,118 3,322 496 Feb 2, 2009 Northwest Arkansas Which thermometer is the most accurate? Due to manufacturing tolerances not all thermometers read exactly correctly. There are different methods to calibrate a thermometer but I really don’t trust any until I have calibrated them. Some thermometers are accurate to within 1 to 2 degrees. Some are made to be accurate to within 0.1 degree. I’m not talking about them reading the temperature correctly, I’m talking about precision. If the temperature is truly 99 degrees, some thermometers may read 98 degrees one time and 100 degrees another even if they are reading the right temperature. For incubation you need one that is precise to within 0.1 degree. You are on Day 1. The egg itself is a lot denser than air. It can take a long time for the temperature inside the egg to match the air temperature. That’s why peaks and valleys in air temperature inside an incubator isn’t a real disaster as long as they don’t last too long. It’s the average temperature that is important. Has that water wiggler had time to stabilize? In the middle of an egg, the perfect temperature is 99.5 degrees Fahrenheit. You’re real close to that if those thermometers are accurate. If that water wiggler is really accurate you will still get a good hatch, it just may be a little late. I don’t see any real cause for concern as long as you trust your thermometers. I’d be real cautious about adjusting the temperature with it being that close. You don’t want to cook those eggs and some temperature adjustments can be really touchy. Too much heat is worse than too little, though if you are within a degree or so either way you are doing well. 3. ### JulesFlockOut Of The Brooder 68 1 41 Jan 2, 2015 Thank you for replying! That makes sense. Before setting I calibrated the hygrometer w another that is used in a meat curing room in a restaurant and am told it's accurate. They were the same temp and off by 3 on the humidity in my salt water test. I stuck the never used brinsea in my mouth (prob stupid) to see if it would hit 98.6. It didn't make it to 98 so I'm going w that one as off by almost a full degree. I hope I'm right. I decided to err on the side of being a few tenths below 99.5 or right on instead of being over the whole time. I'll have to wait and see if I'm right. Stressful!!! Thanks again. 4. ### JulesFlockOut Of The Brooder 68 1 41 Jan 2, 2015 After posting tonight I decided to candle. I know it's only day 4 but the suspense of this temp concern is KILLING me. I only spot checked since it's so early but the vast majority I checked had very visible veins, clear as day. Some were hard to tell on b/c of the shell color so I feel good leaving the temp as is. Thanks again! 5. ### AmyLynn2374Humidity Queen 15,019 2,498 416 Oct 11, 2014 Gouverneur, NY LOL. Never make excuses for your decisions. My rule #1. You've got a gut, go with it. Yes, sometimes it will be wrong, but then it's a learning experience and we know not to repeat that action. What someone may not think is right to do, may work perfectly fine for you. I spot check every night once they start developing. I'm now anal about keeping an eye on my air cells and I like to check progress of my chicks as well. I usually only do 3-4 eggs on a nightly basis, except the days 7/14/18 and then I candle all and mark air cells. I do take precautions such as hand washing, gentleness and I do not keep them out for long periods. (Yes, I am a candling addict.) I find no ill effects to my method. If I did, I would change it. There are people that have a hands off philosophy, and that's fine, they are doing what is comfortable for them, but I won't feel guilty for my methodology and choices I make because it's uncomfortable to them. We all have different comfort levels, but we need to accept the differences in others. There is aways that point where you make a decision that someone else has warned you against, and it doesn't work and you realize that they were probably right, but sometimes you just learn by making the mistakes, how else are you going to know what works for you? Yay for the developing eggs!! It's so exciting to know and see that life developing!
1,265
5,037
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.578125
3
CC-MAIN-2017-39
longest
en
0.937885
https://en.m.wikisource.org/wiki/Encyclop%C3%A6dia_Britannica,_Ninth_Edition/Feet
1,696,441,044,000,000,000
text/html
crawl-data/CC-MAIN-2023-40/segments/1695233511386.54/warc/CC-MAIN-20231004152134-20231004182134-00759.warc.gz
254,518,142
95,480
# Encyclopædia Britannica, Ninth Edition/Weights and Measures From volume XXIV of the work. WEIGHTS AND MEASURES THIS subject may be best divided for convenience of reference into three parts:— including the facts and data usually needed for scientific reference; including the principles of research and results in ancient metrology; and including the weights and measures of modern countries as used in commerce. I. Scientific. A unit of length is the distance between two points defined by some natural or artificial standard, or a multiple of that. For instance, in Britain the unit of the yard is defined by the distance between two parallel lines on gold studs sunk in a bar of bronze, when at 62° F., which bar is preserved in the Standards Office. There are other units, such as the inch, foot, mile, &c.; but, as these are aliquot parts or multiples of the yard, there is no separate standard provided for them. A unit is an abstract quantity, represented by a certain standard, and more or less perfectly by copies of the standard. A unit of mass is the matter of a standard of mass, or a multiple of that. For instance, in Britain the unit of the pound is defined by a piece of platinum preserved in the Standards Office. A unit of weight is the attractive force exerted between a unit of mass and some given body at a fixed distance,—this force being the weight of the unit in relation to the given body, or any other body of equal mass.[1] Usually the given body is the earth, and the distance a radius of the earth. For instance, the unit of weight in Britain is the attraction between the earth and the standard pound when that is placed at sea-level at London, in a vacuum. For astronomical comparisons the unit of mass is the sun, and a unit of weight is not needed. Standards of length are all defined on metal bars at present in civilized countries. Various natural standards have been proposed, such as the length of the polar diameter of the earth (inch), the circumference of the earth (metre) in a given longitude, a pendulum vibrating in one second at a fixed distance from the earth, a wave of light emitted by an incandescent gas, &c. But the difficulty of ascertaining the exact value of these lengths prevents any material standard being based upon them with the amount of accuracy that actual measurements, to be taken from the standard, require. A natural standard is therefore only a matter of sentiment. Standards of length are of two types, the defining points being either at a certain part of two parallel lines engraven in one plane (a line-standard), or else points on two parallel surfaces, which can only be observed by contact (an end-standard). The first type is always used for accurate purposes. Units of surface are always directly related to standards of length, without any separate standards. Volume is either determined by the lineal dimensions of a space or a solid, or, for accurate purposes, by the mass of water contained in a volume at a given temperature, which again is measured either by liquid measure, or, more accurately, by weight. The standard of volume in Britain is a hollow cylinder of bronze, with a plate glass cover, when at 62° (gallon), legally defined as 277·274 cubic inches, or containing 10 pounds of water at 62°F. Comparisons.—Lengths nearly equal are compared accurately by fixing two micrometer microscopes with their axes parallel, and at the required distance apart, on a massive support which will not quickly vary with temperature; then the two lengths to be compared, e.g., the standard yard and another, are alternately placed beneath the microscopes, and their lengths observed several times. The error of a single observation in the Standards Department is stated to be a 100,000th of an inch. For fractional lengths a divided bar is required, the accuracy of which is ascertained by a shorter measure, which can be compared with successive sections of the whole length by micrometers. For ordinary purposes, where not less than ·001 inch is to be observed, measures may be placed in contact if one is divided on the edge, and the comparison made with a magnifier. In large field-work the ends of a measure are transferred vertically to the ground by a small transit instrument or theodolite. End-measures between surfaces are read by means of a pair of contact pieces bearing line marks, the value of which is ascertained separately, or by a second end-measure; if both measures bear a line for observation, reversals then give the value of each measure. Volumes are always most accurately defined by their weight of water, as weighing can be more accurately done than measuring. If the volume is hollow, it may be filled with water and closed with a sheet of plate glass, or if solid the body may be weighed in water and out, the difference giving the weight of its volume of water. Unfortunately the relation between water-weight and absolute volume is not yet accurately known. Volumes of liquid are similarly ascertained by their weight. Volumes of gas are measured in a graduated glass vessel inverted over a liquid, or for commercial purposes by some form of registering flow-meter. Masses are compared by the Balance (q.v.), which may be made to indicate a 100,000,000th of the mass. They may also be estimated, not by their attractive force being balanced by an equal mass, but by the elasticity of a spring; this, which is the only true weighing-machine showing weight and not mass, is useful for rough purposes, owing to its quick indication; the most accurate form probably is that with angular readings.[2] Temperature and the Atmosphere.—All the serious difficulties of weighing and measuring result from these causes, the effects of which and their corrections we will briefly notice. In measurement, since all bodies expand by heat, the temperature at which any measure or standard bar represents the abstract unit requires to be accurately stated and observed, the accuracy of optical observation being about equal to 1100 of a degree F. of expansion in a standard. Great accuracy is therefore needed in the manufacture and reading of thermometers, and care that the standard and the thermometer shall be at the same temperature. Another method is to attach a parallel bar of very expansible metal to one end of the standard, and read its length on the standard at the other end; this ensures a more thorough uniformity of mean temperature between the standard and the heat-measurer. The most accurate method is by immersing the measures in a liquid, of which the temperature is read by several thermometers; but this is scarcely needful unless high or low temperatures are required to ascertain the rate of expansion. A room with thick walls, double windows, and the temperature regulated by a gas stove is practically sufficiently equable for comparisons. The temperature adopted for the standards is not the same in different countries. In Britain 62° F. has been adopted since the revision of the standards in 1822, as being a convenient average temperature for work; but, as it is purely a temperature of convenience, the rather higher point of 68° F. would be better. In any case an aliquot part of the thermal unit from freezing to boiling of water should be adopted; 62° is 16 and 68° is 15 of this interval. Whether a much higher temperature would not be more conducive to accuracy is a question; 92°, or 13 of the thermal unit, would be so near the temperature of the observer's skin and breath that measures and balances could be approached with less production of error; and such a heat does not at all hinder accurate observing. The French temperature of 32° F. for standards has abandoned all other considerations in favour of readily fixing the temperature in practice by melting ice. This is a ready means of regulation, but a point so far from ordinary working temperatures has two great disadvantages: the observer's warmth produces more error, and the corrections for all observations not iced are so large that the rates of expansion require to be known very accurately for every substance employed. For water their standard temperature is 39°.2 F., when it is at its maximum density; this has the advantage that the density varies less with temperature than at any other point, but it is very doubtful if this is much used for actual work. No substance expands uniformly with temperature, most materials expanding more rapidly at higher temperatures. The expansion of rods of the following metals, of 100 inches long, is given in decimals of an inch for the 90° from 32° to 122° F. (0 to 50° C.), and from 122° to 212° F. (50° to 100° C.):[3] Platinum. Platino-iridium. Steel. Iron. Bronze. Brass. Zinc. 32° to 122° .0445 .0435 .0536 .0591 .0876 .0915 .1469 122° to 212° .0471 .0454 .0574 .0637 .0927 .0964 .1437 But variations of 3 or 4 per cent. may easily be found in the rates of different specimens apparently alike; hence the individual expansion of every important measure needs to be ascertained. Weighing is complicated by being done in a dense and variable atmosphere, unless—as in the most refined work—the whole balance is placed in a vacuum. When in the air all bodies placed in the balance must, for accurate purposes, have their volume known; and the weight of an equal volume of such air as they are weighed in must be added to their apparent weight to get their true weight. The weight of air displaced by a pound of the following materials is given in grains, at temperature 62° F., barometer 30 inches,—also with barometer 29 inches (temperature 62°), and with temperature 32° (barometer 30 inches), to illustrate the variation[4] (allowing for contraction of the material as well):— Platinum. Brass. GiltBronze. Iron, with LeadAdjustment. Quartz. Glass. Water. Sp. Gr. 21.157 8.143 8.283 7.127 2.650 2.518 1.000 62°, 30 .403 1.047 1.029 1.196 3.217 3.385 8.523 62°, 29 .390 1.012 .995 1.156 3.110 3.272 8.240 32°, 30 .429 1.112 1.093 1.271 3.422 3.600 9.056 The above is for London at sea-level; but where the force of gravity is less 30 inches height of mercury will weigh less, and will therefore balance a less weight of air; the air allowance must therefore be less for 30 inches of mercury barometer in lower latitudes and greater heights over sea-level. The change, for instance, in the allowance of air equal to the brass pound will make it, instead of 1.047 grains, become 1.046 when 15,000 feet above the sea, or 10° S. of London. Hence this reduction need rarely be noticed. The composition of the air also varies, and most seriously in the amount of aqueous vapour; the above is ordinary air, but if quite dry the 1.047 grains would become 1.052 grains; the change in carbonic acid is quite immaterial, unless in very close rooms, so that it may be concluded that the moisture of the air is the main point to be noted, after its temperature and pressure,—small errors in any of these three data making far more difference than any other compensation that can be made in the weight of air. The more complex allowances for the expansion of water in glass, brass, or other vessels we need not enter on here; the principles are simple, but the data require to be accurately determined for the material in question. The expansion of water is, however, so often in question, especially for taking specific gravities, that it is here given. A constant volume which contains or displaces 10,000 grains of water at 62° will contain[5] At 32° F. (0° C.), 10,009.84 grains. At 62° F. (1623° C.), 10,000.00 grains. .mw-parser-output .__ditto{display:inline-block;position:relative;text-indent:0}.mw-parser-output .__ditto_hidden{visibility:hidden;color:transparent;white-space:nowrap}.mw-parser-output .__ditto_text{display:inline-block;position:absolute;left:0;width:100%;white-space:nowrap}At„ 39°.2 F. (4° C.), 10,011.20 grains.„ At„ 68° F. (20° C.), 9,993.76 grains.„ At„ 50° F. (10° C.), 10,008.89 grains.„ At„ 86° F. (30° C.), 9,968.76 grains.„ Hence if a specific gravity is observed at any of these temperatures it must be × the corresponding weight ÷ 10,000 to reduce it to a comparison with water at 62°; the expansion of the body observed is another question altogether, and must be compensated also. The weight of a cubic inch, or other linearly measured volume, of water is not yet very accurately known. The observations have been made by weighing closed hollow metal cases in and out of water (thus obtaining the weight of an equal volume of water), and then gauging the size of the case with exactitude. Cubes, cylinders, and spheres have been employed. The results are:[6] Cubic Inch at 62° F. Cubic Foot at 62° F. Cubic Decimetre at 4° C. Grains. Ounces. Grammes. 1795 1. In France, by Lefevre-Gineau (legal French)................................................................................................................................................................................................................................................................................................................................................................................................ 252.603 997.70 1000.000 1797 ${\displaystyle \scriptstyle {\left.{\begin{matrix}\ \\\ \end{matrix}}\right\}\,}}$ 1. In England, by Shuckburgh and Kater (legal British)................................................................................................................................................................................................................................................................................................................................................................................................ ${\displaystyle \scriptstyle {\left.{\begin{matrix}\ \\\ \end{matrix}}\right\}\,}}$ 252.724 998.18 1000.480 1821 1825 ${\displaystyle \scriptstyle {\left\{{\begin{matrix}\ \\\ \end{matrix}}\right.}}$ 1. In Sweden, by Berzelius, Svanberg, and Akermann................................................................................................................................................................................................................................................................................................................................................................................................ ${\displaystyle \scriptstyle {\left.{\begin{matrix}\ \\\ \end{matrix}}\right\}\,}}$ 252.678 998.00 1000.296 1830 1. In Austria, by Stampfer................................................................................................................................................................................................................................................................................................................................................................................................ 252.515 997.35 999.653 1841 1. In Russia, by Kupffer................................................................................................................................................................................................................................................................................................................................................................................................ 252.600 997.69 999.989 National Standards and Copies.—Having now noticed the principles and constants involved, we will consider the British and metric standards, the only ones now used in scientific work. The imperial standard yard is a bronze bar 38 inches long, 1 inch square; the defining lines, 36 inches apart, are cut on gold studs, sunk in holes, so that their surface passes through the axis of the bar. Thus flexure does not tend to tip the engraved surfaces nearer or farther apart. This bar when in use rests on a lever frame, which supports it at 8 points, 4.78 inches apart, on rollers which divide the pressure exactly equally.[7] This standard is in actual use for all important comparisons at the Standards Office. Four copies, which are all equal to it, within 16° of temperature, are deposited in other places in case of injury or loss of the standard. The standard pound is a thick disk of platinum about 116 inches across, and 1 inch high, with a shallow groove around it near the top. Four copies are deposited with the above copies of the yard. For public use there are a series of end-standards exposed on the outer wall of Greenwich observatory; and a length of 100 feet, and another of 66 feet (1 chain), marked on brass plates let into the granite step along the back of Trafalgar Square. As this is a practically invariable earth-fast standard, and most convenient for reference, it is important to know the minute errors of it, as determined by the Standards Department.[8] Starting from 0 the errors are in inches— at 0 10 20 30 40 50 60 70 80 90 100 feet. error 0 − 0.007 − 0.019 − 0.022 − 0.015 − 0.008 − 0.007 + 0.011 + 0.021 + 0.17 − 0.008 inches. The mean uncertainty in these values is .003, and the greatest uncertainty .01. The total length of the chain standard is − .019 inch from the truth. There is also a public balance provided at Greenwich observatory, which shows the accuracy of any pound weight placed upon it, For important scientific standards comparisons are made gratuitously, as a matter of courtesy, by the officials of the Standards Office, 7 Old Palace Yard, Westminster. The most delicate weighings are all performed in a vacuum case with glass sides, which is so constructed that the weights can be exchanged from one arm to the other without opening the case, so as to obtain double weighings. The toleration of error in copies for scientific purposes, by the Standards Department, is .0005 inch on the yard or lesser lengths, about equal to 15 divisions of the micrometer; on the pound .0025 grain, about 12 a division of the official balances; on the ounce .001 grain; on the gallon 1 grain; and on the cubic foot 4 grains. The toleration for commercial copies is .005 on the yard, .001 on the foot and under, and .1 grain on weights of 1 ounce to 1 pound.[9] The Standards Commission of 1851 recommended a limit of 1 in 20,000. For practical work of moderate accuracy the most convenient forms of measures are—for lengths under a foot, feather-edge metal scales divided to 150 inch (finer divisions are only confusing, and 11000 of an inch can be safely read by estimation); for lengths of 1 to 10 feet, metal tubes or deep bars bearing line-divisions, and with permanent feet attached at 21 per cent. from either end, so that the deformation by flexure is always the same; for long distances a steel tape with fine divisions scratched across it and numbered by etching. The most accurate way of using such a tape is not to prepare a flat bed for it, but to support it at points not more than 50 feet apart; then by observing the distances and levels of these points, and knowing the weight of the tape, the correction for the sloping distance between the points and the difference of the catenary length of the tape from the straight distance can be precisely calculated; the corrections for stretching of the tape (best done by a lever arm with fixed weight), and for temperature, are all that are needful besides. The first French standard metre (of 1799) is a platinum bar end-standard of about 1 inch wide and 17 inch thick; the new standard of the International Metric Commission is a line-standard of platino-iridium, 40 inches long and .8 inches square, grooved out on all four sides so that its section is between × and ʜ form; this provides the greatest rigidity, and also a surface in the axis of the bar to bear the lines of the standard. The new standard kilogramme, like the old one, is a cylinder of platinum of equal diameter and height. These new standards are preserved in the International Metric Bureau at Paris, to which seventeen nations contribute in support and direction, and in which the most refined methods of comparison are adopted. For lineal comparisons the alternate substitution of the measures on a sliding bed beneath fixed micrometer microscopes is provided as in the British office, and a bath for the heating of one measure in a liquid to ascertain its expansion. For weighing four balances are provided, each with mechanism for the transposition of the weights, and the lowering of the balance into play on its bearings, so that weighings can be performed at 13 feet distance from the balance, thus avoiding the disturbance caused by the warmth of the observer. The readings of the balance scale are made by a fixed telescope, the motion being observed by the reflexion of a fixed scale from a mirror attached to the beam of the balance. In this bureau are also an equally fine hydrostatic balance for taking specific gravities by water weighing, a standard barometer, and an air thermometer, with all subsidiary apparatus. The special work of the bureau is the construction and comparison of metric standards for all the countries supporting it, and for scientific work of all kinds. The legal theory of the British system of weights and measuresis—(A) the standard yard, with all lineal measures and their squares and cubes based upon that; (B) the standard pound of 7000 grains, with all weights based upon that, with the troy pound of 5760 grains for trade purposes; (C) the standard gallon (and multiples and fractions of it), declared to contain 10  of water at 62° F., being in volume 277.274 cubic inches, which contain each 252.724 grains of water in a vacuum at 62°, or 252.458 grains of water weighed with brass weights in air of 62° with the barometer at 30 inches. The legal theory of the metric system of weights and measures is—(A) the standard metre, with decimal fractions and multiples thereof; (B) the standard kilogramme, with decimal fractions and multiples thereof; (C) the litre (with decimal fractions and multiples), declared to be a cube of 116 metre, and to contain a kilogramme of water at 4° C. in a vacuum. No standard litre exists, all liquid measures being legally fixed by weight. The metre was supposed, when established in 1799, to be a ten-millionth of the quadrant of the earth through Paris; it differs from this theoretical amount by about 1 in 4000. The legal equivalents between the British and French systems are—metre = 39.37079 inches; kilogramme = 15432.34874 grains. By the more exact comparisons of Captain Clarke (1866) the metre (at 0° C., 32° F.) is equal to 39.37043196 inches of the yard at 62° F.; but Rogers in 1882 compared the metre as 39.37027. It must always be remembered that a French metre of perfect legal exactitude will, by expanding from 32° to 62° F., become equal to a greater number of inches when the two measures are placed together; thus a brass metre is equal to 39.382 inches when compared with British measures at the same temperature, and this is its true commercial equivalent. The kilogramme determination above is that of Professor Miller (1844), against the kilogramme des Archives, but in 1884 the international kilogramme yielded 15432.35639. For further details, see H. W. Chisholm, Weighing and Measuring (Nature series), 1877, and Reports of the Warden of the Standards, subsequently of the Standards Department (all in British Museum Newspaper Room), for all practical details,—especially reports on metre (1868–9), errors of grain weights (1872), principles of measuring (long paper of German Standards Commission, translated 1872), Trafalgar Square standards (1876), density of water (1883), toleration of error, British and international (1883), standard wire and plate gauges in inches (1883), besides numerous practical tables, mainly in the earlier numbers before the wardenship was merged in the Board of Trade. II. Historical. Though no line can be drawn between ancient and modern metrology, yet, owing to neglect, and partly to the scarcity of materials, there is a gap of more than a thousand years over which the connexion of units[10] of measure is mostly guess-work. Hence, except in a few cases, we shall not here consider any units of the Middle Ages. A constant difficulty in studying works on metrology is the need of distinguishing the absolute facts of the case from the web of theory into which each writer has woven them,—often the names used, and sometimes the very existence of the units in question, being entirely an assumption of the writer. Therefore we shall here take the more pains to show what the actual authority is for each conclusion. Again, each writer has his own leaning: Böckh, to the study of water-volumes and weights, even deriving linear measures therefrom; Queipo, to the connexion with Arabic and Spanish measures; Brandis, to the basis of Assyrian standards; Mommsen, to coin weights; and Bortolotti to Egyptian units; but Hultsch is more general, and appears to give a more equal representation of all sides than do other authors. In this article the tendency will be to trust far more to actual measures and weights than to the statements of ancient writers; and this position seems to be justified by the great increase in materials, and the more accurate means of study of late. The usual arrangement by countries has been mainly abandoned in favour of following out each unit as a whole, without recurring to it separately for every locality. The materials for study are of three kinds. (1) Literary, both in direct statements in works on measures (e.g., Elias of Nisibis), medicine (Galen), and cosmetics (Cleopatra), in ready-reckoners (Didymus), clerk's (kátib's) guides, and like handbooks, and in indirect explanations of the equivalents of measures mentioned by authors (e.g., Josephus). But all such sources are liable to the most confounding errors, and some passages relied on have in any case to submit to conjectural emendation. These authors are of great value for connecting the monumental information, but must yield more and more to the increasing evidence of actual weights and measures. Besides this, all their evidence is but approximate, often only stating quantities to a half or quarter of the amount, and seldom nearer than 5 or 10 per cent.; hence they are entirely worthless for all the closer questions of the approximation or original identity of standards in different countries; and it is just in this line that the imagination of writers has led them into the greatest speculations, unchecked by accurate evidence of the original standards. (2) Weights and measures actually remaining. These are the prime sources, and, as they increase and are more fully studied, so the subject will be cleared and obtain a fixed basis. A difficulty has been in the paucity of examples, more due to the neglect of collectors than the rarity of specimens. The number of published weights did not exceed 600 of all standards a short time ago; but the collections in the last three years from Naucratis (28),[11] Defenneh (29), and Memphis (44) have supplied over six times this quantity, and of an earlier age than most other examples, while existing collections have been more thoroughly examined; hence there is need for a general revision of the whole subject. It is above all desirable to make allowances for the changes which weights have undergone; and, as this has only been done for the above Egyptian collections and that of the British Museum, conclusions as to the accurate values of different standards will here be drawn from these rather than Continental sources. (3) Objects which have been made by measure or weight, and from which the unit of construction can be deduced. Buildings will generally yield up their builder's foot or cubit when examined (Inductive Metrology, p. 9). Vases may also be found bearing such relations to one another as to show their unit of volume. And coins have long been recognized as one of the great sources of metrology,—valuable for their wide and detailed range of information, though most unsatisfactory on account of the constant temptation to diminish their weight, a weakness which seldom allows us to reckon them as of the full standard. Another defect in the evidence of coins is that, when one variety of the unit of weight was once fixed on for the coinage, there was (barring the depreciation) no departure from it, because of the need of a fixed value, and hence coins do not show the range and character of the real variations of units as do buildings, or vases, or the actual commercial weights. Principles of Study.—(1) Limits of Variation in Different Copies, Places, and Times.—Unfortunately, so very little is known of the ages of weights and measures that this datum—most essential in considering their history—has been scarcely considered. In measure, Egyptians of Dynasty IV. at Gizeh on an average varied 1 in 350 between different buildings (27). Buildings at Persepolis, all of nearly the same age, vary in unit 1 in 450 (25). Including a greater range of time and place, the Roman foot in Italy varied during two or three centuries on an average 1400 from the mean. Covering a longer time, we find an average variation of 1200 in the Attic foot (25), 1150 in the English foot (25), 1170 in the English itinerary foot (25). So we may say that an average variation of 1400 by toleration, extending to double that by change of place and time, is usual in ancient measures. In weights of the same place and age there is a far wider range; at Defenneh (29), within a century probably, the average variation of different units is 136, 160, and 167, the range being just the same as in all times and places taken together. Even in a set of weights all found together, the average variation is only reduced to 160 in place of 136 (29). Taking a wider range of place and time, the Roman libra has an average variation of 150 in the examples of better period (43), and in those of Byzantine age 135 (44). Altogether, we see that weights have descended from original varieties with so little intercomparison that no rectification of their values has been made, and hence there is as much variety in any one place and time as in all together. Average variation may be said to range from 140 to 170 in different units, doubtless greatly due to defective balances. 2. Rate of Variation.—Though large differences may exist, the rate of general variation is but slow,—excluding, of course, all monetary standards. In Egypt the cubit lengthened 1170 in some thousands of years (25, 44). The Italian mile has lengthened 1100 since Roman times (2) ; the English mile lengthened about 1300 in four centuries (31). The English foot has not appreciably varied in several centuries (25). Of weights there are scarce any dated, excepting coins, which nearly all decrease; the Attic tetradrachm, however, increased 150 in three centuries (28), owing probably to its being below the average trade weight to begin with. Roughly dividing the Roman weights, there appears a decrease of 140 from imperial to Byzantine times (43). 3. Tendency of Variation.—This is, in the above cases of lengths, to an increase in course of time. The Roman foot is also probably 1300 larger than the earlier form of it, and the later form in Britain and Africa perhaps another 1300 larger (25). Probably measures tend to increase and weights to decrease in transmission from time to time or place to place; but far more data are needed to study this. 4. Details of Variation.—Having noticed variation in the gross, we must next observe its details. The only way of examining these is by drawing curves (28, 29), representing the frequency of occurrence of all the variations of a unit; for instance, in the Egyptian unit—the katcounting in a large number how many occur between 140 and 141 grains, 141 and 142, and so on; such numbers represented by curves show at once where any particular varieties of the unit lie (see Naukratis, i. p. 83). This method is only applicable where there is a large number of examples; but there is no other way of studying the details. The results from such a study—of the Egyptian kat, for example—show that there are several distinct families or types of a unit, which originated in early times, have been perpetuated by copying, and reappear alike in each locality (see Tanis, ii. pl. 1.). Hence we see that if one unit is derived from another it may be possible, by the similarity or difference of the forms of the curves, to discern whether it was derived by general consent and recognition from a standard in the same condition of distribution as that in which we know it, or whether it was derived from it in earlier times before it became so varied, or by some one action forming it from an individual example of the other standard without any variation being transmitted. As our knowledge of the age and locality of weights increases these criteria in curves will prove of greater value; but even now no consideration of the connexion of different units should be made without a graphic representation to compare their relative extent and nature of variation. 5. Transfer of Units.—The transfer of units from one people to another takes place almost always by trade. Hence the value of such evidence in pointing out the ancient course of trade, and commercial connexions (17). The great spread of the Phœnician weight on the Mediterranean, of the Persian in Asia Minor, and of the Assyrian in Egypt are evident cases; and that the decimal weights of the laws of Manu (43) are decidedly not Assyrian or Persian, but on exactly the Phœnician standard, is a curious evidence of trade by water and not overland. If, as seems probable, units of length may be traced in prehistoric remains, they are of great value; at Stonehenge, for instance, the earlier parts are laid out by the Phœnician foot, and the later by the Pelasgo-Roman foot (26). The earlier foot is continually to be traced in other megalithic remains, whereas the later very seldom occurs (25). This bears strongly on the Phœnician origin of our prehistoric civilization. Again, the Belgic foot of the Tungri is the basis of the present English land measures, which we thus see are neither Roman nor British in origin, but Belgic. Generally a unit is transferred from a higher to a less civilized people; but the near resemblance of measures in different countries should always be corroborated by historical considerations of a probable connexion by commerce or origin (Head, Historia Numorum, xxxvii.). It should be borne in mind that in early times the larger values, such as minæ, would be transmitted by commerce, while after the introduction of coinage the lesser values of shekels and drachmæ would be the units; and this needs notice, because usually a borrowed unit was multiplied or divided according to the ideas of the borrowers, and strange modifications thus arose. 6. Connexions of Lengths, Volumes, and Weights.—This is the most difficult branch of metrology, owing to the variety of connexions which can be suggested, to the vague information we have, especially on volumes, and to the liability of writers to rationalize connexions which were never intended. To illustrate how easy it is to go astray in this line, observe the continual reference in modern handbooks to the cubic foot as 1000 ounces of water; also the cubic inch is very nearly 250 grains, while the gallon has actually been fixed at 10  of water; the first two are certainly mere coincidences, as may very probably be the last also, and yet they offer quite as tempting a base for theorizing as any connexions in ancient metrology. No such theories can be counted as more than coincidences which have been adopted, unless we find a very exact connexion, or some positive statement of origination. The idea of connecting volume and weight has received an immense impetus through the metric system, but it is not very prominent in ancient times. The Egyptians report the weight of a measure of various articles, amongst others water (6), but lay no special stress on it; and the fact that there is no measure of water equal to a direct decimal multiple of the weight-unit, except very high in the scale, does not seem as if the volume was directly based upon weight. Again, there are many theories of the equivalence of different cubic cubits of water with various multiples of talents (2, 3, 18, 24, 33); but connexion by lesser units would be far more probable, as the primary use of weights is not to weigh large cubical vessels of liquid, but rather small portions of precious metals. The Roman amphora being equal to the cubic foot, and containing 80 libræ of water, is one of the strongest cases of such relations, being often mentioned by ancient writers. Yet it appears to be only an approximate relation, and therefore probably accidental, as the volume by the examples is too large to agree to the cube of the length or to the weight, differing ${\displaystyle \scriptstyle {\frac {1}{20}}}$, or sometimes even ${\displaystyle \scriptstyle {\frac {1}{12}}}$. All that can be said therefore to the many theories connecting weight and measure is that they are possible, but our knowledge at present does not admit of proving or disproving their exactitude. Certainly vastly more evidence is needed before we would, with Böckh (2), derive fundamental measures through the intermediary of the cube roots of volumes. Soutzo wisely remarks on the intrinsic improbability of refined relations of this kind (Étalons Ponderaux Primitifs, note, p. 4). Another idea which has haunted the older metrologists, but is still less likely, is the connexion of various measures with degrees on the earth's surface. The lameness of the Greeks in angular measurement would alone show that they could not derive itinerary measures from long and accurately determined distances on the earth. 7. Connexions with Coinage.—From the 7th century B.C. onward, the relations of units of weight have been complicated by the need of the interrelations of gold, silver, and copper coinage; and various standards have been derived theoretically from others through the weight of one metal equal in value to a unit of another. That this mode of originating standards was greatly promoted, if not started, by the use of coinage we may see by the rarity of the Persian silver weight (derived from the Assyrian standard), soon after the introduction of coinage, as shown in the weights of Defenneh (29). The relative value of gold and silver (17, 21) in Asia is agreed generally to have been 13${\displaystyle \scriptstyle {\frac {1}{3}}}$ to 1 in the early ages of coinage; at Athens in 434 B.C. it was 14 : 1; in Macedon, 350 B.C., 12${\displaystyle \scriptstyle {\frac {1}{2}}}$ : 1; in Sicily, 400 B.C., 15 : 1, and 300 B.C., 12 : 1; in Italy, in 1st century, it was 12 : 1, in the later empire 13·9 : 1, under Justinian 14·4 : 1, and in modern times 15·5 : 1, but at present 23 : 1,—the fluctuations depending mainly on the opening of large mines. Silver stood to copper in Egypt as 80 : 1 (Brugsch), or 120 : 1 (Revillout); in early Italy and Sicily as 250 : 1 (Mommsen), or 120 : 1 (Soutzo), under the empire 120 : 1, under Justinian 100 : 1; at present it is 150 : 1. The distinction of the use of standards for trade in general, or for silver or gold in particular, should be noted. The early observance of the relative values may be inferred from Num. vii. 13, 14, where silver offerings are 13 and 7 times the weight of the gold, or of equal value and one-half value. 8. Legal Regulation of Measures.—Most states have preserved official standards, usually in temples under priestly custody. The Hebrew "shekel of the sanctuary" is familiar; the standard volume of the apet was secured in the dromus of Anubis at Memphis (35); in Athens, besides the standard weight, twelve copies for public comparison were kept in the city; also standard volume measures in several places (2); at Pompeii the block with standard volumes cut in it was found in the portico of the forum (33); other such standards are known in Greek cities (Gythium, Panidum, and Trajanopolis) (11, 33); at Rome the standards were kept in the Capitol, and weights also in the temple of Hercules (2); the standard cubit of the Nilometer was before Constantine in the Serapæum, but was removed by him to the church (2). In England the Saxon standards were kept at Winchester before 950 A.D., and copies were legally compared and stamped; the Normans removed them to Westminster to the custody of the king's chamberlains at the exchequer; and they were preserved in the crypt of Edward the Confessor, while remaining royal property (9). The oldest English standards remaining are those of Henry VII. Many weights have been found in the temenos of Demeter at Cnidus, the temple of Artemis at Ephesus, and in a temple of Aphrodite at Byblus (44); and the making or sale of weights may have been a business of the custodians of the temple standards. 9. Names of Units.—It is needful to observe that most names of measures are generic and not specific, and cover a great variety of units. Thus foot, digit, palm, cubit, stadium, mile, talent, mina, stater, drachm, obol, pound, ounce, grain, metretes, medimnus, modius, hin, and many others mean nothing exact unless qualified by the name of their country or city. Also, it should be noted that some ethnic qualifications have been applied to different systems, and such names as Babylonian and Euboic are ambiguous; the normal value of a standard will therefore be used here rather than its name, in order to avoid confusion, unless specific names exist, such as kat and uten. All quantities stated in this article without distinguishing names are in British units of inch, cubic inch, or grain. Standards of Length.—Most ancient measures have been derived from one of two great systems, that of the cubit of 20.63 inches, or the digit of .729 inch; and both these systems are found in the earliest remains. 20.63 ins.—First known in Dynasty IV. in Egypt, most accurately 20.620 in the Great Pyramid, varying 20.51 to 20.71 in Dyn. IV. to VI. (27). Divided decimally in 100ths; but usually marked in Egypt into 7 palms or 28 digits, approximately; a mere juxtaposition (for convenience) of two incommensurate systems (25, 27). The average of several cubit rods remaining is 20.65, age in general about 1000 B.C. (33). At Philæ, &c., in Roman times 20.76 on the Nilometers (44). This unit is also recorded by cubit lengths scratched on a tomb at Beni Hasan (44), and by dimensions of the tomb of Ramessu IV. and of Edfu temple (5) in papyri. From this cubit, mahi, was formed the xylon of 3 cubits, the usual length of a walking-staff; fathom, nent, of 4 cubits, and the khet of 40 cubits (18); also the schœnus of 12,000 cubits, actually found marked on the Memphis-Faium road (44). Babylonia had this unit nearly as early as Egypt. The divided plotting scales lying on the drawing boards of the statues of Gudea (Nature, xxviii. 341) are 12 of 20.89, or a span of 10.44, which is divided in 16 digits of .653, a fraction of the cubit also found in Egypt. Buildings in Assyria and Babylonia show 20.5 to 20.6. The Babylonian system was sexagesimal, thus (18)— uban, 5=qat, 6=ammat, 6=qanu, 60=sos, 30=parasang, 2=kaspu. .69 inch 3.44 20.6 124 7430 223,000 446,000 Asia Minor had this unit in early times, in the temples of Ephesus 20.55, Samos 20.62; Hultsch also claims Priene 20.90, and the stadia of Aphrodisias 20.67, and Laodicea 20.94. Ten buildings in all give 20.63 mean (18, 25); but in Armenia it rose to 20.76 in late Roman times, like the late rise in Egypt (25). It was specially divided into 15th, the foot of 35ths being as important as the cubit.   12.45 ins. 35 of 20.75. This was especially the Greek derivative of the 20.63 cubit. It originated in Babylonia as the foot of that system (24), in accordance with the sexary system applied to the early decimal division of the cubit. In Greece it is the most usual unit, occurring in the Propykea at Athens 12.44, temple at Ægina 12.40, Miletus 12.51, the Olympic course 12.62, &c. (18); thirteen buildings giving an average of 12.45, mean variation .06 (25), = 35 of 20.75, m. var. .10. The digit = 14 palæste, = 14 foot of 12.4; then the system is foot, ${\displaystyle \scriptstyle {\left\{{\begin{matrix}\ \\\ \end{matrix}}\right.}}$ 112=cubit, 4=orguia. ........................................ 100= ${\displaystyle \scriptstyle {\left.{\begin{matrix}\ \\\ \end{matrix}}\right\}\,}}$ stadion. 10............. ............... =acæna, 10=plethron, 6= 12.4 inch 18.7 74.7 124.5 1245 7470 In Etruria it probably appears in tombs as 12.45 (25); perhaps in Roman Britain; and in mediæval England as 12.47 (25).   13.8 ins. 23 of 20.7. This foot is scarcely known monumentally. On three Egyptian cubits there is a prominent mark at the 19th digit or 14 inches, which shows the existence of such a measure (33). It became prominent when adopted by Philetærus about 280 B.C. as the standard of Pergamum (42), and probably it had been shortly before adopted by the Ptolemies for Egypt. From that time it is one of the principal units in the literature (Didymus, &c.), and is said to occur in the temple of Augustus at Pergamum as 13.8 (18). Fixed by the Romans at 16 digits (1313=Roman foot), or its cubit at 145 Roman feet, it was legally =13.94 at 123 B.C. (42); and 712 Philetærean stadia were = Roman mile (18). The multiples of the 20.63 cubit are in late times generally reckoned in these feet of 23 cubit. The name "Babylonian foot" used by Böckh (2) is only a theory of his, from which to derive volumes and weights; and no evidence for this name, or connexion with Babylon, is to be found. Much has been written (2, 3, 33) on supposed cubits of about 17–18 inches derived from 20.63,—mainly, in endeavouring to get a basis for the Greek and Roman feet, but these are really connected with the digit system; and the monumental or literary evidence for such a division of 20.63 will not bear examination.     17.30 56 of 20.76. There is, however, fair evidence for units of 17.30 and 1.730 or 112 of 20.76 in Persian buildings (25); and the same is found in Asia Minor as 17.25 or 56 of 20.70. On the Egyptian cubits a small cubit is marked as about 17 inches, which may well be this unit, as 56 of 20.6 is 17.2; and, as these marks are placed before the 23rd digit or 17.0, they cannot refer to 6 palms, or 17.7, which is the 24th digit, though they are usually attributed to that (33). We now turn to the second great family based on the digit. This has been so usually confounded with the 20.63 family, owing to the juxtaposition of 28 digits with that cubit in Egypt, that it should be observed how the difficulty of their incommensurability has been felt. For instance, Lepsius (3) supposed two primitive cubits of 13.2 and 20.63, to account for 28 digits being only 20.4 when free from the cubit of 20.63, the first 24 digits being in some cases made shorter on the cubits to agree with the true digit standard, while the remaining 4 are lengthened to fill up to 20.6.   .727 ins. In the Dynasties IV. and V. in Egypt the digit is found in tomb sculptures as .727 (27); while from a dozen examples in the later remains we find the mean .728 (25). A length of 10 digits is marked on all the inscribed Egyptian cubits as the "lesser span" (33). In Assyria the same digit appears as .730, particularly at Nimrud (25); and in Persia buildings show the 10-digit length of 7.34 (25). In Syria it was about .728, but variable; in eastern Asia Minor more like the Persian, being .732 (25). In these cases the digit itself, or decimal multiples, seem to have been used.     18.23 25 × .729. The pre-Greek examples of this cubit in Egypt, mentioned by Böckh (2), give 18.23 as a mean, which is 25 digits of .729, and has no relation to the 20.63 cubit. This cubit, or one nearly equal, was used in Judæa in the times of the kings, as the Siloam inscription names a distance of 1758 feet as roundly 1200 cubits, showing a cubit of about 17.6 inches. This is also evidently the Olympic cubit; and, in pursuance of the decimal multiple of the digit found in Egypt and Persia, the cubit of 25 digits was 14 of the orguia of 100 digits, the series being old digit, ${\displaystyle \scriptstyle {\left\{{\begin{matrix}\ \\\ \end{matrix}}\right.}}$ 25=cubit, 4= ${\displaystyle \scriptstyle {\left.{\begin{matrix}\ \\\ \end{matrix}}\right\}\,}}$ orguia, 10=amma, 10=stadion. 100................= .729 inch 18.2 72.9 729 7296 Then, taking 23 of the cubit, or 16 of the orguia, as a foot, the Greeks arrived at their foot of 12.14; this, though very well known in literature, is but rarely found, and then generally in the form of the cubit, in monumental measures. The Parthenon step, celebrated as 100 feet wide, and apparently 225 feet long, gives by Stuart 12.137, by Penrose 12.165, by Paccard 12.148, differences due to scale and not to slips in measuring. Probably 12.16 is the nearest value. There are but few buildings wrought on this foot in Asia Minor, Greece, or Roman remains. The Greek system, however, adopted this foot as a basis for decimal multiplication, forming foot, 10=acæna 10=plethron, 12.16 inches 121.6 1216 which stand as 16th of the other decimal series based on the digit. This is the agrarian system, in contrast to the orguia system, which was the itinerary series (33). Then a further modification took place, to avoid the inconvenience of dividing the foot in 1623 digits, and a new digit was formed—longer than any value of the old digitof 116 of the foot, or .760, so that the series ran digit ${\displaystyle \scriptstyle {\left\{{\begin{matrix}\ \\\ \end{matrix}}\right.}}$ 10=lichas 96............. =orguia 10=amma, 10=stadion. ·76 inch 7·6 72·9 729 7290 This formation of the Greek system (25) is only an inference from the facts yet known, for we have not sufficient information to prove it, though it seems much the simplest and most likely history. 11·62 16 × ·726. Seeing the good reasons for this digit having been exported to the West from Egypt—from the presence of the 18·23 cubit in Egypt, and from the ·729 digit being the decimal base of the Greek long measures—it is not surprising to find it in use in Italy as a digit, and multiplied by 16 as a foot. The more so, as the half of this foot, or 8 digits, is marked off as a measure on the Egyptian cubit rods (33). Though Queipo has opposed this connexion (not noticing the Greek link of the digit), he agrees that it is supported by the Egyptian square measure of the plethron, being equal to the Roman actus (33). The foot of 11·6 appears probably first in the prehistoric and early Greek remains, and is certainly found in Etrurian tomb dimensions as 11·59 (25). Dörpfeld considers this as the Attic foot, and states the foot of the Greek metrological relief at Oxford as 11·65 (or 11·61, Hultsch). Hence we see that it probably passed from the East through Greece to Etruria, and thence became the standard foot of Rome; there, though divided by the Italian duodecimal system into 12 unciæ, it always maintained its original 16 digits, which are found marked on some of the foot-measures. The well-known ratio of 25 : 24 between the 12·16 foot and this we see to have arisen through one being ${\displaystyle \scriptstyle {\frac {1}{6}}}$ of 100 and the other 16 digits,—16${\displaystyle \scriptstyle {\frac {2}{3}}}$ : 16 being as 25 : 24, the legal ratio. The mean of a dozen foot-measures (1) gives 11·616±·008, and of long lengths and buildings 11·607±·01. In Britain and Africa, however, the Romans used a rather longer form (25) of about 11·68, or a digit of ·730. Their series of measures was digitus, 4=palmus, 4=pes, 5=passus, 125=stadium, 8=milliare; ·726 inch 2·90 11·62 58·1 7262 58,100 also uncia ·968=${\displaystyle \scriptstyle {\frac {1}{12}}}$ pes, palmipes 14·52=5 palmi, cubitus 17·43=6 palmi. Either from its Pelasgic or Etrurian use or from Romans, this foot appears to have come into prehistoric remains, as the circle of Stonehenge (26) is 100 feet of 11·68 across, and the same is found in one or two other cases. 11·60 also appears as the foot of some mediæval English buildings (25). We now pass to units between which we cannot state any connexion. 25·1.—The earliest sign of this cubit is in a chamber at Abydos (44) about 1400 B.C.; there, below the sculptures, the plain wall is marked out by red designing lines in spaces of 25·13±·03 inches, which have no relation to the size of the chamber or to the sculpture. They must therefore have been marked by a workman using a cubit of 25·13. Apart from mediæval and other very uncertain data, such as the Sabbath day's journey being 2000 middling paces for 2000 cubits, it appears that Josephus, using the Greek or Roman cubit, gives half as many more to each dimension of the temple than does the Talmud; this shows the cubit used in the Talmud for temple measures to be certainly not under 25 inches. Evidence of the early period is given, moreover, by the statement in 1 Kings (vii. 26) that the brazen sea held 2000 baths; the bath being about 2300 cubic inches, this would show a cubit of 25 inches. The corrupt text in Chronicles of 3000 baths would need a still longer cubit; and, if a lesser cubit of 21·6 or 18 inches be taken, the result for the size of the bath would be impossibly small. For other Jewish cubits see 18·2 and 21·6. Oppert (24) concludes from inscriptions that there was in Assyria a royal cubit of ${\displaystyle \scriptstyle {\frac {7}{6}}}$ the U cubit, or 25·20; and four monuments show (25) a cubit averaging 25·28. For Persia Queipo (33) relies on, and develops, an Arab statement that the Arab hashama cubit was the royal Persian, thus fixing it at about 25 inches; and the Persian guerze at present is 25, the royal guerze being 1${\displaystyle \scriptstyle {\frac {1}{2}}}$ times this, or 37${\displaystyle \scriptstyle {\frac {1}{2}}}$ inches. As a unit of 1·013, decimally multiplied, is most commonly to be deduced from the ancient Persian buildings, we may take 25·34 as the nearest approach to the ancient Persian unit. 21·6.—The circuit of the city wall of Khorsabad (24) is minutely stated on a tablet as 24,740 feet (U), and from the actual size the U is therefore 10·806 inches. Hence the recorded series of measures on the Senkereh tablet are valued (Oppert) as susi, ${\displaystyle \scriptstyle {\left\{{\begin{matrix}\ \\\ \end{matrix}}\right.}}$ 20=(palm), 3=U, 6=qanu, 2=sa, 5=(n), 12=us, 30=kasbu. 60.................... =U, 60......................... =(n). ·18 inch 3·6 .mw-parser-output .wst-running-header{display:flex;width:100%;text-align:center;justify-content:space-between}.mw-parser-output .wst-running-header-4 .wst-running-header-cell:nth-child(2){text-align:left}.mw-parser-output .wst-running-header-4 .wst-running-header-cell:nth-child(3){text-align:right}.mw-parser-output .wst-running-header-cell:first-child{text-align:left}.mw-parser-output .wst-running-header-cell:last-child{text-align:right}.mw-parser-output .wst-running-header-cell>p{margin:0}   10·80 64·8 129·6 648 7,776 223,280 Other units are the suklum or ${\displaystyle \scriptstyle {\frac {1}{2}}}$U=5·4, and cubit of 2U=21·6, which are not named in this tablet. In Persia (24) the series on the same base was— vitasti, 2=arasni 360=asparasa, 30=parathañha, 2=gãv; 10·7 inches 21·4 7704 231,120 462,240 probably yava, 6=angusta, 10=vitasti; and gama =${\displaystyle \scriptstyle {\frac {3}{5}}}$ arasnl; also bazu = 2 arasni. ·18 inch 1·07 10·7 12·8 21·4 42·8 21·4 The values here given are from some Persian buildings (25), which indicate 21·4, or slightly less; Oppert's value, on less certain data, is 21·52. The Egyptian cubits have an arm at 15 digits or about 10·9 marked on them, which seems like this same unit (33). This cubit was also much used by the Jews (33), and is so often referred to that it has eclipsed the 25·1 cubit in most writers. The Gemara names 3 Jewish cubits (2) of 5, 6, and 7 palms; and, as Oppert (24) shows that 25·2 was reckoned 7 palms, 21·6 being 6 palms, we may reasonably apply this scale to the Gemara list, and read it as 18, 21·6, and 25·2 inches. There is also a great amount of mediæval and other data showing this cubit of 21·6 to have been familiar to the Jews after their captivity; but there is no evidence for its earlier date, as there is for the 25-inch cubit (from the brazen sea) and for the 18-inch cubit from the Siloam inscription. From Assyria also it passed into Asia Minor, being found on the city standard of Ushak in Phrygia (33), engraved as 21·8, divided into the Assyrian foot of 10·8, and half and quarter, 5·4 and 2·7. Apparently the same unit is found (18) at Heraclea in Lucania, 21·86; and, as the general foot of the South Italians, or Oscan foot (18), best defined by the 100 feet square being ${\displaystyle \scriptstyle {\frac {3}{10}}}$ of the jugerum, and therefore = 10·80 or half of 21·60. A cubit of 21·5 seems certainly to be indicated in prehistoric remains in Britain, and also in early Christian buildings in Ireland (25). 22·2.—Another unit not far different, but yet distinct, is found apparently in Punic remains at Carthage (25), about 11·16 (22·32), and probably also in Sardinia as 11·07 (22·14), where it would naturally be of Punic origin. In the Hauran 22·16 is shown by a basalt door (British Museum), and perhaps elsewhere in Syria (25). It is of some value to trace this measure, since it is indicated by some prehistoric English remains as 22·4. 20·0.—This unit may be that of the pre-Semitic Mesopotamians, as it is found at the early temple of Muḳayyir (Ur); and, with a few other cases (25), it averages 19·97. It is described by Oppert (24), from literary sources, as the great U of 222 susi or 39·96, double of 19·98; from which was formed a reed of 4 great U or 159·8. The same measure decimally divided is also indicated by buildings in Asia Minor and Syria (25). 19·2.—In Persia some buildings at Persepolis and other places (25) are constructed on a foot of 9·6, or cubit of 19·2; while the modern Persian arish is 38·27 or 2×19·13. The same is found very clearly in Asia Minor (25), averaging 19·3; and it is known in literature as the Pythic foot (18, 33) of 9·75, or ${\displaystyle \scriptstyle {\frac {1}{2}}}$ of 19·5, if Censorinus is rightly understood. It may be shown by a mark (33) on the 26th digit of Sharpe's Egyptian cubit = 19·2 inches. 13·3.—This measure does not seem to belong to very early times, and it may probably have originated in Asia Minor. It is found there as 13·35 in buildings. Hultsch gives it rather less, at 13·1, as the "small Asiatic foot." Thence it passed to Greece, where it is found (25) as 13·36. In Romano-African remains it is often found, rather higher, or 13·45 average (25). It lasted in Asia apparently till the building of the palace at Mashita (620 A.D.), where it is 13·22, according to the rough measures we have (25). And it may well be the origin of the dirá‘ Stambuli of 26·6, twice 13·3. Found in Asia Minor and northern Greece, it does not appear unreasonable to connect it, as Hultsch does, with the Belgic foot of the Tungri, which was legalized (or perhaps introduced) by Drusus when governor, as ${\displaystyle \scriptstyle {\frac {1}{8}}}$ longer than the Roman foot, or 13·07; this statement was evidently an approximation by an increase of 2 digits, so that the small difference from 13·3 is not worth notice. Further the pertica was 12 feet of 18 digits, i.e., Drusian feet. Turning now to England, we find (25) the commonest building foot up to the 15th century averaged 13·22. Here we see the Belgic foot passed over to England, and we can fill the gap to a considerable extent from the itinerary measures. It has been shown (31) that the old English mile, at least as far back as the 13th century, was of 10 and not 8 furlongs. It was therefore equal to 79,200 inches, and divided decimally into 10 furlongs, 100 chains, or 1000 fathoms. For the existence of this fathom (half the Belgic pertica) we have the proof of its half, or yard, needing to be suppressed by statute (9) in 1439, as "the yard and full hand," or about 40 inches, evidently the yard of the most visual old English foot of 13·22, which would be 39·66. We can restore then the old English system of long measure from the buildings, the statute-prohibition, the surviving chain and furlong, and the old English mile shown by maps and itineraries, thus:— foot, 3=yard, 2=fathom, 10=chain, 10=furlong, 10=mile. 13·22 39·66 79·32 793 7932 79,320 Such a regular and extensive system could not have been put into use throughout the whole country suddenly in 1250, especially as it must have had to resist the legal foot now in use, which was enforced (9) as early as 950. We cannot suppose that such a system would be invented and become general in face of the laws enforcing the 12-inch foot. Therefore it must be dated some time before the 10th century, and this brings it as near as we can now hope to the Belgic foot, which lasted certainly to the 3d or 4th century, and is exactly in the line of migration of the Belgic tribes into Britain. It is remarkable how near this early decimal system of Germany and Britain is the double of the modern decimal metric system. Had it not been unhappily driven out by the 12-inch foot, and repressed by statutes both against its yard and mile, we should need but a small change to place our measures in accord with the metre. The Gallic leuga, or league, is a different unit, being 1.59 British miles by the very concordant itinerary of the Bordeaux pilgrim. This appears to be the great Celtic measure, as opposed to the old English, or Germanic, mile. In the north-west of England and in Wales this mile lasted as 1.56 British miles till 1500; and the perch of those parts was correspondingly longer till this century (31). The "old London mile" was 5000 feet, and probably this was the mile which was modified to 5280 feet, or 8 furlongs, and so became the British statute mile. Standards of Area.—We cannot here describe these in detail. Usually they were formed in each country on the squares of the long measures. The Greek system was— foot, 36= hexapodes 100= ................ acæna 25=aroura, 4=plethron. 1.027 sq. ft. 36.96 102.68 2567 10,268 The Roman system was— pes, 100=decempeda, 36=clima, 4=actus, 2=jugerum, 94 sq. ft. 94 3384 13,536 27,672 jugerum, 2=heredium, 100=centuria, 4=saltus. .6205 acre 1.241 124.1 496.4 Standards of Volume.—There is great uncertainty as to the exact values of all ancient standards of volume, the only precise data being those resulting from the theories of volumes derived from the cubes of feet and cubits. Such theories, as we have noticed, are extremely likely to be only approximations in ancient times, even if recognized then; and our data are quite inadequate for clearing the subject. If certain equivalences between volumes in different countries are stated here, it must be plainly understood that they are only known to be approximate results, and not to give a certain basis for any theories of derivation. All the actual monumental data that we have are alluded to here, with their amounts. The impossibility of safe correlation of units necessitates a division by countries. Egypt.—The hon was the usual small standard; by 8 vases which have contents stated in hons (8, 12, 20, 22, 33, 40) the mean is 29.2 cubic inches ± .6; by 9 unmarked pottery measures (30) 29.1±.16, and divided by 20; by 18 vases, supposed multiples of hon (1), 32.1±2. These last are probably only rough, and we may take 29.2 cubic inches ± .5. This was reckoned (6) to hold 5 utens of water (uten 1470 grains), which agrees well to the weight; but this was probably an approximation, and not derivative, as there is (14) a weight called shet of 4.70 or 4.95 uten, and this was perhaps the actual weight of a hon. The variations of hon and uten, however, cover one another completely. From ratios stated before Greek times (35) the series of multiples was ro, 8=hon, 4=honnu, 10=apet ${\displaystyle \scriptstyle {\left\{{\begin{matrix}\ \\\ \end{matrix}}\right.}}$ ............ 10=(Theban), 10=sa. or besha 4=tama 3.65 cub. in. 29.2 116.8 1168 4672 11,680 116,800 (Theban) is the "great Theban measure." In Ptolemaic times the artaba (2336.), modified from the Persian, was general in Egypt, a working equivalent to the Attic metretes,—value 2 apet or 12 tama; medinmus = tama or 2 artabas, and fractions down to 1400 artaba (35). In Roman times the artaba remained (Didymus), but 16 was the usual unit (name unknown), and this was divided down to 124 or 1144 artaba (35),—thus producing, by 172 artaba a working equivalent to the xestes and sextarius (35). Also a new Roman artaba (Didymus) of 1540. was brought in. Beside the equivalence of the hon to 5 utens weight of water, the mathematical papyrus (35) gives 5 besha=23 cubic cubit (Revillout's interpretation of this as 1 cubit³ is impossible geometrically, see Rev. Eg., 1881, for data); this is very concordant, but it is very unlikely for 3 to be introduced in an Egyptian derivation, and probably therefore only a working equivalent. The other ratio of Revillout and Hultsch, 320 hons = cubit³ is certainly approximate. Syria, Palestine, and Babylonia.—Here there are no monumental data known; and the literary information does not distinguish the closely connected, perhaps identical, units of these lands. Moreover, none of the writers are before the Roman period, and many relied on are mediæval rabbis. A large number of their statements are rough (2, 18, 33), being based on the working equivalence of the bath or epha with the Attic metretes, from which are sometimes drawn fractional statements which seem more accurate than they are. This, however, shows the bath to be about 2500 cubic inches. There are two better data (2) of Epiphanius and Attic medinmus=112 baths, and saton (13 bath) =138 modii; these give about 2240 and 2260 cubic inches. The best datum is in Josephus (Ant., iii. 15, 3), where 10 baths =41 Attic or 31 Sicilian medimni, for which it is agreed we must read modii (33); hence the bath =2300 cubic inches. Thus these three different reckonings agree closely, but all equally depend on the Greek and Roman standards, which are not well fixed. The Sicilian modius here is 1031, or slightly under 13, of the bath, and so probably a Punic variant of the 13 bath or saton of Phœnicia. One close datum, if trustworthy, would be log of water = Assyrian mina bath about 2200 cubic inches. The rabbinical statement of cubic cubit of 21.5 holding 320 logs puts the bath at about 2250 cubic inches; their log-measure, holding six hen's eggs, shows it to be over rather than under this amount; but their reckoning of bath = 12 cubit cubed is but approximate; by 21.5 it is 1240, by 25.1 it is 1990 cubic inches. The earliest Hebrew system was— (log, 4=kab) .............. 3=hin, 6 ${\displaystyle \scriptstyle {\left.{\begin{matrix}\ \\\ \end{matrix}}\right\}\,}}$ = ${\displaystyle \scriptstyle {\left\{{\begin{matrix}\ \\\ \end{matrix}}\right.}}$ bath, or ${\displaystyle \scriptstyle {\left.{\begin{matrix}\ \\\ \end{matrix}}\right\}\,}}$, 10= ${\displaystyle \scriptstyle {\left\{{\begin{matrix}\ \\\ \end{matrix}}\right.}}$ homer—wet. ‘issarón .......... 10 epha or kor—dry. 32 cub. in. 128 230 283 2300 23,000 ‘Issarón ("tenth-deal") is also called gomer. The log and kab are not found till the later writings; but the ratio of hin to issaron is practically fixed in early times by the proportions in Num. xv. 4–9. Epiphanius stating great hin=18 xestes, and holy hin=9, must refer to Syrian xestes, equal to 24 and 12 Roman; this makes holy hin as above, and great hin a double hin, i.e., seah or saton. His other statements of saton=56 or 50 sextaria remain unexplained, unless this be an error for bath =56 or 50 Syr. sext. and =2290 or 2560 cubic inches. The wholesale theory of Revillout (35) that all Hebrew and Syrian measures were doubled by the Ptolemaic revision, while retaining the same names, rests entirely on the resemblance of the names apet and epha, and of log to the Coptic and late measure lok. But there are other reasons against accepting this, besides the improbability of such a change. The Phœnician and old Carthaginian system was (18)— log, 4=kab, 6=saton, 30=corus, 31 cub. in. 123 740 22,200 valuing them by 31 Sicilian = 41 Attic modii (Josephus, above). The old Syrian system was (18)— cotyle, 2=Syr. xestes, 18=sabitha or saton, 112=collathon, 2=bath-artaba; 21 cub. in. 41 740 1110 2220 also {{center|1= Syr. xestes, 45=maris, 2=metretes or artaba. 41 1850 3700 The later or Seleucidan system was (18) cotyle, 2=Syr. xestes, 90=Syr. metretes, 22 44 4000 the Syrian being 113 Roman sextarii. The Babylonian system was very similar (18)— (14), 4=capitha, ${\displaystyle \scriptstyle {\left\{{\begin{matrix}\ \\\ \end{matrix}}\right.}}$ 15=maris 18=.........epha, 10=homer, 6=achane 33 cub. in. 132 1980 2380 23,800 142,800 The approximate value from capitha=2 Attic chœnices (Xenophon) warrants us in taking the achane as fixed in the following system, which places it closely in accord with the preceding. In Persia Hultsch states— capetis........... 48=artaba, 40 ${\displaystyle \scriptstyle {\left.{\begin{matrix}\ \\\ \end{matrix}}\right\}\,}}$ =achane. maris.................. 72 74.4 cub. in. 1983 3570 142,800 the absolute values being fixed by artaba = 51 Attic chœnices (Herod., i. 192). The maris of the Pontic system is 12 of the above, and the Macedonian and Naxian maris 110 of the Pontic (18). By the theory of maris = 15 of 20.6³ it is ???.; by maris = Assyrian talent, 1850, in place of 1850 or 1980 stated above; hence the more likely theory of weight, rather than cubit, connexion is nearer to the facts. Æginetan System.—This is so called from according with the Æginetan weight. The absolute data are all dependent on the Attic and Roman systems, as there are no monumental data. The series of names is the same as in the Attic system (18). The values are 113 × the Attic (Athenæus, Theophrastus, &c.) (2, 18), or more closely 11 to 12 times 18 of Attic. Hence, the Attic cotyle being 17.5 cubic inches, the Æginetun is about 25.7. The Bœotian system (18) included the achane; if this = Persian, then cotyle = 24.7. Or, separately through the Roman system, the mnasis of Cyprus (18)=170 sextarii; then the cotyle=24.8. By the theory of the metretes being 112 talents Æginetan, the cotyle would be 23.3 to 24.7 cubic inches by the actual weights, which have tended to decrease. Probably then 25.0 is the best approximation. By the theory (18) of 2 metretes = cube of the 18.67 cubit from the 12.45 foot, the cotyle would be about 25.4, within .4; but then such a cubit is unknown among measures, and not likely to be formed, as 12.4 is 35 of 20.6. The Æginetan system then was— cotyle, 4 =chœnix ${\displaystyle \scriptstyle {\left\{{\begin{matrix}\ \\\ \end{matrix}}\right.}}$ 3=chous......................................... 16 ${\displaystyle \scriptstyle {\left.{\begin{matrix}\ \\\ \end{matrix}}\right\}\,}}$ =medimnus. 8=.......... hecteus, 4=metretes, 1 12 25 cub. in. 100 300 800 3200 4800 This was the system of Sparta, of Bœotia (where the aporryma=4 chœnices, the cophinus=6 chœnices, and saites or saton or hecteus=2 aporryma, while 30 medimni=achane, evidently Asiatic connexions throughout), and of Cyprus (where 2 choes=Cyprian medimnus, of which 5=medimnus of Salamis, of which 2=mnasis) (18). Attic or Usual Greek System.—The absolute value of this system is far from certain. The best data are three stone slabs, each with several standard volumes cut in them (11, 18), and two named vases. The value of the cotyle from the Naxian slab is 15.4 (best, others 14.6–19.6); from a vase about 16.6; from the Panidum slab 17.1 (var. 16.2-18.2) ; from a Capuan vase 17.8 ; from the Ganus slab 17.8 (var. 17-18). From these we may take 17.5 as a fair approximation. It is supposed that the Panathenaic vases were intended as metretes; this would show a cotyle of 14.4-17.1. The theories of connexion give, for the value of the cotyle, metretes = Æginetan talent, 15.4-16.6 ; metretes 43 of 12.16 cubed, 16.6; metretes=${\displaystyle \scriptstyle {\frac {27}{20}}}$ of 12·16 cubed, 16·8; medimnus=2 Attic talents, hecteus=20 minæ, chœnix=2${\displaystyle \scriptstyle {\frac {1}{2}}}$ minæ, 16·75; metretes=3 cubic spithami (${\displaystyle \scriptstyle {\frac {1}{2}}}$ cubit=9·12), 17·5; 6 metretes=2 feet of 12·45 cubed, 17·8 cubic inches for cotyle. But probably as good theories could be found for any other amount; and certainly the facts should not be set aside, as almost every author has done, in favour of some one of half a dozen theories. The system of multiples was for liquids— cyathus 1${\displaystyle \scriptstyle {\frac {1}{2}}}$=oxybaphon, 4=cotyle, 12=chous, 12=metretes 2·9 cub. in. 4·4 17·5 210 2520 with the tetarton (8·8), 2=cotyle, 2=xestes (35·), introduced from the Roman system. For dry-measure— cyathus, 6=cotyle, 4=chœnix, 8=hecteus, 6=medimnus, 2·9 cub. in. 17·5 70 500 3360 with the xestes, and amphoreus (1680)=${\displaystyle \scriptstyle {\frac {1}{2}}}$ medimnus, from the Roman system. The various late provincial systems of division are beyond our present scope (18). System of Gythium.—A system differing widely both in units and names from the preceding is found on the standard slab of Gythium in the southern Peloponnesus (Rev. Arch., 1872). Writers have unified it with the Attic, but it is decidedly larger in its unit, giving 19·4 (var. 19·1–19·8) for the supposed cotyle. Its system is— cotyle, 4=hemihecton, 4=chous, 3 = (n). 58 cub. in. 232 932 2796 And with this agrees a pottery cylindrical vessel, with official stamp on it (ΔΗΜΟΣΙΟΝ, &c.), and having a fine black line traced round the inside, near the top, to show its limit; this seems to be probably very accurate, and contains 58 5 cubic inches, closely agreeing with the cotyle of Gythium. It has been described (Rev. Arch., 1872) as an Attic chœnix. Gythium being the southern port of Greece, it seems not too far to connect this 58 cubic inches with the double of the Egyptian hon = 58 - 4, as it is different from every other Greek system. Roman System.—The celebrated Farnesian standard congius of bronze of Vespasian, “mensuræ exactæ in Capitolio P. X.,” contains 206 7 cubic inches (2), and hence the amphora 1654. By the sextarius of Dresden (2) the amphora is 1695; by the congius of Ste Genevieve (2) 1700 cubic inches ; and by the ponderarium measures at Pompeii (33) 1540 to 1840, or about 1620 for a mean. So the Farnesian congius, or about 1650, may best be adopted. The system for liquid was— quartarius, 4=sextarius, 4=congius, 4=urna, 2=amphora; 8·6 cub. ins. 34·4 206 825 1650 for dry measure 16 sextarii = modius, 550 cubic inches; and to both systems were added from the Attic the cyathus (2 87), acetabulum (4 "3), and hemina (17 "2 cubic inches). The Roman theory of the amphora being the cubic foot makes it 1569 cubic inches, or decidedly less than the actual measures; the other theory of its containing 80 libræ of water would make it 1575 by the commercial or 1605 by the monetary libra, again too low for the measures. Both of these theories therefore are rather working equivalents than original derivations; or at least the interrelation was allowed to become far from exact. Indian and Chinese Systems.—On the ancient Indian system see Numismata Orientalia, new ed., i. 24; on the ancient Chinese, Nature, xxx. 565, and xxxv. 318. Standards of Weight.—For these we have far more complete data than for volumes or even lengths, and can ascertain in many cases the nature of the variations, and their type in each place. The main series on which we shall rely here are those (1) from Assyria (38) about 800 B.C. ; (2) from the eastern Delta of Egypt (29) (Defenneh) ; (3) from western Delta (28) (Nancratis) ; (4) from Memphis (44), all these about the 6th century B.C., and therefore before much interference from the decreasing coin standards ; (5) from Cnidus ; (6) from Athens ; (7) from Corfu ; and (8) from Italy (British Museum) (44). As other collections are but a fraction of the whole of these, and are much less completely examined, little if any good would be done by including them in the combined results, though for special types or inscriptions they will be mentioned. 146 grains. The Egyptian unit was the kat, which varied between 138 and 155 grains (28, 29). There were several families or varieties within this range, at least in the Delta, probably five or six in all (29). The original places and dates of these cannot yet be fixed, except for the lowest type of 138-140 grains; this belonged to Heliopolis (7), as two weights (35) inscribed of "the treasury of An" show 139 9 and 140 - 4, while a plain one from there gives 138 8; the variety 147-149 may belong to Hermopolis (35), accord ing to an inscribed weight. The names of the kat and tcina are fixed by being found on weights, the uteu by inscriptions ; the series was 00, 10 = kat, 10 = uten, 10 = tcma. 14 -G gi-s. 140 1460 14,000 The tema is the same name as the large wheat measure (35), which was worth 30,000 to 19,000 grains of copper, according to Ptolemaic receipts and accounts (Rev. Eg., 1881, 150), and therefore very likely worth 10 utens of copper in earlier times when metals were scarcer. The kat was regularly divided into 10; but another divi sion, for the sake of interrelation with another system, was in ^ and , scarcely found except in the eastern Delta, where it is common (29) ; and it is known from a papyrus (38) to bo a Syrian weight. The uten is found-f-6 = 245, in Upper Egypt (rare) (44). Another division (in a papyrus) (38) is a silver weight of j-%- kat = about 88, perhaps the Babylonian siglus of 86. The uten was also binarily divided into 128 peks of gold in Ethiopia; this may refer to another standard (see 129) (33). The Ptolemaic copper coinage is on two bases, the uten, binarily divided, and the Ptolemaic five shekels (1050), also binarily divided. (This result is from a larger number than other students have used, and study by diagrams.) The theory (3) of the derivation of the uten from TsVff cubic cubit of water would fix it at 1472, which is accordant; but there seems no authority either in volumes or weights for taking 1500 utens. .Another theory (3) derives the uten from T^-J- of the cubic cubit of 24 digits, or better f of 20 63; that, however, will only fit the very lowest variety of the uten, while there is no evidence of the existence of such a cubit. The kat is not unusual in Syria (44), and among the haematite weights of Troy (44) are nine examples, average 144, but not of extreme varieties. 12Q cr- 258 f ^ no S rea t standard of Babylonia became the y^Vk . -i r r nk. parent of several other systems; and itself AC F i n and its derivatives became more widely 46o,000. n ,1 ,T i i TX spread than any other standard. It was known in two forms, one system (24) of urn, C0=sikhir, 6 = shekel, 10 = stone, G=maneh, 60=talent,; 30 grs. 21-5 129 1290 7750 465,000 and the other system double of this in each stage except the talent. These two systems are distinctly named on the weights, and are known now as the light and,heavy Assyrian systems (19, 24). (It is better to avoid the name Babylonian, as it has other meanings also.) There are no weights dated before the Assyrian bronze lion weights (9, 17, 19, 38) of the llth to 8th centuries B.C. Thirteen of this class average 127 2 for the shekel ; 9 haematite barrel-shaped weights (38) give 128 2 ; 16 stone duck-weights (38), 126 5. A heavier value is shown by the precious metals, the gold plates from Khorsabad (18) giving 129, and the gold daric coinage (21, 35) of Persia 129 2. Nine weights from Syria (44) average 128 8. This is the system of the " Babylonian " talent, by Herodotus = 70 minse Euboic, by Pollux =70 mina; Attic, by .(Elian = 72 miiife Attic, and therefore about 470,000 grains. In Egypt this is found largely at Naucratis (28, 29), and less commonly at Defenneh (29). In both places the distribution, a high type of 129 and a lower of 127, is like the monetary and trade varieties above noticed ; while a smaller number of examples are found, fewer and fewer, down to 118 grains. At Memphis (44) the shekel is scarcely known, and a J mina weight was there converted into another standard (of 200). A few barrel weights are found at Karnak, and several egg-shaped shekel weights at Gebelen (44) ; also two cuboid weights from there (44) of 1 and 10 utens are marked as 6 and 60, which can hardly refer to any unit but the heavy shekel, giving 245. Hultsch refers to Egyptian gold rings of Dynasty XVIII. of 125 grains. That this unit penetrated far to the south in early times is shown by the tribute of Kush (34) in Dynasty XVIII.; this is of 801, 1443, and 23,741 kats, or 15 and 27 manehs and 7J talents when reduced to this system. And the later Ethiopia gold unit of the pek (7), or y-^- of the uten, was 10 - 8 or more, and may therefore be the sikhir or obolos of 21 "5. But the fraction T ^, or a continued binary division repeated seven times, is such a likely mode of rude subdivision that little stress can be laid on this. In later times in Egypt a class of large glass scarabs for funerary purposes seem to be adjusted to the shekel (30). Whether this system or the Phoenician on 224 grains was that of the Hebrews is uncertain. There is no doubt but that in the Maccabcan times and onward 218 was the shekel ; but the use of the word darkemon by Ezra and Nehemiah, and the probabilities of their case, point to the darag-maneh, ^ manch, or shekel of Assyria ; and the mention of 3 shekel by Nehemiah as poll tax nearly proves that the 129 and not 218 grains is intended, as 218 was never--- 3. But the Maccabean use of 218 may have been a reversion to the older shekel ; and this is strongly shown by the fraction shekel (1 Sam. ix. 8), the continual mention of large decimal numbers of shekels in the earlier books, and the certain fact of 100 shekels being = mina. This would all be against the 129 or 258 shekel, and for the 218 or 224. There is, however, one good datum if it can be trusted : 300 talents of silver (2 Kings xviii. 14) are 800 talents on Sennacherib s cylinder (34), while the 30 talents of gold is the same in both accounts. Eight hundred talents on the Assyrian silver standard would be 267 or roundly 300 talents on the heavy trade or gold system, which is therefore probably the Hebrew. Probably the 129 and 224 systems coexisted in the country ; but on the whole it seems more likely that 129 or rather 258 grains was the Hebrew shekel before the Ptolemaic times, especially as the 100 shekels to the mina is parallelled by the following Persian system (Hultsch)— 120 gls. Cl.X) 7750 the Hebrew system being cerah 20 = shekel, 100 = maneh, 30 = talent, l-."9 Krs. 25S? 25,800 774,000 and, considering that the two Hebrew cubits are the Babylonian and Persian units, and the volumes are also Babylonian, it is the more likely that the weights should have come with these. From the cast this unit passed to Asia Minor ; and six multiples of 2 to 20 shekels (av. 127) are found among the haematite weights of Troy (44), including the oldest of them. On the ^Egean coast it often occurs in early coinage (17), at Lampsacus 131-129, Phocrea 256-4, Cyzicus 252-247, Methymna 124 "6, &c. In later times it was a main unit of North Syria, and also on the Euxine, leaden weights of Antioch (3), Callatia, and Tomis being known (38). The mean of these eastern weights is 7700 for the mina, or 128. But the leaden weights of the west (44) from Corfu, &c., average 7580, or 126 3 ; this standard was kept up at Cyzicus in trade long after it was lost in coinage. At Corinth the unit was evidently the Assyrian and not the Attic, being 129 6 at the earliest (17) (though modified to double Attic, or 133, later) and being-f 3, and not into 2 drachms. And this agrees with the mina being repeatedly found at Corcyra, and with the same standard passing to the Italian coinage (17) similar in weight, and in division into ^, the heaviest coinages (17) down to 400 B.C. (Terina, Velia, Sybaris, Posidonia, Metapontum, Tarentum, &c. ) being none over 126, while later on many were adjusted to the Attic, and rose to 134. Six disk weights from Carthage (44) show 126. It is usually the case that a unit lasts later in trade than in coinage; and the prominence of this standard in Italy may show how it is that this mina (18 uncife = 7400) was known as the "Italic" in the days of Galen and Dioscorides (2). -i no A variation on the main system was made by forming a r^nrf 8 m i na f 50 shekels. This is one of the Persian series (gold), and the of the Hebrew series noted above. But it is most striking when it is found in the mina form which distinguishes it. Eleven weights from Syria and Cnidus (44) (of the curious type with two breasts on a rectangular block) show a mina of 6250 (125 "0); audit is singular that this class is exactly like weights of the 224 system found with it, but yet quite distinct in standard. The same passed into Italy and Corfu (44), averaging 6000, divided in Italy into uncife (^), and scripulfe (^-j), and called litra (in Corfu ?). It is known in the coinage of Hatria (18) as 6320. And a strange division of the shekel in 10 (probably therefore connected with this decimal mina) is shown by a series of bronze weights (44) with four curved sides and marked with circles (British Museum, place unknown), which may be Romano-Gallic, averaging 125-4-10. This whole class seems to cling to sites of Phoenician trade, and to keep clear of Greece and the north, perhaps a Phoenician form of the 129 system, avoiding the sexagesimal multiples. If this unit have any connexion with the kat, it is that a kat of gold is worth 15 shekels or mina of silver ; this agrees well with the range of both units, only it must be remembered that 129 was used as gold unit, and another silver unit deduced from it. More likely then the 147 and 129 units originated independently in Egypt and Babylonia. o fi From 129 grains of gold was adopted an equal value ^ s ^ vcr= 1720, on the proportion of 1 : 13j, and this was Divided n 10 = 172, which was used either in Such a proportion is indicated in Num. vii., where the gold spoon of 10 shekels is equal in value to the bowl of 130 shekels, or double that of 70, i.e., the silver vessels were 200 and 100 sigli. The silver plates at Khorsabad (18) we find to be 80 sigli of 84 6. The Persian silver coinage shows about 86 ; the danak was g of this, or 287. Xcnophon and others state it at about 84. As a monetary weight it seems to have spread, perhaps entirely, in consequence of the Persian dominion; it varies from 174 downwards, usually 167, in Aradus, Cilicia, and on to the ^Egean coast, in Lydia and in Macedonia (17). The silver bars found at Troy averaging 2744, 01 , j mina of 8232, have been attributed to this unit (17) ; but no division of the mina in ^ is to be expected, and the average i rather low. Two hfematite weights from Troy (44) show 86 and 87 2. The mean from leaden weights of Chios, Tenedos (44), &c., is 8430. A duck-weight of Camirus, probably early, gives 8480 the same passed on to Greece and Italy (17), averaging 8610 ; bui in Italy it was divided, like all other units, into uncife and scripuLc (44). It is perhaps found in Etrurian coinage as 175-172 (17). By the Romans it was used on the Danube (18), two weights of the first legion there showing 8610 ; and this is the mina of 20 uncia? (8400) named by Roman writers. The system was obol, 6 = siglus, 100 = mina, C0 = talent. 14-3 grs. 80 8COO 516,000 A derivation from this was the ,^ of 172, or 57 3, the so-callei Phocfean drachma, equal in silver value to the fa of the gold 258 grains. It was used at Phocaea as 58 5, and passed to the colonie of Posidonia and Velia as 59 or 118. The colony of Massilia Drouht it into Gaul as 58 2-54 9.|1}} That tnis unit (commonly called Phoenician) is derived S24- doubted, both being 11 I ono^ from tllc 129 s y steni can nart lly Vo nnii so intimately associated in Syria and Asia Minor. /-,uuu. The re i at i on j 3 258 : 229 : : 9 : 8 ; but the exact form in which the descent took place is not settled : ^V of 129 of gold is worth 57 of silver or a drachm, of 230 (or by trade weights 127 and 226) ; otherwise, deriving it from the silver weight of 86 already formed, the drachm is of the stater, 172, or double of the Persian danak of 287, and the sacred unit of Didyma in Ionia was this half drachm, 27 ; or thirdly, what is indicated by the Lydian coinage (17), 86 of gold was equal to 1150 of silver, 5 shekels, or y 1 ^ mina. Other proposed derivations from the kat or pek are not satisfactory. In actual use this unit varied greatly: at Naucratis (29) there are groups of it at 231, 223, and others down to 208 ; this is the earliest form in which we can study it, and the corre sponding values to these are 130 and 126, or the gold and trade varieties of the Babylonian, while the lower tail down to 208 cor responds to the shekel down to 118, which is just what is found. Hence the 224 unit seems to have been formed from the 129, after the main families or types of that had arisen. It is scarcer at Defenneh (29) and rare at Memphis (44). Under the Ptolemies, however, it became the great unit of Egypt, and is very prominent in the later literature in consequence (18, 35). The average of coins (21) of Ptolemy; I. gives 219 6, and thence they gradually diminish to 210, the average (33) of the whole series of Ptolemies being 218. The "argenteus" (as Revillout transcribes a sign in the papyri) (35) was of 5 shekels, or 1090 ; it arose about 440 B.C., and became after 160 B.C. a weight unit for copper. In Syria, as early as the 15th century B.C., the tribute of the Rutennu, of Naliaraina, Megiddo, Anaukasa, &c. (34), is on a basis of 454-484 kats, or 300 shekels ( T V talent) of 226 grains. The commonest weight at Troy (44) is the shekel, averaging 224. In coinage it is one of the commonest units in early times ; from Phoenicia, round the coast to Macedonia, it is predominant (17) ; at a maximum of 230 (lalysus), it is in Macedonia 224, but seldom exceeds 220 else where, the earliest Lydian of the 7th century being 219, and the general average of coins 218. The system was (1) 8= drachm, 4= shekel, 25=mina, 120=talent. 7 gvs. 56 224 5COO 672,000 From the Phoenician coinage it was adopted for the Maccabean. It is needless to give the continual evidences of this being the later Jewish shekel, both from coins (max. 223) and writers (2, 18, 33) ; the question of the early shekel we have noticed already under 129. In Phoenicia and Asia Minor the mina was specially made in the form with two breasts (44), 19 such weights averaging 5600 (= 224) ; and thence it passed into Greece, more in a double value of 11,200 (= 224). From Phoenicia this naturally became the main Punic unit ; a bronze weight from lol (18), marked 100, gives a drachma of 56 or 57 (224-228) ; and a Punic inscription (18) names 28 drachma; = 25 Attic, and . . 57 to 59 grains (228-236); while a prob ably later series of 8 marble disks from Carthage (44) show 208, but vary from 197 to 234. In Spain it was 236 to 216 in different series (17), and it is a question whether the Massiliote drachmae of 58-55 are not Phoenician rather than Phocaic. In Italy this mina became naturalized, and formed the "Italic mina" of Hero, Priscian, &c. ; also its double, the mina of 26 uncife or 10,800, =50 shekels of 216; the average of 42 weights gives 5390 (= 215 6), and it was divided both into 100 drachma;, and also in the Italic mode of 12 uncife and 288 scripulse (44). The talent was of 120 minre of 5400, or 3000 shekels, shown by the talent from Hercu- laneum, TA, 660,000 and by the weight inscribed PONDO cxxv. (i.e., 125 librne) TALKNTUM SICLOUVM. iii., i.e., talent of 3000 shekels (2) (the M being omitted ; just as Epiphanius describes this talent as 125 libra;, or 6 ( = 9) nomismata, for 9000). This gives the same approximate ratio 96 : 100 to the libra as the usual drachma reckon ing. The Alexandrian talent of Festus, 12,000 denarii, is the same talent again. It is. believed that this mina-f-12 uncife by the Romans is the origin of the Arabic ratl of 12 iikiyas, or 5500 grains (33), which is said to have been sent by Harun al-Rashid to Charlemagne, and so to have originated the French monetary pound of 5666 grains. But, as this is probably the same as the English monetary pound, or tower pound of 5400, which was in use earlier (see Saxon coins), it seems more likely that this pqund (which is common in Roman weights) was directly inherited from the Roman civilization. f.f) Another unit, which has scarcely been recognized in aTon"- 8 metrology hitherto, is prominent in the weights from 400 000 ^" v r )t &gt; some ^ we S nts f rom Naucratis and 15 from Defenneh plainly agreeing on this and on no other basis. Its value varies between 76 5 and 81 5, mean 79 at Naucratis (29) or 81 at Defenneh (29). It has been connected theoretically with a binary division of the 10 shekels or "stone " of the Assyrian systems (28), 1290 -f 16 being 80 6 ; this is suggested by the most usual mul tiples being 40 and 80 = 25 and 50 shekels of 129; it is thus akin to the mina of 50 shekels previously noticed. The tribute of the Asi, Rutennu, Khita, Assam, &e., to Thothmes III. (34), though in uneven numbers of kats, comes out in round thousands of units when reduced to this standard. That this unit is quite distinct from the Persian 86 grains is clear in the Egyptian weights, which maintain a wide gap between the two systems. Next, in Syria three inscribed weights of Antioch and Berytus (18) show a mina of about 16,400, or 200 x 82. Then at Abydus, or more probably from Babylonia, there is the large bronze lion-weight, stated to have been origin ally 400,500 grains; this has been continually 60 by different writers, regardless of the fact (Rev. Arch., 1862, 30) that it bears the numeral 100 ; this therefore is certainly a talent of 100 minse of 4005 ; and as the mina is generally 50 shekels in Greek systems it points to a weight of 80 - 1. Farther west the same unit occurs in several Greek weights (44) which show a mina of 7800 to 8310, mean 8050-rlOO = 80 5. Turning to coinage, we find this often, but usually overlooked as a degraded form of the Persian 86 grains siglos. But the earliest coinage in Cilicia, before the general Persian coinage (17) about380 B.C., is Tarsus, 164 grains ; Soli, 169, 163,158; Nagidus, 158, 161-153 later; Issus,166; Mallus, 163-154, all of which can only by straining be classed as Persian ; but they agree to this standard, which, as we have seen, was used in Syria, in earlier times by the Khita, &c. The Milesian or "native" system of Asia Minor (18) is fixed by Hultsch at 163 and 81 "6 grains, the coins of Miletus (17) showing 160, 80, and 39. Coming down to literary evidence, this is abundant. Bb ckh decides that the "Alexandrian drachma" was of the Solonic 67, or = 80 5, and shows that it was not Ptolemaic, or Rhodian, or JEginetan, being distinguished from these in inscriptions (2). Then the "Alexandrian mina" of Dioscorides and Galen (2) is 20 uncia; = 8250 ; in the "Analecta" (2) it is 150 or 158 drachms -81 00. Then Attic : Euboic or jEginetan : : 18 : 25 in the metrologists (2), and the Euboic talent = 7000 "Alexandrian" drachma;; the drachma therefore is 80 0. The "Alexandrian" wood talent : Attic talent:: 6 : 5 (Hero, Didymus), and . . 480,000, which is 60 minre of 8000. Pliny states the Egyptian talent at 80 libra = 396, 000 ; evidently = the Abydus lion talent, which is-=-100, and the mina is . . 3960, or 50x79 "2. The largest weight is the "wood" talent of Syria (i8) = 6 Roman talents, or 1,860,000, evidently 120 Antioch minse of 15,500 or 2 x 7750. This evidence is too distinct to be set aside; and, exactly confirming as it does the Egyptian weights and coin weights, and agreeing with the early Asiatic tribute, it cannot be overlooked in future. The system was drachm, 2 = stater, 50 = mina, 1GO = talcnt. 80 grs. 1GO 8000 400,000 480,000 on 1 ? t ion This system, the AEginetan, one of the most important to the Greek world, has been thought to Jjn ftnft be a degradation of the Phoenician (17, 21), supposing 220 grains to have been reduced in primitive Greek usage to 194. But we are now able to prove that it was an independent system (1) by its not ranging usually over 200 grains in Egypt before it passed to Greece; (2) by its earliest example, perhaps before the 224 unit existed, not being over 208; and (3) by there being no intermediate linking on of this to the Phoenician unit in the large number of Egyptian weights, nor in the Ptolemaic coinage, in which both standards are used. The first example (30) is one with the name of Amenhotep I. (17th century B.C.) marked as " gold 5," which is 5 x 207 6. Two other marked weights are from Memphis (44), showing 201 8 and 196 - 4, and another Egyptian 191 4. The range of the (34) Naucratis weights is 186 to 199, divided in two groups averaging 190 and 196, equal to the Greek monetary and trade varieties. Ptolemy I. and II. also struck a series of coins (32) averaging 199. In Syria hrematite weights are found (30) averaging 198 5, divided into 99 2, 49 6, and 24 8 ; and the same division is shown by gold rings from Egypt (38) of 24 9. In the medical papyrus (38) a weight of kat is used, which is thought to be Syrian; now f kat = 92 to 101 grains, or just this weight which we have found in Syria ; and the weights of f and |- kat are very rare in Egypt except at Defenneh (29), on the Syrian road, where they abound. So we have thus a weight of 207-191 in Egypt on marked weights, joining therefore completely with the .ffiginetan unit in Egypt of 199 to 186, and coinage of 199, and strongly connected with Syria, where a double mina of Sidon (18) is 10,460 or 50 x 209 2. Probably before any Greek coinage we find this among the haematite weights of Troy (44), ranging from 208 to 193 2 (or 104- 96 6), i.e., just covering the range from the earliest Egyptian down to the early AEginetan coinage. Turning now to the early coinage, we see the fuller weight kept up (17) at Samos (202), Miletus (201), Calymna (100, 50), Methymna and Scepsis (99, 49),[12] Ionia (197); while the coinage of AEgina (17, 12), which by its wide diffusion made this unit best known, though a few of its earliest staters go up even to 207, yet is characteristically on the lower of the two groups which we recognize in Egypt, and thus started what has been considered the standard value of 194, or usually 190, decreasing afterwards to 184. In later times, in Asia, however, the fuller weight, or higher Egyptian group, which we have just noticed in the coinage, was kept up (17) into the scries of cistophori (196-191), as in the Ptolemaic series of 199. At Athens the old mina was fixed by Solon at 150 of his drachmae (18) or 9800 grains, according to the earliest drachma?, showing a stater of 196 ; and this continued to be the trade mina in Athens, at least until 160 B.C., but in a reduced form, in which it equalled only 138 Attic drachmae, or 9200. The Greek mina weights show (44), on an average of 37, 9650 ( = stater of 193), varying from 186 to 199. In the Hellenic coinage it varies (18) from a maximum of 200 at Pharae to 192, usual full weight; this unit occupied (17) all central Greece, Peloponnesus, and most of the islands. The system was— obol, C = drachm, 2 = stater, 50 = mina, C0=talent. 16 grs. 96 192 9600 576,000 It also passed into Italy, but in a smaller multiple of 25 drachmae, or | of the Greek mina; 12 Italian weights (44) bearing value marks (which cannot therefore be differently attributed) show a libra of 2400 or 4 of 9600, which was divided in uncire and sextulae, and the full-sized mina is known as the 24 uncia mina, or talent of 120 libra; of Vitruvius and Isidore (i8) = 9900. Hultsch states this to be the old Etruscan pound. 412 4950 grs. With the trade mina of 9650 in Greece, and recognized AQKO in Italy, we can hardly doubt that the Roman libra is OU grs. the ]m]f of thig mina At Athens it was 2 x 4900; an( | on the average of all the Greek weights it is 2 x 4825, so that 4950 the libra is as close as we need expect. The division by 12 does not affect the question, as every standard that came into Italy was similarly divided. In the libra, as in most other standards, the value which happened to be first at hand for the coinage was not the mean of the whole of the weights in the country ; the Phoenician coin weight is below the trade average, the Assyrian is above, the .ZEginetan is below, but the Roman coinage is above the average of trade weights, or the mean standard. Rejecting all weights of the lower empire, the average (44) of about 100 is 4956; while 42 later- Greek weights (nomisma, &c. ) average 4857, and 16 later Latin ones (solidus, &c. ) show 4819. The coinage standard, however, was always higher ( 1 8) ; the oldest gold shows 5056, the Campanian Roman 5054, the consular gold 5037, the aurei 5037, the Constantine solidi 5053, and the Justinian gold 4996. Thus, though it fell in the later empire, like the trade weight, yet it was always above that. Though it has no exact relation to the congius or amphora, yet it is closely = 4977 grains, the ^ of the cubic foot of water. If, however, the weight in a degraded form, and the foot in an unde- graded form, come from the East, it is needless to look for an exact relation between them, but rather for a mere working equivalent, like the 1000 ounces to the cubic foot in England. Bb ckh has re marked the great diversity between weights of the same age, those marked "Ad August! Temp" ranging 4971 to 5535, those tested by the fussy prefect Q. Junius Rusticus vary 4362 to 5625, and a set in the British Museum (44) belonging together vary 4700 to 5168. The series was— siliqua, 6= scripulum, 4 = suxtula, C = uncia, 12 = libra, 2·87 grs. 17·2 68·7 412 4950 the greater weight being the centumpondium of 495,000. Other weights were added to these from the Greek system— obolus, 6 = dniehnia, 2 = sicilicus, 4 = uncia; 8 6 grs. 51-5 103 412 and the sextula after Constantine had the name of solidus as a coin weight, or nomisma in Greek, marked N on the weights. A beautiful set of multiples of the scripulum was found near Lyons (38). from 1 to 10 × 17·28 grains, showing a libra of 4976. In Byzantine times in Egypt glass was used for coin-weights (30), averaging 68 for the solidus = 4896 for the libra. The Saxon and Norman ounce is said to average 416·5 (Num. Chron., 1871, 42), apparently the Roman uncia inherited. 67 grs. 6700; 402,000 chalcous, 8=obolus, 6=drachma, 100=mina, 60=talanton. 1·4 grs. 11·17 67 6700 402,000 Turning now to its usual trade values in Greece (44), the mean of 113 gives 67·15; but they vary more than the Egyptian examples, having a sub-variety both above and below the main body, which itself exactly coincides with the Egyptian weights. The greater part of those weights which bear names indicate a mina of double the usual reckoning, so that there was a light and a heavy system, a mina of the drachma and a mina of the stater, as in the Phœnician and Assyrian weights. In trade both the minæ were divided in ${\displaystyle \scriptstyle {\frac {1}{2}}}$, ${\displaystyle \scriptstyle {\frac {1}{4}}}$, ${\displaystyle \scriptstyle {\frac {1}{8}}}$, ${\displaystyle \scriptstyle {\frac {1}{3}}}$, and ${\displaystyle \scriptstyle {\frac {1}{6}}}$, regardless of the drachmæ. This unit passed also into Italy, the libra of Picenum and the double of the Etrurian and Sicilian libra (17); it was there divided in unciæ and scripulæ (44), the mean of 6 from Italy and Sicily being 6600; one weight (bought in Smyrna) has the name “Leitra” on it. In literature it is constantly referred to; but we may notice the “general mina” (Cleopatra), in Egypt, 16 unciæ=6600; the Ptolemaic talent, equal to the Attic in weight and divisions (Hero, Didymus); the Antiochian talent, equal to the Attic (Hero); the treaty of the Romans with Antiochus, naming talents of 80 libræ, i.e., mina of 16 unciæ; the Roman mina in Egypt, of 15 unciæ, probably the same diminished; and the Italic mina of 16 unciæ. It seems even to have lasted in Egypt till the Middle Ages, as Jabarti and the “kátib's guide” both name the raṭl misri (of Cairo) as 144 dirhems=6760. We have now ended our outline of ancient metrology, omitting all details that were not really necessary to a fair judgment on the subject, but trying to make as plain as possible the actual bases of information, to trust to no opinions apart from facts, and to leave what is stated as free as possible from the influence of theories. Theoretical values have nowhere been adopted here as the standards, contrary to the general practice of metrologists; but in each case the standard value is stated solely from the evidence in hand, quite regardless of how it will agree with the theoretical deduction from other weights or measures. Great refinement in statements of values is needless, looking to the uncertainties which beset us. There are innumerable theories unnoticed here; only those are explained which seem to have a reasonable likelihood, and others are only mentioned where it is needful to show that they have not been overlooked. In many cases fuller detail is given of less important points, when they have not been published before, and no other information can be referred to elsewhere; when any point is abundantly proved and known, it has been passed with a mere mention. Finally, to indicate where further information on different matters may be found reference is frequently made by a number to the list of works given below, some early and other works being omitted, of which all the data will be found in later books. For historical reference we may state the following units legally abolished. English Weights and Measures Abolished.—The yard and handful, or 40 inch ell, abolished in 1439. The yard and inch, or 37 inch ell (cloth measure), abolished after 1553; known later as the Scotch ell=37·06. Cloth ell of 45 inches, used till 1600. The yard of Henry VII.=35·963 inches. Saxon moneyers pound, or Tower pound, 5400 grains, abolished in 1527. Mark, ${\displaystyle \scriptstyle {\frac {2}{3}}}$ pound=3600 grains. Troy pound in use in 1415, established as monetary pound 1527, now restricted to gold, silver, and jewels, excepting diamonds and pearls. Merchant's pound, in 1270 established for all except gold, silver, and medicines=6750 grains, generally superseded by avoirdupois in 1303. Merchant's pound of 7200 grains, from France and Germany, also superseded. (“Avoirdepois” occurs in 1336, and has been thence continued; the Elizabethan standard was probably 7002 grains.) Ale gallon of 1601=282 cubic inches, and wine gallon of 1707=231 cubic inches, both abolished in 1824. Winchester corn bushel of 8×268·8 cubic inches and gallon of 274${\displaystyle \scriptstyle {\frac {1}{4}}}$ are the oldest examples known (Henry VII.), gradually modified until fixed in 1826 at 277·274, or 10 pounds of water. French Weights and Measures Abolished.—Often needed in reading older works. ligne, 12=pouce, 12=pied, 6=toise, 2000=lieue de poste. ·08883 in. 1·0658 12·7892 76·735 2·42219 miles. grain, 72=gros, 8=ouce???, 8=marc, 2=poids de marc. ·8197 gr. 59·021 472·17 3777·33 1·0792 ℔. Rhineland foot, much used in Germany, = 12·357 inches = the foot of the Scotch or English cloth ell of 37·06 inches, or 3×12·353. (1) A. Aurès, Métrologie Égyptienne, 1880; (2) A. Böckh, Metrologische Untersuchungen, 1838 (general); (3) P. Bortolotti, Del Primitivo Cubito Egizio, 1883; (4) J. Brandis, Münz-, Mass-, und Gewicht-Wesen, 1866 (specially Assyrian); (5) H. Brugsch, in Zeits. Aeg. Sp., 1870 (Edfu); (6) M. F. Chabas, Détermination Métrique, 1867 (Egyptian volumes); (7) Id., Recherches sur les Poids, Mesures, et Monnaies des anciens Égyptiens; (8) Id., Ztschr. f. Aegypt. Sprache, 1867, p. 57, 1870, p. 122 (Egyptian volumes); (9) H. W. Chisholm, Weighing and Measuring, 1877 (history of English measures); (10) Id., Ninth Rep. of Warden of Standards, 1875 (Assyrian); (11) A. Dumont, Mission en Thrace (Greek volumes); (12) Eisenlohr, Ztschr. Aeg. Sp., 1875 (Egyptian hon); (13) W. Golénischeff, in Rev. Eg., 1881, 177 (Egyptian weights); (14) C. W. Goodwin, in Ztschr. Aeg. Sp., 1873, p. 16 (shet); (15) B. V. Head, in Num. Chron., 1875; (16) Id., Jour. Inst. of Bankers, 1879 (systems of weight) ; (17) Id., Historia Numorum, 1887 (essential for coin weights and history of systems); (18) F. Hultsch, Griechische und Romische Metrologie, 1882 (essential for literary and monumental facts); (19) Ledrain, in Rev. Eg., 1881, p. 173 (Assyrian); (20) Leemans, Monumens Égyptiens, 1838 (Egyptian hon); (21) T. Mommsen, Histoire de la Moinaie Romaine; (22) Id., Monuments Divers (Egyptian weights); (23) Sir Isaac Newton, Dissertation upon the Sacred Cubit, 1737; (24) J. Oppert, Étalon des Mesures Assyriennes, 1875; (25) W. M. F. Petrie, Inductive Metrology, 1877 (principles and tentative results); (26) Id., Stonehenge, 1880 ; (27) Id., Pyramids and Temples of Gizeh, 1883; (28) Id., Naukratis, i., 1886 (principles, lists, and curves of weights); (29) Id., Tanis, ii., 1887 (lists and curves); (30) Id., , 1883, 419 (weights, Egyptian, &c.); (31) Id., Proc. Roy. Soc. Edin., 1883–84, 254 (mile); (32) R. S. Poole, Brit. Mus. Cat. of Coins, Egypt; (33) Vazquez Queipo, [{lang|fr|Essai sur les Systèmes Metriques}}, 1859 (general, and specially Arab and coins); (34) Records of the Past, vols. i., ii., vi. (Egyptian tributes, &c.); (35) E. Revillout, in Rev. Eg., 1881 (many papers on Egyptian weights, measures, and coins); (36) E. T. Rogers, Num. Chron., 1873 (Arab glass weights); (37) M. H. Sauvaire, in Jour. As. Soc., 1877, translation of Elias of Nisibis, with notes (remarkable for history of balance); Schillbach (lists of weights, all in next): (38) M. C. Soutzo, Étalons Pondéraux Primitifs, 1884 (lists of all weights published to date); (39) Id., Systèmes Monétaires Primitifs, 1884 (derivation of units); (40) G. Smith, in Zeits. Aeg. Sp., 1875; (41) L. Stern, in Rev. Eg., 1881, 171 (Egyptian weights); (42) P. Tannery, Rev. Arch., xli., 152; (43) E. Thomas, Numismata Orientalia, pt. i. (Indian weights). Many isolated papers in Revue Archéologique, Hellenic Journal, &c., are not specified above; and (44) a great amount of material is yet unpublished of weighings of weights of Troy (supplied through Dr Schliemann's kindness), Memphis, at the British Museum, Turin, &c., which may probably appear before long, and which has been utilized in this article. III. Commercial. In this section we shall only refer to such measures as are in actual use at the present time; the various systems of the Continental towns have been superseded by the metric system now in force, and are therefore not needed now except for historical purposes. Length:— inch, 12=foot, 3=yard, 5${\displaystyle \scriptstyle {\frac {1}{2}}}$=pole, 4=chain 10=furlong 8=mile. 1 in. 12 36 198 792 7920 63360 Hand, 4 inches; fathom, 2 yards; knot or geographical mile = 1′ = 1·1507 miles. The chain is divided in 100 links for land measure; link = 7·92 inches. Terms of square measure are squares of the long measures. Volume: dry:— pint, 2=quart, 4=gallon, 2=peck, 4=bushel, 8=quarter. cub. in. 34·659 69·318 277·274 554·548 2218·19 17745·6 Gill=${\displaystyle \scriptstyle {\frac {1}{4}}}$ pint; pottle=2 quarts; 5 quarters=wey or load; 2 weys=last. Volume: wet:— Pint and quart } gallon, 9=firkin, { 4=barrel or } 2=pipe, butt, or as above. hogshead, puncheon. cub. ins. 277·274 2495·5 9981·9 19963·8 Avoirdupois weight, for everything not excepted below:— drachm, 16=ounce, 16=pound, 14=stone, 2=quarter, 4=hundred, 20=ton. 27·3 grains. 437·5 7000 98,000 196,000 grs. 112 ℔ 2240 ℔ . Troy weight (gold, silver, platinum, and jewels, except diamonds and pearls):— grain, 24=pennyweight, 20=ounce, 12=pound. 1 grain 24 480 5760 Diamond and pearl weight:— grain, 4=carat, 150=ounce Troy. ·8 grain 3·2 480 Apothecaries' dispensing weight, for prescriptions only:— grain, 20=scruple, 3=drachm, 8=ounce, 12=pound. 1 grain 20 60 480 5760 Apothecaries' fluid measure:— minim, 60=drachm, 8=ounce, 20=pint, 8=gallon. 91 gr., water 54·7 437·5 8,750 70,000 ·036 cub. in. ·216 1·733 34·659 277·274 Metric System.—The report to the French National Assembly proposing this system was presented 17th March 1791, the meridian measurements finished and adopted 22d June 1799, an intermediate system of division and names tolerated 28th May 1812, abolished and pure decimal system enforced 1st January 1840. Since then Netherlands, Spain (1850), Italy, Greece, Austria (legalized 1876), Germany, Norway and Sweden (1878), Switzerland, Portugal, Mexico, Venezuela, Argentine Republic, Hayti, New Grenada, Mauritius, Congo Free State, and other states have adopted this system. The use of it is permissive in Great Britain, India, Canada, Chili, &c. The theory of the system is that the metre is a 10,000,000th of a quadrant of the earth through Paris; the litre is a cube of 116 metre; the gramme is 11000 of the litre filled with water at 4° C.; the franc weighs 5 grammes. The multiples are as follows:— British. France. Netherlands. Other Names. ·039 inch millimetre streep strich ·394 ,, centimetre duim zentimeter 3·937 ,, decimetre palm ... 39·370 ,, metre elle or aune metre, stab ·62138 mile kilometre mijle kilometer, stadion ·176 pint decilitre maatje ... 1·761 ,, litre kop liter or kanne 17·608 ,, decalitre schepel ... 88·038 ,, (50 litres) ... scheffel 176·077 ,, hectolitre mudde { hektoliter, fasse, ⁠kilot ·015 grain milligramme ... ... 15·43 ,, gramme wigtije dr??? 154·32 ,, decagramme lood loi??? 1543·23 ,, hectogramme ons ... 7716·17 ,, (500 grammes) ... pfund, livre 15432·35 ,, kilogramme pond ... 110·23 ℔ (50 kilogr.) ... { zentner, zollcent- ⁠ner 220·46 ,, 100 kilogr. ... { centner métri- ⁠que, quintal 2204·62 ,, 1000 kilogr. ... tonne, tonneau In land measure the unit is the are (10 metres square)=119·60 square yards; and the hectare=2·4736 acres. Other multiples of the units are merely nominal and not practically used. Table for Conversion of British and Metric Units. Inches. Milli-metres. Metres. Feet. CubicInches. CubicCentimetres. CubicMetres. CubicFeet. 1 25·399 1 3·2809 1 16·386 1 35·316 2 50·799 2 6·5618 2 32·772 2 70·633 3 76·199 3 9·8427 3 49·168 3 105·950 4 101·598 4 13·1230 4 85·545 4 141·266 5 126·998 5 16·4045 5 81·931 5 176·583 6 152·397 6 19·6854 6 98·317 6 211·S99 7 177·797 7 22·9663 7 114·703 7 247·216 8 203·196 8 26·2472 8 131·089 8 282·533 9 228·596 9 29·5281 9 147·476 9 317·849 Pints. Litres. Litres. Gallons. Grains. Grammes. Kilos. Pounds. 1 ·56755 1 ·22024 1 ·064799 1 2·6792 2 1·13310 2 ·44049 2 ·129598 2 5·3584 3 1·70265 3 ·66073 3 ·194397 3 8·0377 4 2·27020 4 ·88098 4 ·259196 4 10·7169 5 2·83775 5 1·10122 5 ·323994 5 13·3961 6 3·40530 6 1·32146 6 ·388794 6 16·0754 7 3·97286 7 1·34171 7 ·453593 7 18·7546 8 4·54041 8 1·76195 8 ·518392 8 21·4338 9 5·10796 9 1·98220 9 ·583190 9 24·1130 For approximate conversion either way use the following ratios:—8 metres = 315 inches (to 110000); 8 kilometres = 5 miles (to 1170); 4 litres = 7 pints (to 1100); 7 grammes = 108 grains (to 14000). Burmah.—Paulgaut 1 inch, taim 18 inches, saundaung 22 inches, dha 154 inches, dain 2·43 miles. Kait 251 grains, vis 3·59 , sait 14·36, ten 57·36, candy 533 . Candia.Pic 25·11 inches, carga (corn) 4·19 bushels, rottolo 1·165 , cantaro 116·5 , okka 2 65 .}} Ceylon.Seer 1·86 pints; 10 parrahs, 1 amomam, 5·6 bushels. China.Fau ·141 inches, tsun 1·41, chik 14·1, cheung 141, yan 1410 = 117·5 feet. Other chiks—itinerary 12·17, imperial 12·61, surveyor's 12·70, Peking 13·12, Canton 14·70 inches. Li = 1800 chiks of 12·17, 13·12, or 14·1. Kop 3·3 cubic inches; 10 = shing tsong ·96 pint, tau 9·6 (12 catties of water), hwuh 96 pints. Tael 580·3 grains; 16 = catty, 9328 grains or 1·333 ; picul 133·3 . Denmark.Tomme 1·03 inches, fod 12·357, aln 24·714; mül 4·6807 miles. Pott ·2126 gallons; 2 = kande, 2 = stübchen, 2 = viertel 1·7008 gallons; anker 8·2914 gallons. Pot ·02657 bushel, skieppe ·47835, fjerding ·9567, tonne 3·8268, last 84·188 bushels. Ort 15·1 grains, quintin 60·3, lod 241·2, unze 482·5, mark 3860, pund 7720 = 1·103  = 12 kilogramme. Lispund 17·646 , skippund 3·151 cwt. Tönde (land) 1·25 acres, (coal) 4·6775 bushels. Egypt.Dirá' of Nilometer 21·3 inches, dirá' beledi 22·7, dirá' handasi 25·13, pic or dirá Stambuli 26·65 inches; pic of land, 29·53, ḳaṣab 139·8 inches. Feddán 1120 acre. Rub'a 6·6 pints or quarts, wébe 6·6 gallons, ardeb 39·6 or 46·4 gallons. Dirhem 60·65 grains, raṭl 1·0131 , oḳḳa 2·7274 , ḳanṭár 101·31 . India.gaz = yard. gaz 27 inches, háth 18 inches. covid 18·6 inches. cottah (ḳaṭṭhá) 80 square yards; 20 = biggah, 1600 square yards. 24 mauris = cawri, 6400 square yards. Seer, 40 = maund, 20 = candy. Equivalents of Indian and other weights are as follows:— Commercial Weights, &c. Avoirdupois. BengalFactory. Madras. Bombay. ℔ oz. dr. mds. s. ch. mds. vls. pol. mds. s. plce. Acheen bahar of 200 } 423 6 13 5 26 13 16 7 10 15 4 27 ⁠catties of 2·117 ℔. Acheen guncha of 10 } 220 0 0 2 37 13 ·7 8 6 16 7 34 8 ·6 ⁠nelly. Anjengo candy of 20 } 560 0 0 7 20 0 22 3 8 20 0 0 ⁠maunds. Bencoolen bahar. 560 0 0 7 20 0 22 3 8 20 0 0 Bengal factory maund. 74 10 10 ·7 1 0 0 2 7 35 ·7 2 26 20 Bengal bazaar maund 82 2 2 ·1 1 4 0 3 2 11 ·3 2 37 10 Bombay candy of 20 } 560 0 0 7 20 0 22 3 8 20 0 0 ⁠maunds. Bussorah maund of 76 } 90 4 0 1 8 5 ·6 3 4 35 ·2 3 8 27 ·9 ⁠vakias. Bussorah maund of 24 } 28 8 0 0 15 4 ·3 1 1 4 ·8 1 0 21 ·4 ⁠vakias. Calicut maund of 100 } 30 0 0 0 16 1 ·1 1 1 24 1 2 25 ·7 ⁠pools. Cochin candy of 20 } 543 8 0 7 11 2 ·6 21 5 36 ·8 19 16 12 ·9 ⁠maunds. Gombroon bazaar candy. 7 8 0 0 4 0 0 2 16 0 10 21 ·4 Goa candy of 20 maunds. 495 0 0 6 25 2 ·9 19 6 16 17 27 4 ·3 Jonkceylon bahar of 8 } 485 5 5 ·3 6 20 0 19 3 12 17 13 10 ⁠capins. Madras candy of 20 } 500 0 0 6 28 0 20 0 0 17 34 8 ·6 ⁠maunds. Mocha bahar of 15 frazils 450 0 0 6 0 1 18 0 0 16 2 25 ·7 } 8 12 0 0 4 11 0 2 32 0 12 15 ⁠maund. Mysore candy of 7 morahs. 560 0 0 7 20 0 22 3 8 20 0 0 Pegu candy of 150 vis. 500 0 0 6 28 0 20 0 0 17 34 8 ·6 Penang pecul of 100 } 133 5 5 ·3 1 31 6 5 2 26 4 30 14 ·3 ⁠catties. Surat maund of 40 seers. 37 5 5 ·3 0 20 0 1 3 37 ·9 1 13 10 Surat pucca maund. 74 10 10 ·7 1 0 0 2 7 35 ·7 2 26 20 Tillycherry candy of } 600 0 0 8 0 2 24 0 0 21 17 4 ·3 ⁠20 maunds. Bengal bazaar weights are 110 greater than factory weights. Grain and native liquids are usually weighed. Japan.Boo, 10 = sun, 10 = shaku (11·948 inches = 1033 metre), 6 = ken, 60 = cho, 39 = ri (2·647 miles). Go (11·1 cubic inches), 10 = sho, 10 = to, 10 = koku (5·011 bushels). Momme 57·97 grains, 160 = kin or catty (1·325 ), 1000 momme = kuamme (8·281 ). Java.Ell 2734 inches. Kanne ·3282 gallon; rand, 396 = leaguer of arrack, 13313 gallons; 360 rands = leaguer of wine. Rice-sack, 2 = picul 13558 , 5 = timbang; coyang 3581 . Malacca.Covid 1818 inches; buncal 832 grains (gold and silver); kip 41  (tin); 100 catties = picul, 135 ; 3 piculs = bahar; 40 piculs = coyau of salt or rice; 500 gantons = 50 measures = 1 last, nearly 29 cwts.; chupah = 214 ; gautang (=ganton?), 9  of water at 62°. Malta.Palmo 10·3 inches, foot 7·2 and 11·17 inches, canna 82·4 inches; 16 tumoli = salma, 4·44 acres. Caffiso (oil) 4·58 gallons; barile (wine) 9·16 gallons; salma (corn) 7·9 bushels. Ounce 407·2 grains; rottolo 1·745 ; cantaro 174·5 . Mexico.—Vara 32·97 inches. Fanega 1·50 bushels. Libra 1·0142 . Morocco.—Canna 21 inches, commercial rottolo 119 , 100 = quintal; market rottolo 1·785 . Persia.—Gaz (gueze), 6 handbreadths or 25 inches. Royal gaz 3712 inches. Arash 38·27 inches. Parasang or farsakh 3 geographical miles (an hour's walk for a horse). Sextario (21 cubic inches), 4 = chenica, 2 = capicha (2·4 quarts), 25 = artaba (1·9 bushels). (These, as the native names show, are not native measures. As Chardin remarks, there are no true measures of capacity; even liquids are sold by weight.) Dirhem, 143 (Bussora), 147·8 (Tabriz), or 150 grs. Mescal, 2 = dirhem, 50 = ratl (7300 grains), 6 = man or batman. Russia.—Vershok (1·75 inches), 16 = archine, 3 = sagene (7 feet British, legally), 500 = verst (·663 mile); 2400 square sagenes = deciatine (2·7 acres). Tcharkey (·216 pint), 100 = vedro, 3 = anker, 40 vedros = sarakovaia (324·6 gallons). Garnietz 2 = tchetverka, 4 = tchetverik, 2 = paiak, 2 = osmin, 2 = tchetvert (5·77 bushels). Dola (·68578 grains); 96 = zolotnic; 8 = laua; 12 = funt (6319·7279 grains); 40 = pud (36·112 ); 10 = berkovitz; 3 = packen. Also 3 zolotnices = 1 loth. Siam.—Nin, 12 = küb (10 inches), 2 = sok, 4 = wa (79·999 inches on silver bar at 85° Fahr.). Thangsat, 10×10×20 nin, actual standard 1159·8477 cubic inches at 85° = 2·08 pecks. Thanan, 5×5×4 nin, standard 57·8800 at 85° = 1·67 pints. Tical or bat (234·04 grains), 4 = tael, 4 = catty or chang, 50 = picul or hap (133·738 ). Turkey.—Pic 26·8 or 27·06 inches; larger pic 27·9. Almud 1·15 gallons (of oil = 8 okas); 4 killows = fortin of 3·7 bushels (killow of rice=10 okas). Dram (49·5 grains), 100=chequi, 4=oka (2·8286 ); dram (49·5 grains), 180=rotl, 100=kintal or kantar (127·29 ). United States.—Inch=1·000049 British inch, and other measures in proportion. Gallon=·83292 British gallon. Bushel=;·96946 British bushel. Weight, as Great Britain. As weights of grain are often needed we add pounds weight in cubic feet. Wheat,Usual. Pease,American. IndianCorn. Oats,Russian. Beans,Egyptian. Barley,English. Rice. Loose....... 49 50 44 28 46 39 } 56 Close........ 53 12 54 47 33 50 44 See Report of Standards Department, 1884. WEIMAR, the capital of the grand-duchy of SaxeWeimar-Eisenach, the largest of the Thuringian states, is situated in a pleasant valley on the lira, 50 miles south west of Leipsic and 136 miles south-west of Berlin. Containing no very imposing edifices, and plainly and irregularly built, the town presents at first a somewhat unpretending and even dull appearance; but there is an air of elegance in its quiet and clean streets, which recalls the fact that it is the residence of a grand-duke and his court, and it still retains an indescribable atmosphere of refinement, dating from its golden age, when it won the titles of "poets city" and "the German Athens." Weimar has now no actual importance, though it will always remain a literary Mecca. It is a peaceful little German town, abounding in excellent educational, literary, artistic, and benevolent institutions; its society is cultured, though perhaps a little narrow; while the even tenour of its existence is undisturbed by any great commercial or manufacturing activity. The population in 1885 was 21,565; in 1782, six years after Goethe's arrival, it was about 7000; and in 1834, two years after his death, it was 10,638. Plan of Weimar. The reign of Goethe's friend and patron, the grand-duke Charles Augustus (1775-1828), represents accurately enough the golden age of Weimar; though even during the duke's minority, his mother, the duchess Amalia, had begun to make the little court a focus of light and leading in Germany. The most striking building in Weimar is the extensive palace, erected for Charles Augustus under the superintendence of Goethe 1789-1803, in place of one burned down in 1770. This building, with the associations of its erection, and its "poets' rooms," dedicated to Goethe, Schiller, Herder, and Wieland, epitomizes the characteristics of the town. The main interest of Weimar centres in these men and their more or less illustrious contemporaries; and, above all, Goethe, whose altar to the "genius hujus loci" still stands in the ducal park, is himself the genius of the place, just as Shakespeare is of Stratford-on-Avon, or Luther of Wittenberg. Goethe's residence from 1782 to 1832 (now opened as a "Goethe museum," with his collections and other reminiscences), the simple "garden-house" in the park, where he spent many summers, Schiller's humble abode, where he lived from 1802 till his death in 1805, and the grand-ducal burial vault, where the two poets rest side by side, are among the most frequented pilgrim resorts in Germany. Rietschel's bronze group of Goethe and Schiller (unveiled in 1857) stands appropriately in front of the theatre (much altered in 1868) which attained such distinction under their combined auspices. Not far off is the largo and clumsy parish church, built about 1400, of which Herder became pastor in 1776; close to the church is his statue, and his house is still the parsonage. Within the church are the tombs of Herder and of Duke Bernhard of Weimar, the hero of the Thirty Years War. The altar-piece—a Crucifixion—is said to be the masterpiece of Lucas Cranach, whose house is pointed out in the market-place. Wieland, who came to Weimar in 1772 as the duke's tutor, is also commemorated by a statue, and his house is indicated by a tablet. Among the other prominent buildings in Weimar are the library, containing 200,000 volumes and a valuable collection of portraits, busts, and literary and other curiosities; the museum, built in 1863-68 in the Renaissance style; the ancient church of St James, with the tombs of Lucas Cranach and Musæus; and the townhouse, built in 1841. Various points in the environs of Weimar are also interesting from their associations. Separated from the town by the park, laid out in the so-called English style by Goethe, is the chateau of Belvedere, built in 1724. To the north-east is Tiefurt, often the scene of al-fresco plays, in which the courtiers were the actors and the rocks and trees the scenery; and to the north-west is the chateau of Ettersburg, another favourite resort of Charles Augustus and his court. The history of Weimar, apart from its brilliant record at the end of the 18th and the beginning of the 19th century, is of comparatively little interest. The town is said to have existed in the 9th century, and in the 10th to have belonged to a collateral branch of the family of the counts of Orlamunde. About 1376 it fell to the landgraves of Thuringia, and in 1440 it passed to the electors of Saxony. In 1806 it was visited by Napoleon, whose half-formed intention of abolishing the duchy was only averted by the tact and address of the duchess Luise. The Muses have never left Weimar. Since 1860 it has been the seat of a good school of painting, repre sented by the landscape painters Preller, Kalckreuth, and Max Schmidt, and the historical painters Pauwels, Heumann, and Verlat. The frequent residence here also of the Abbe Liszt, from 1848 till his death in 1886, has preserved for Weimar quite an important place in the musical world. WEISSENFELS, an industrial town in the province of Saxony, Prussia, is situated on the Saale, 1812 miles south west of Leipsic and 19 miles south of Halle. It contains three churches, a spacious market-place, and various educational and benevolent institutions. The former palace, called the Augustusburg, built in 1664-90, occupies a site on a sandstone eminence near the town; this spacious edifice is now used as a military school. Weissenfels manufactures machinery, sugar, pasteboard, paper, leather goods, pottery, and gold and silver wares. It contains also an iron-foundry, and carries on trade in timber and grain. In the neighbourhood are large deposits of sandstone and lignite. Weissenfels is a place of considerable antiquity, and from 1657 till 1746 it was the capital of the dukes of Saxe-Weissenfels, a branch of the electoral house of Saxony. The body of Gustavus Adolphus was embalmed at Weissenfels after the battle of Liitzen. The population of the town in 1885 was 21,766. WEKA, or Weeka. See Ocydrome. WELLESLEY, Richard Wesley (or Wellesley), Marquis of (1760-1842), eldest son of the first earl of Mornington, an Irish peer, and eldest brother of the duke of Wellington, was born June 20, 1760. He was sent to Eton, where he was distinguished as an excellent classical scholar, and to Christ Church, Oxford. By his father's death in 1781 he became earl of Mornington, taking his seat in the Irish House of Peers. In 1784 he entered the English House of Commons as member for Beeralston. Soon afterwards he was appointed a lord of the treasury by Pitt, with whom he rapidly grew in favour. In 1793 he became a member of the board of control over Indian affairs; and, although he was best known to the public by his speeches 1. The word weight has in common use two meanings,—(1) the force exerted between the earth and a body, and (2) a mass which is weighed against other bodies. In scientific use, however, weight means only a property of matter by which it is most convenient to compare the relative amounts of masses. 2. See Nature, xxx. 205. 3. Computed from Fizeau, Ann. Bur. Long., 1878. 4. Computed from Chisholm, Weighing and Measuring, 1877, p. 162; also see p. 158. 5. Computed from Report of Standards Department, 1883. 6. Computed from Chisholm, op. cit., p. 112. 7. See Chisholm, op. cit., pp. 188, 189. For less refined purposes measuring bars should be supported on two points, 21 per cent, of the whole length from the ends. This equalizes the strains in the curves, and makes a minimum distortion.
36,554
129,969
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 48, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.296875
3
CC-MAIN-2023-40
latest
en
0.943895
http://mathhomeworkanswers.org/51123/how-many-tenths-are-equal-to-a-one-fifth-b-three-fifths
1,406,100,380,000,000,000
text/html
crawl-data/CC-MAIN-2014-23/segments/1405997877306.87/warc/CC-MAIN-20140722025757-00043-ip-10-33-131-23.ec2.internal.warc.gz
109,870,777
18,844
how many tenths are equal to (a) one fifth (b) three fifths ? How many tenths are equal to (a) one fifth (b) three fifths ? (1/5)=(2/10) =0.2 =20% (3/5)=3*(1/5)=0.6=60% answered Nov 23, 2012 by anonymous (a) two tenths, (b)six tenths answered Mar 24, 2013 by anonymous 35 views 69 views 126 views 38 views 328 views 21 views 36 views 21 views 28 views 48 views 76 views 329 views 36 views 1,115 views 20 views 25 views 64 views 386 views 34 views 33 views 151 views 270 views 122 views 261 views 578 views 80 views 307 views 405 views 91 views 52 views 838 views 55 views 35 views 11 views 43 views 31 views 77 views 157 views 254 views 194 views 75 views 165 views 51 views
237
678
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.25
3
CC-MAIN-2014-23
longest
en
0.915997
http://www.markedbyteachers.com/gcse/science/investigate-how-changing-the-length-of-a-piece-of-wire-affects-its-resistance.html
1,529,686,545,000,000,000
text/html
crawl-data/CC-MAIN-2018-26/segments/1529267864740.48/warc/CC-MAIN-20180622162604-20180622182604-00155.warc.gz
451,723,616
20,054
• Join over 1.2 million students every month • Accelerate your learning by 29% • Unlimited access from just £6.99 per month Page 1. 1 1 2. 2 2 3. 3 3 4. 4 4 # Investigate how changing the length of a piece of wire affects its resistance. Extracts from this document... Introduction Chris Caulfield – 11K125th October 2003 Wire Resistance Investigation Aim:        To investigate how changing the length of a piece of wire affects its resistance. Prediction:        I already know that a short piece of wire has fewer atoms for the electrons to flow through, using the domino effect.  This causes there to be less resistance in the circuit.    This means that as the length of the wire gets longer the resistance in the circuit increases. I predict that the longer the wire the higher the resistance.  This is because the electrons go slower because they meet more atoms.  Therefore the lower the current the higher the resistance. Equipment:        This is the apparatus I will use for my experiment: • Two batteries • Connecting wires • Volt meter • Ammeter • Heat proof mat • Wire • Crocodile clips Diagram: Plan:        We will be testing 5 different lengths of wire (20 cm, 30 cm, 40 cm, 50 cm and 60 cm) to find how this affects the resistance.  To do this we will do the following: 1. Set up all equipment (as seen in above diagram – an ammeter, a volt meter, 2 batteries and cut a length of wire, at least 60 cm long) Middle 30cm 20cm Test One Test Two Test Three Average Resistance(Ω) Length of wire Current(I) Potential difference (V) Current(I) Potential difference(V) Current(I) Potential difference(V) Current(I) Potential difference (V) 60cm 1 2.7 1 2.7 1 2.7 1 2.7 2.7 20cm 2.3 2.3 2.3 2.8 2.3 2.5 2.3 2.54 1.1 My prediction appears to be correct as I stated above, that the longer the length of wire the higher the resistance.  The results in my results table also support my prediction and I will not therefore be changing my original plan as there is no need to do so.  I have followed the plan exactly as I have written it down. Results:        This is the results table I am going to use for my final results so far. Conclusion Evaluation: My prediction was correct.  I did find that the shortest length of wire had the least resistance and that the longest piece had the highest reading.  My original prediction therefore appears to be accurate.  Also my preliminary work backed up my results from my experiment and has given me reliable results.   To make sure that the longest wire does have the greater resistance I would like to test longer pieces of wire to see whether the gradual increase that I found in my graph would continue. To improve the experiment I could use more pieces of wire, e.g. up to 1 metre of wire to see whether it behaves as the other wire lengths did.  Also I could increase the wire lengths by 5 cm, not 10 cm as in this experiment.  I could test their resistance and get more accurate results of how resistance behaves in various wire lengths.  This will provide more results to base a firmer conclusion on. This student written piece of work is one of many that can be found in our GCSE Electricity and Magnetism section. ## Found what you're looking for? • Start learning 29% faster today • 150,000+ documents available • Just £6.99 a month Not the one? Search for your essay title... • Join over 1.2 million students every month • Accelerate your learning by 29% • Unlimited access from just £6.99 per month # Related GCSE Electricity and Magnetism essays 1. ## Investigating how the length of a Wire affects its resistance. Although most of the points fit quite well into the line of best fit, it becomes obvious that as the resistance and length goes up, the point are further and further off the line of best fit. This can be argued to be because of many reasons. 2. ## To investigate how the length (mm) and the cross-sectional (mm2) area of a wire ... My method enabled me to correctly connect all the wires in the right places and place all the meters in their correct positions. All the results were accurate because although the procedures in my method were highly laborious, they were in order to avoid as many anomalies as possible. 1. ## &amp;quot;Are rechargeable batteries more economical than alkaline batteries?&amp;quot; follows this reaction, NH+4 (aq) --> NH3 (aq) + H+(aq) There are three common dry-cell batteries, and they usually are classified by different electrolyte used in the battery. The battery mentioned at the top is mildly acidic dry-cell battery. In an acid based battery usually sulphuric acid (H2SO4) 2. ## Resistance in a wire - investigate whether or not the length of a piece ... Crocodile Clips - To attach the leads to the nichrome wire and allow the circuit to be complete. Variable Resistor - To control the current and keep it the same at every length in order to make my experiment a fair test. 1. ## Investigation: How length affects the resistance in a wire. Therefore using cm means that this problem can be overridden. I will now test the formula to find the approximate resistance on a 0.8m (80cm) wire. It is predicted that the resistance will be 2.16 for this length, in accordance with my graph R = 0.27 x 80cm/10 This all amounts to R= 2.16 . 2. ## What affects the resistance of a piece of wire If we know the resistivity, the cross-sectional area and the length of the wire we can then calculate the resistance (as shown above). * The resistance of a wire is proportional to its length. There is also the fact that considering the change of current due to more length is 1. ## Find out how the length of a piece of wire affects is resistance. The units of volts are the same as joules per coulomb. Therefore, Ohms law says the more resistance means more energy used to pass electrons through the wire. Resistance is a measure of how much energy is needed to push the current through something. 2. ## The aim of my investigation is to investigate how length affects the resistance of ... Stretch out the wire onto the bench and measure with a metre rule. The power supply was put on 2V and the gradually increased the 4V, to find a suitable current for my main method. The slider on the resistor was moved to allow me to do five repeats. • Over 160,000 pieces of student written work • Annotated by experienced teachers • Ideas and feedback to
1,523
6,398
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.640625
4
CC-MAIN-2018-26
latest
en
0.850346
https://community.qlik.com/t5/New-to-QlikView/Sum-spend-in-the-right-currency/td-p/1381100
1,568,641,040,000,000,000
text/html
crawl-data/CC-MAIN-2019-39/segments/1568514572556.54/warc/CC-MAIN-20190916120037-20190916142037-00160.warc.gz
442,412,732
43,365
# New to QlikView Discussion board where members can get started with QlikView. New Contributor III ## Sum spend in the right currency Hi, I have a problem with defining the right expression in my tables. I am currently working on an application that shows our spend on different sites. My problem is that i want to show the spend in the right currency. If fx the application is used in the US, everything will be shown in USD. Im extracting my data from a table with this information: In this example, the cost is shown in DKK even though the buy is in EUR. The problem with my current expression (in red) is this: =SUM(If(DATAAREAID_Trans='sdx',COSTAMOUNTPOSTED,COSTAMOUNTPOSTED*(EXCHRATE/100))) DATAAREAID tells me from which site the transactions is coming from. In this example 'sdx' is located in the US, so I want all the numbers to be converted to USD. COSTAMOUNTPOSTED(Cost in tabel) is the amount of spend in the currency related to the DATAAREAID. So even though the buy is made in EUR the cost is shown in DKK if the site is located in Denmark and so on. I have a currencytabel but because the currency is originally made in EUR the spend is calculated from EUR to USD instead of DKK to USD. I need something that says that: if DATAAREAID_Trans = 'sdx' then COSTAMOUNTPOSTED else if DATAAREAID_Trans = 'rme' then COSTAMOUNTPOSTED*(EXCHRATE/100) where EXCHRATE = 'DKK' I just have no idea on how to get this done so every help would be much appreciated! Kind regards Nicolai Tags (3) 18 Replies ## Re: Sum spend in the right currency Try this? If(DATAAREAID_Trans = 'sdx', Sum(COSTAMOUNTPOSTED), If(DATAAREAID_Trans = 'rme', Sum({<EXCHRATE = {"DKK"}>} COSTAMOUNTPOSTED) * (Sum({<EXCHRATE = {"DKK"}>} EXCHRATE) / 100)) OR If(DATAAREAID_Trans = 'sdx', Sum(COSTAMOUNTPOSTED), Sum({<DATAAREAID_Trans = {"rme"}>} COSTAMOUNTPOSTED) * (Sum({<EXCHRATE = {"DKK"}, DATAAREAID_Trans = {'rme'}>} EXCHRATE) / 100)) Before develop something, think If placed (The Right information | To the right people | At the Right time | In the Right place | With the Right context) New Contributor III ## Re: Sum spend in the right currency Hi Anil, thank you for your fast reply. it seemed like a good idea but it doesn't work unfortunately. it just sums to zero ## Re: Sum spend in the right currency When you receive 0 that means, I believe your data model is wrong. Try these in your text boxes? 1) If(DATAAREAID_Trans = 'sdx', Sum(COSTAMOUNTPOSTED)) 2) Sum({<DATAAREAID_Trans = {"rme"}>} COSTAMOUNTPOSTED) * (Sum({<EXCHRATE = {"DKK"}, DATAAREAID_Trans = {'rme'}>} EXCHRATE) / 100) Then show me the images, Better if you provide sample Before develop something, think If placed (The Right information | To the right people | At the Right time | In the Right place | With the Right context) Contributor III ## Re: Sum spend in the right currency Hello, You can go with exchange rate table and can convert into single currency. Then go with selection of particular currency which you want to see. Like FromCurrency, ToCurrency,rate USD,USD,1 AUD,USD,0.56 GBP,USD,1.25 Based on selections or join you can divide with value to get value in respective currency or vice versa. Regards, ravi New Contributor III ## Re: Sum spend in the right currency Its the same unfortunately. I think its beacuse the exchangerate table is linked to the CURRENCYCODE ? Contributor III ## Re: Sum spend in the right currency Can you please try below code? RateTable: From,To,Rate USD,USD,1 AUD,USD,0.56 GBP,USD,1.25 ]; Fact: Product,Currency,Value A1,GBP,50 B1,GBP,20 A2,USD,30 B2,AUD,40 ]; left join ,Rate as CurrencyRate Resident RateTable; FactFinal: Value*CurrencyRate as Value_USD Resident Fact; Drop table Fact; Contributor III ## Re: Sum spend in the right currency Hello, Please find attached app for reference. Regards, Ravi New Contributor III ## Re: Sum spend in the right currency I can't see any attachments? New Contributor III
1,068
3,993
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.71875
3
CC-MAIN-2019-39
latest
en
0.882721
https://www.pinterest.com.au/explore/adding-integers/
1,516,136,903,000,000,000
text/html
crawl-data/CC-MAIN-2018-05/segments/1516084886739.5/warc/CC-MAIN-20180116204303-20180116224303-00477.warc.gz
975,159,332
56,558
### 12 Engaging Ways to Practice Adding Integers Check out these 12 ideas to help students become masters at adding positive and negative integers. ### Adding Integers Dot Game(s) - UPDATED to 4 Game Bundle Adding Integers Dot Game: make with no negative numbers ### Adding Integers (Foldable) This foldable provided students with practice modeling integer addition using integer chips and the number lines. There are a total of 12 problems, as well as space to write the "rule" which they will discover after completing the problems given. ### 123 Switch! (Game to Practice Adding/Subtraction Integers) (A Sea of Math) (Game to Practice Adding/Subtraction Integers) switch it up to figure missing integers to introduce equation/variable for those who may not have exposure to it or be comfortable. Use as warm up activity when working with other groups ### Face-Off! An Integer Card Game Middle School Algebra & Functions Activities: Face-Off! An Integer Card Game ### How to Make Adding Integers Stick Forever A step by step guide for breaking down the teaching of additive inverse and adding & subtracting integers using I Can Statements. This is a great addition to the interactive notebook, and you can download your own copy for FREE. ### Can you solve it? Working with Integers - Powerpoint Game This jeopardy game is on Powerpoint and contains 5 categories: Adding, Subtracting, Multiplying, Dividing, and Rules of Integers. There are 25 questions. Pinterest
312
1,480
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.828125
3
CC-MAIN-2018-05
latest
en
0.848935
https://www.physicsforums.com/threads/how-to-scale-this-expression.576262/
1,519,157,735,000,000,000
text/html
crawl-data/CC-MAIN-2018-09/segments/1518891813088.82/warc/CC-MAIN-20180220185145-20180220205145-00518.warc.gz
915,854,259
15,530
# How to scale this expression 1. Feb 10, 2012 ### pamparana Hello everyone, I have a bit of an issue regarding scaling of an expression. So, the scenario is as follows. I have a confidence value that can be associated with the solution given by an optimization routine and it is as follows: C = exp(-A)/(exp(-A) + exp(-B)) where A, B and C are some energy values returned by the optimization routine and C represents the confidence or the probability assigned to the solution A. Also, B is always greater than A. After some simple manipulation, the expression becomes: C = 1.0 / (1 + exp(A-B)) Now, in the beginning my issue was that the values A, B and C were usually quite large (in tens of thousands). So this expression was giving values of 0.5 when A and B were very close and when the difference was something a bit larger (in absolute numbers), then the expression would basically become 1. So, I realized I needed to do some normalization and the first thing I tried was divide everything by A. So, now the expression becomes: C = 1.0 / (1 + exp(1-B/A)) Now typically B/A is something from 1 to 1.01. So, now I have a similar problem: As exp(1-B/A) will basically be 0.5. So, what I would like to do is introduce some scaling, normalization on this expression that would help me basically capture the changes in my data range. I would be grateful for any suggestions that anyone might have. Thanks, Luca 2. Feb 10, 2012 ### coelho For some reason, for me this formula resembles Statistical Mechanics, more exactly the Partition Function. If this formula didnt come directly from it, maybe you could read about it, see if it applies to your problem, and see if there is already some technique for doing what you want. 3. Feb 11, 2012 ### pamparana Hello, it is sort of derived from the Ising model but actually it is just a coincidence that it looks like the partition function. My problem is actually much simpler: So, ok. C= 1 / (1 + e^(B/A)) for the moment assume B >= A, So C can range from 0.5 to 1. Typically, B/A ranges from 1 to 1.01. What I would like to do is have some sort of scaling so when B/A is "close" to 1, then C is close to 0.5 but when it starts to diverge, then C starts to get closer to 1. However, I do not want a linear scale as I want to exaggerate the differences. I can now a priori what the maximum B and A values will be. I was wondering what suitable function can I use. One thing that comes to mind is to actually use the exponential: So, C = 1 / (1 + e^(e^(factor)*B/A)). However, I am not sure how I can derive this "factor" term in a suitable way from B and A values that would make sense. Thanks, Luca 4. Feb 11, 2012 ### chiro Hey pamparana. Have you considered modelling your variables on derivative information? Basically what I am getting at is using derivative information to work backwards to get an empirical value for your 'factor' variable. So if you specify specific derivative behaviour at some given point, then you can use that to get equations for 'factor' and hence evaluate it for those given conditions.
768
3,102
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.578125
4
CC-MAIN-2018-09
longest
en
0.961859
https://manningbooks.medium.com/analyzing-stock-price-time-series-with-fortran-arrays-part-2-bf342b80b810?source=post_internal_links---------1-------------------------------
1,642,905,029,000,000,000
text/html
crawl-data/CC-MAIN-2022-05/segments/1642320303956.14/warc/CC-MAIN-20220123015212-20220123045212-00384.warc.gz
409,963,415
50,882
# Analyzing Stock Price Time Series with Fortran Arrays, Part 2 From Modern Fortran by Milan Curcic __________________________________________________________________ Take 37% off Modern Fortran. Just enter code fcccurcic into the discount box at checkout at manning.com. __________________________________________________________________ In this part of the article, we’ll dive deeper into explicit allocation and slicing of Fortran arrays, and demonstrate how to use these techniques to find best and worst performing stock. ## Part 2: Allocating, indexing, and slicing arrays for stock price analysis In Part 1, I covered the basics of declaring and initializing dynamic Fortran arrays. I also introduced the array constructors, which provide an easy way to allocate and initialize an array in a single line of code. If you haven’t yet read Part 1, I suggest that you go back and read it before moving on. In Part 2, we’ll learn how allocation and deallocation of Fortran arrays works in depth, allowing you to allocate arrays with arbitrary ranges (start and end indices). You’ll also learn how to slice the arrays in any direction and with custom stride to subset exactly the elements that you need. Finally, in this part we’ll apply this knowledge to implementing a solution to our first challenge — finding the best and worst performing stocks. ## Allocating arrays of certain size or range In Part 1, we’ve learned how to declare and initialize dynamic arrays. However, what if we need to assign values to individual array elements, one by one, in a loop? This will be the case as we load the data from CSV files into arrays — we’ll iterate over records in files, and assign values to array elements one at a time. However, we don’t really have a way to initialize the data arrays like we did before with `stock_symbols`. Note that implicitly allocating by assigning an empty array `[integer ::]` or `[real ::]` won’t work here because we may need to index elements of an array in some other order than just appending values. This calls for a more explicit mechanism to allocate the array without assigning known values to it: `real, allocatable :: a(:) ! declare a dynamic array a integer :: im = 5 ! allocate(a(im)) ! allocate memory for array a with im elements` The above statement tells the program to reserve memory for the array `a` of size `im`, in this case 5. When invoked like this, `a` will by default have the lower bound of `1`, and upper bound of `im`. The lower bound of `1` is the default, similar to Julia, R, or MATLAB. This is unlike C, C++, Python, or JavaScript, where array or list indices begin with `0`. However, Fortran doesn’t impose a constaint to the start index being 1, unlike Python, where the first index is always 0. You can specify the lower and upper bounds in the allocation statement: `integer :: is = -5, ie = 10 allocate(a(is:ie)) ! Allocate a with range from is to ie inclusive` Notice that I know used a colon (`:`) between `is` and `ie` to specify the range. This range is inclusive (unlike in Python!), so the size of `a` is now `ie - is + 1`, in this case 16. You can use intrinsic functions `lbound` and `ubound` to get the lower and upper bound, respectively, of any array. ## Allocating an array from another array It’s also possible to dynamically allocate an array based on the size of another array. The `allocate` statement accepts two optional arguments: For example, allocating using `a` from `b` using mold will reserve the space in memory for `a`, but will not initialize its elements: `real, allocatable :: a(:), b(:) allocate(b(10:20)) allocate(a, mold=b) ! allocate a with same range and size as b a = 0` However, if we allocate `a` from `b` using `source`, it will be allocated and initialized with values of `b`: `real, allocatable :: a(:), b(:) b = [1.0, 2.0, 3.0] allocate(a, source=b) ! allocate and initialize a from b` Style Tip: No matter how you choose to allocate your arrays, always initialize them immediately after allocation. This will minimize the chance of accidentally using uninitialized arrays in expressions. While Fortran will allow you to do this, you’ll likely end up with gibberish results. You may have noticed that when describing the array constructors earlier, I initialized arrays without explicitly allocating them with an `allocate` statement. Why did I do that? You may rightfully ask, do I need to explicitly allocate arrays or not? Since Fortran 2003, there has been a convenient feature of the language called allocation on assignment. Automatic allocation on assignment: If you assign an array to an allocatable array variable, the target array variable is automatically allocated with the correct size to match the array on the right-hand side. The array variable can be already allocated or not. If it is, it will be re-allocated if its current size differs from that of the source array. A great example to play with this is appending elements to an array on the fly: `integer, allocatable :: a(:) a = [integer ::] ! create an empty array [] a = [a, 1] ! append 1 to a, now [1] a = [a, 2] ! append 2 to a, now [1, 2] a = [a, 2 * a] ! [1, 2, 2, 4]` This feature is particularly useful when trying to assign an array that is a result of a function, and whose size is not known ahead of time. There is another important difference between explicit allocation with the `allocate` statement and allocation on assignment. The former will trigger a runtime error if issued twice, that is, if you issue an `allocate` statement on an object that is already allocated. On the other side, the latter will gracefully re-allocate the array if already allocated. To be able to effectively reuse dynamic arrays, Fortran gives as a counterpart to the `allocate` statement which allows us to explicitly clear the object from memory. ## Cleaning up after use When we are done working with the array, we can clean it from memory like this: `deallocate(a) ! clear a from memory` Automatic deallocation: An allocatable array is automatically deallocated when it goes out of scope. For example, if you declare and allocate an array inside of a function or subroutine, it will be deallocated on return. After issuing `deallocate`, array `a` must be allocated again before using it in the right-hand side of expressions. We’ll apply this mechanism to reuse arrays between different stocks. Much like it is an error to allocate an object that is already allocated, it is also an error to deallocate an object that is not allocated! In the next section I explain how you can check the allocation status so that you never erroneously allocate or deallocate and object twice. Otherwise, there is no restriction with regard to whether the array has been initialized or not. You are free to deallocate an uninitialized array, for example if you learn that the arrays is not of the expected size, or similar. Style Tip: Deallocate all allocatable variables when you are done working with them. Figure 2 illustrates a typical life cycle of a dynamic array. We first declare the array as `allocatable`. At this point, the array is not yet allocated in memory and its size is unspecified. When ready, we issue the `allocate` statement to reserve a chunk of memory to hold this array. This is also where we decide the size of the array or the start and end indices (in this example, 3 and 8). If not allocating from another source array, the values will be uninitialized. We thus need to initialize the array before doing anything else with it. Finally, once we’re done working with the array, we issue the `deallocate` statement to release the memory that was holding the array. The status of the array is now back to unallocated, and is available for allocation. A dynamic array can be reused like this any number of times, even with different sizes or start and end indices. This is exactly what we’ll do in our stock price analysis app. For each stock, we’ll allocate the arrays, use them to load the data from files, work on them, and then deallocate them before passing them on to the next stock. ## Checking for allocation status It will at times be useful to know the allocation status of a variable, that is, whether it’s currently allocated or not. To do this, we can use the intrinsic `allocated` function: `real, allocatable :: a(:) print *, allocated(a) ! will print “F” allocate(a(10)) print *, allocated(a) ! will print “T” deallocate(a) print *, allocated(a) ! will print “F”` Trying to allocate an already allocated variable, or to deallocate a variable that is not allocated, will trigger a run-time error. Style Tip: Always check for allocation status before explicitly allocating or deallocating a variable. ## Catching allocation and deallocation errors Your allocations and deallocations will occasionally fail. This can happen if you try to allocate more memory than available, if you try to allocate an object that is already allocated, or free an object that has been freed. When this happens, the program will abort. However, the `allocate` statement also comes with built-in exception handling if you want finer control over what happens when the allocation fails: `allocate(u(im), stat=stat, errmsg=err)` where `stat` and `errmsg` are optional arguments: By using the built-in exception handling, you get the opportunity to decide how should the program proceed if the allocation fails. For example, if there is not enough memory to allocate a large array, perhaps we can split the work into smaller chunks. Even if you want the program to stop on allocation failure, this approach lets you handle things gracefully and print a meaningful error message. Style Tip: If you want control over what happens if (de)allocation fails, use `stat` and `errmsg` in your `allocate` and `deallocate` statements to catch any errors that may come up. Of course, you’ll still need to tell the program what to do if an error occurs, for example, stop the program with a custom message, print a warning message and continue running, or try to recover is some other way. I think we should use the built-in exception handling in our stock analysis app. However, we’re going to need this for several arrays. This seems suitable to implement once in a subroutine, and then reuse it as needed. Explicitly allocating and deallocating arrays can be quite tedious. This is especially true if you decide to make use of the built-in exception handling. If you’re working with many different arrays at a time, this can quickly build up to a lot of boilerplate code. Let’s implement simple subroutines `alloc` and `free` that allocate and deallocate, respectively, an input array, and also handle exceptions. Both subroutines should use the `stat` and `errmsg` arguments to catch and report any errors if they occur. Once implemented you should be able to allocate and free your arrays like this: `call alloc(a, 5) ! do work with a call free(a)` Let’s start with the allocator subroutine `alloc`. For the key functionality to work our subroutine needs to: Here’s the implementation: `subroutine alloc(a, n) real, allocatable, intent(in out) :: a(:) integer, intent(in) :: n integer :: stat character(len=100) :: errmsg if (allocated(a)) call free(a) allocate(a(n), stat=stat, errmsg=errmsg) if (stat > 0) error stop errmsg end subroutine alloc` Now, let’s take a look at the implementation of the `free` subroutine: `subroutine free(a) real, allocatable, intent(in out) :: a(:) integer :: stat character(len=100) :: errmsg if (.not. allocated(a)) return deallocate(a, stat=stat, errmsg=errmsg) if (stat > 0) error stop errmsg end subroutine free` The code is very similar to `alloc` except that here, at the start of the executable section of the code, we check if `a` is already allocated. If not, our job here is done and we can return immediately. These subroutines are also part of the `stock-prices` repository. You can find them in `stock_prices/src/mod_alloc.f90`, and are used by the CSV reader in `stock_prices/src/mod_io.f90`. We’ll use these convenience subroutines to greatly reduce the boilerplate in the `read_stock` subroutine. ## Implementing the CSV reader subroutine Having covered the detailed mechanics of allocating and deallocating arrays including the built-in exception handling, we finally arrive to implementing the CSV file reader subroutine: `subroutine read_stock(filename, time, open, high,& low, close, adjclose, volume) ... integer :: fileunit integer :: n, nm nm = num_records(filename) - 1 if (allocated(time)) deallocate(time) allocate(character(len=10) :: time(nm)) call alloc(open, nm) call alloc(high, nm) call alloc(low, nm) call alloc(close, nm) call alloc(adjclose, nm) call alloc(volume, nm) open(newunit=fileunit, file=filename) ! open the file read(fileunit, fmt=*, end=1) ! use read() to skip the CSV header do n = 1, nm ! loop over records and store into array elements read(fileunit, fmt=*, end=1) time(n), open(n),& high(n), low(n), close(n), adjclose(n), volume(n) end do 1 close(fileunit) ! close file when done end subroutine read_stock` To find the length of the arrays before I allocate them, I inquire the length of the CSV file using a custom function `num_records`, defined in `stock-prices/src/mod_io.f90`. If you’re wondering what the number `1` means in the `1 close(fileunit)`, it’s just a line label that Fortran uses if and when it encounters an exception in the `read(fileunit, fmt=*, end=1)` statements. If you’re interested about how this function works, take a look inside `stock-prices/src/mod_io.f90`. I won’t spend much time on the I/O-specific code here, as we just need it get going with the array analysis. On every subroutine entry, the arrays `time`, `open`, `high`, `low`, `close`, `adjclose`, and volume will be allocated with size `nm`. The subroutine `alloc` now seamlessly reallocates the arrays for us. Notice that we still use the explicit way of allocating and deallocating the array of time stamps. This is because we implemented the convenience subroutines `alloc` and `free` that work on real arrays. Because of Fortran’s strong typing discipline, we can’t just pass an array of strings to a subroutine that expects an array of reals. We’ll learn later how to write generic procedures that can accept arguments of different type. For now, explicitly allocating the array of time stamps will do. Furthermore, we also need to specify the string length when allocating the `time` array. Having read the CSV files and loaded the stock price arrays with the data, we can move on the the actual analysis and fun with arrays. Getting the number of lines in a text file: If you’re curious how the `num_records` function is implemented, feel free to take a look in `stock-prices/src/mod_io.f90`. This function opens a file and counts the number of lines by reading it line by line. ## Indexing and slicing arrays Did you notice that the stock data in the CSV files are ordered from most recent to oldest? This means that when we read it into arrays from top-to-bottom, the first element will correspond to the most recent stock price. Let’s reverse the arrays so that they are oriented in a more natural way, going forward in time with the index number. If we express the reverse operation as a function, we could apply it to any array like this: `adjclose = reverse(adjclose)` The `reverse` function will prove useful for the other two objectives of the stock-prices app. Before implementing it, we need to know a few things about how array indexing and slicing works. To select a single element, we enclose an integer index inside the parentheses, for example `adjclose(1)` will refer to the first element of the array, `adjclose(5)` to the fifth, and so on. To select a range of elements, for example from fifth to tenth, use the start and end indices inside the parentheses, separated by a colon: `real, allocatable :: subset(:) ... subset = adjclose(5:10)` In this case, `subset` will be automatically allocated as an array with 6 elements, and values corresponding to those of `adjclose` from index 5 to 10. By default, the slice `adjclose(start:end)` will include all elements between indices `start` and `end`, inclusive. However, you can specify an arbitrary stride. For example, `adjclose(5:10:2)` will result in a slice with elements 5, 7, and 9. The general syntax for slicing an array `a` with a custom stride is: `a(start:end:stride)` where `start`, `end`, and `stride` are integer variables, constants, or expressions. `start` and `end` can have any valid integer value, including zero and negative values. `stride` must be a non-zero (positive or negative) integer. Similar rules apply as for `start`, `end`, and `stride` of `do`-loops: Furthermore, if `start` equals the lower bound of an array, it can be omitted, and same is true if `end` equals the upper bound of an array. For example, if we declare an array as `real :: a(10:20)`, then the following array references and slices all correspond to the same array: `a`, `a(:)`, `a(10:20)`, `a(10:)`, `a(:20)`, `a(::1)`. The last syntax from this list is particularly useful when you need to slice every `n`-th element of the whole array – it’s as simple as `a(::n)` If you have experience with slicing lists in Python, I bet this feels familiar. ## Intermezzo: Reversing an array Let’s write a function `reverse` that accepts a real 1-dimensional array as input argument, and returns the same array in reverse order. We’ll use array slicing rules to perform the reversal. We can solve this in only two steps and write it as a single-line function (not counting the declaration code). First, we know that since we are just reversing the order of elements, the resulting array will always be of same size as the input array. The size will also correspond to the end index of the array. Second, once we know the size, we can slice the input from last to first and use the negative stride to go backward: `pure function reverse(x) real, intent(in) :: x(:) ! assumed-size input array real :: reverse(size(x)) ! declare reverse with same size as x reverse = x(size(x):1:-1) ! use negative stride to copy backwards end function reverse` Notice that our input array doesn’t need to be declared `allocatable`. This is the so-called assumed-size array and it will take whatever size array is passed by the caller. We can use the information about the size directly when declaring the result array. You can test your new function by reversing the input array twice and comparing it to itself: `print *, all(a == reverse(reverse(a))) ! should always print “T”` You may be wondering why make this a separate function at all when we can just do `x(size(x):1:-1)` to reverse any array. There are two advantages to making this a dedicated `reverse()` function. First, if you need to reverse an array more than a few times, the slicing syntax above soon becomes unwieldy. Every time you read it, there is an extra step in the thought process to understand the intention behind the syntax. Second, the slicing syntax is allowed only when referencing an array variable, and you can’t use it on expressions, array constructors, or function results. In contrast, you can pass any of those as an argument to `reverse()`. This is why we can make such a test like `all(x == reverse(reverse(x)))`. Try it! Take some time to play with various ways to slice arrays. Try different values of start, end, and stride. What happens if you try to create a slice that is bigger than the array itself? In other words, can you reference an array out-of-bounds? Referencing array elements out of bounds: Be very careful to not reference array elements that are out of bounds! Fortran itself doesn’t forbid this, but you will either end up with an invalid value or with a segmentation fault, which can be particularly difficult to debug. By default, compilers don’t check if an out-of-bounds reference occurs during run-time, but you can enable it with a compiler flag. Use `gfortran -fcheck=bounds` and `ifort -check bounds` for GNU and Intel Fortran compilers respectively. Keep in mind that this can result in significantly slower programs, so it’s best if used during development and debugging, but not in production. Now that we understand how array indexing works, it’s straightforward to calculate the stock gain over the whole time series. It’s simply a matter of taking a difference between last and first element of the adjusted close price to calculate the absolute gain in USD: `adjclose = reverse(adjclose) gain = (adjclose(size(adjclose)) - adjclose(1))` Here, I am using the intrinsic `size` function, which returns the integer total number of elements, to reference the last element of the array. Like everything else we did before, `gain` must be declared, in this case a `real` scalar. The absolute gain, however, only tells us how much the stock grew over a period of time, but it doesn’t tell us anything about whether that growth is small or large relative to the stock price itself. For example, a gain from \$1 to \$10 per share is greater than the gain from \$100 to \$200 per share, assuming you invest \$100 in either stock. In the former case, you will come out with \$1000, whereas in the latter case you will come out with just \$200! To calculate the relative gain in percent, we can divide the absolute gain by the initial stock price, multiply by 100 to get to percent, that is, `gain / adjclose(1) * 100`. For brevity, I will also round the relative gain to the nearest integer using the intrinsic function `nint`: `print *, symbols(n), gain, nint(gain / adjclose(1) * 100)` The output of the program is: `2000-01-03 through 2018-05-14 Symbol, Gain (USD), Relative gain (%) --------------------------------- AAPL 184.594589 5192 AMZN 1512.16003 1692 CRAY 9.60000038 56 CSCO 1.71649933 4 HPQ 1.55270004 7 IBM 60.9193039 73 INTC 25.8368015 89 MSFT 59.4120979 154 NVDA 251.745300 6964 ORCL 20.3501987 77` From this output, we can see that Amazon had the largest absolute gain of \$1512.16 per share, and Hewlett-Packard had the smallest gain of only \$1.55 per share. However, the relative gain is more meaningful then the absolute amount per share because it tells us how much has the stock gained relative to its starting price. Looking at relative gain, Nvidia had a formidable 6864% growth, with Apple being the runner up with 5192%. The worst performing stock was that of Cisco Systems (CSCO), with only 4% growth over this time period. If you have cloned the stock-prices repo from Github, it’s straightforward to compile and run this program. From the `stock-prices` directory, type: `Make ./stock_gain` You can read the full program in `stock-prices/src/stock_gain.f90`. We have now covered a lot of the nitty-gritty of how arrays work. We’ll use this knowledge in Part 3 of this article sequence to implement solutions to the remaining two stock price analysis challenges. ## Summary In this part, we continued from where we left of at the end of Part 1, and dug deeper into the mechanics of allocation and deallocation of dynamic Fortran arrays. We covered allocating arrays with arbitrary index ranges, and saw how we can slice and dice through the arrays with custom strides. In this article we also implemented the solution to the first of the three challenges. In the third and final part of this sequence, we’ll use built-in Fortran functions and whole-array arithmetic to calculate metrics such as moving average and standard deviation. We’ll use these to tackle the remaining two challenges — identifying which stocks are riskier than others at any given time, and finding good times to buy or sell stock. If you want to learn more about the book, check it out on liveBook here and see this slide deck.
5,519
24,173
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2022-05
latest
en
0.861471
https://practice.geeksforgeeks.org/problems/max-sum-without-adjacents2430/1
1,624,538,983,000,000,000
text/html
crawl-data/CC-MAIN-2021-25/segments/1623488553635.87/warc/CC-MAIN-20210624110458-20210624140458-00334.warc.gz
417,185,153
25,794
Showing: Handle Score @Ibrahim Nash 6454 @mb1973 5714 @Quandray 5245 @akhayrutdinov 5111 @saiujwal13083 5046 @sanjay05 3762 @kirtidee18 3709 @mantu_singh 3556 @marius_valentin_dragoi 3523 @sushant_a 3459 Easy Accuracy: 58.27% Submissions: 4516 Points: 2 Given an array Arr of size N containing positive integers. Find the maximum sum of a subsequence such that no two numbers in the sequence should be adjacent in the array. Example 1: Input: N = 6 Arr[] = {5, 5, 10, 100, 10, 5} Output: 110 Explanation: If you take indices 0, 3 and 5, then Arr[0]+Arr[3]+Arr[5] = 5+100+5 = 110. Example 2: Input: N = 4 Arr[] = {3, 2, 7, 10} Output: 13 Explanation: 3 and 10 forms a non continuous subsequence with maximum sum. You don't need to read input or print anything. Your task is to complete the function findMaxSum() which takes the array of integers arr and n as parameters and returns an integer denoting the answer. Expected Time Complexity: O(N) Expected Auxiliary Space: O(1) Constraints: 1 ≤ N ≤ 106 1 ≤ Arri ≤ 107 ### Editorial We strongly recommend solving this problem on your own before viewing its editorial. Do you still want to view the editorial?
390
1,165
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.234375
3
CC-MAIN-2021-25
longest
en
0.693391
https://www.lotterypost.com/thread/239960
1,480,854,680,000,000,000
text/html
crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00006-ip-10-31-129-80.ec2.internal.warc.gz
991,990,564
14,692
Welcome Guest You last visited December 4, 2016, 7:09 am All times shown are Eastern Time (GMT-5:00) # Seek a System Journey 12/11/11 Race work out For Pic 3 Topic closed. 5 replies. Last post 5 years ago by Idesire\$. Page 1 of 1 Ellenwood,Georgia United States Member #60512 April 21, 2008 471 Posts Offline Posted: December 11, 2011, 11:59 am - IP Logged To the number that appears on the 1st day of the week or month in pic 3 games. 011 and 110 these totals together with the number that appeared in the pic 3.They are to be set up accordingly directly beneath one another and added by the ancient Egyptian method. Example :(What If) 406 appeared in Pick 3 Plus 406 Plus 011 Plus 110 ___________ Total 527 Now take the appearing number again  406 Place Beneath the above                       527 Added together and you get                  923 Your probable numbers are selected from up and down corner to corner and the last 2 combinations across. 459---022---673---629---423---527---923 Good work United States Member #114679 August 5, 2011 467 Posts Offline Posted: December 11, 2011, 9:01 pm - IP Logged Hello Idesire,  have you tried this system on pick 4? LUCKYDOC, Ellenwood,Georgia United States Member #60512 April 21, 2008 471 Posts Offline Posted: December 11, 2011, 9:36 pm - IP Logged Hello Idesire,  have you tried this system on pick 4? Sorry no I haven't but if you do lt let me know New Mexico United States Member #86099 January 29, 2010 11116 Posts Offline Posted: December 12, 2011, 12:06 pm - IP Logged To the number that appears on the 1st day of the week or month in pic 3 games. 011 and 110 these totals together with the number that appeared in the pic 3.They are to be set up accordingly directly beneath one another and added by the ancient Egyptian method. Example :(What If) 406 appeared in Pick 3 Plus 406 Plus 011 Plus 110 ___________ Total 527 Now take the appearing number again  406 Place Beneath the above                       527 Added together and you get                  923 Your probable numbers are selected from up and down corner to corner and the last 2 combinations across. 459---022---673---629---423---527---923 Good work Ellenwood,Georgia United States Member #60512 April 21, 2008 471 Posts Offline Posted: December 13, 2011, 8:44 am - IP Logged Yes it just may work, less writing you would have to do. Ellenwood,Georgia United States Member #60512 April 21, 2008 471 Posts Offline Posted: December 14, 2011, 10:11 am - IP Logged Seek A system 12-14-11 Great info for Triples Page 1 of 1
772
2,591
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.546875
3
CC-MAIN-2016-50
latest
en
0.759629
https://stats.stackexchange.com/questions/tagged/permutation
1,653,427,501,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662577259.70/warc/CC-MAIN-20220524203438-20220524233438-00469.warc.gz
613,977,793
68,551
# Questions tagged [permutation] The tag has no usage guidance. 32 questions Filter by Sorted by Tagged with 136 views ### What is the chance all choices are different? Suppose everyone (7 people) chooses independently and randomly out of 7 choices. What are the odds that no two people have picked the same option? I feel like the answer is $5040/823543,$ but I am ... • 43 22 views ### Besides computational feasibility, what kind of problems may occur when imputing missing data by exhaustive permutation? I don't know if "exhaustive permutation" is the correct term, but happy to be corrected if it isn't. Here's an example, which hopefully will clarify what I mean. Let's say I have the ... • 127 13 views ### How to understand embedding dimension in permutation entropy when checking predictabilty in electric load data? I am predicting electric load data with different deep learning models and I am trying to define the predictability of the data. So far I came across the permutation entropy (PE) as a measurement for ... • 111 23 views ### effect size for pairwise permutation I am reading a paper that is using permutations to compare the means of two different treatment groups (a nutrition study, took minimum - maximum) that have low sample sizes, and so the groups are ... • 13 37 views ### Random samples within the Central Limit Theorem - why select with permutation with repetition? "The central limit theorem states that if you have a population with mean μ and standard deviation σ and take sufficiently large random samples from the population with replacement, then the ... 33 views ### Given 9 coat hangers (3 groups), what's the probability If I had 9 coat hangers (for example: 4 green, 3 blue and 2 red), what would be the probability of the first four being [green, green, blue, blue], if I just chuck the coat hangers into the wardrobe ... 41 views ### Disagreement regarding the role of permutation/randomization/shuffling in permuation testing between two populations A colleague and I recently got into a conversation/disagreement regarding permutation testing (e.g. for comparing the mean value of two populations). While i thought i had a fairly good understanding ... 101 views ### Post-hoc test of two-way permutation ANOVA I have a 2x2 study design to test the interaction between two drugs, with a single variable outcome. The data are very much non-parametric and it seems that a good alternative to the traditional two-... • 1 1 vote 10 views ### How to correctly apply a PERMANOVA to a half-crossed experimental design I'm a biologist and conducted an experiment in which I took analytical samples of aphid infested plants and analyzed them using metabolomics. The plants were chosen based on their Genotype (... • 21 28 views ### Two equivalent ways of sampling with replacement The original context of this problem is for a derivation of the lookdown model https://projecteuclid.org/journals/annals-of-probability/volume-24/issue-2/A-countable-representation-of-the-Fleming-Viot-... 49 views ### Can I use a permutation test to test the null hypothesis ''The difference between two groups is X''? From what I read on permutation test, the null hypothesis is usually that there is no difference between the two groups. I want to test if the difference between the mean of the two groups is $\theta$ ... 26 views ### How many ways to order such that one group comes first? This question arose while playing a board game that uses cards. Imagine we have 11 cards total. 3 are red (R), 1 is blue (B), and 7 are yellow (Y). How many ways can we randomly order the 11 cards so ... • 103 75 views ### Schitt’s Creek David Ring Combination So I’ve been trying to figure this one out. On Schitt’s Creek, David wears various combinations of rings. He has four rings and he can arrange them on his ten fingers in various ways - each ring is ... 29 views ### Permuting data to seperate marginal and interaction effects I've read a bit on permutation tests for discovering interactions. Are there permutations I can perform on my dataset to try to separate the interactions from the marginals? In particular, I want to ... • 23 33 views ### Find an extreme point (of a convex polytope) with the minimum of a quadratic cost function For example, I am trying to do the following: Doubly stochastic matrices (i.e. matrices whose rows and columns sum to 1) form a polytope with permutation matrices as extreme points (Birkhoff–von ... • 261 53 views ### Quantitative/statistical comparison of two orderings/permutations There is a couple, say, Theresa and Robert. They assess their preferences on 5 books by ranking them from the most attractive to the least one. Theresa: [0,4,3,2,1]... • 2,483 32 views ### Probability of n persons picking multiple numbers from N numbers, that all choices are different Given $N$ distinct numbers, $n$ persons each picks $x_i > 0$ numbers, what is the probability $P$ of them all picking different numbers? The limiting case of $n > N$ is trivial, $P = 0$. Same ... • 33 37 views ### Intuition behind permutations + probability I'm new to stats so I'm struggling to grasp the intuition behind permutations + probability. Would anybody be able to help me with parts (b) and (c) of this question? Any help is greatly appreciated :)... 42 views ### What does it mean to "permute" a predictor in the context of random forest? I am reading the vignette for the R package randomForestExplainer. There the result accuracy_decrease (classification) is defined as mean decrease of prediction ... • 1,372 40 views ### Permutations - counting [closed] Four girls bought each a swimsuit , and they decided to share them . How many days are needed so that each girl will wear each swimsuit once ? 10 views ### One sample data with unusual structure, permutation test? I have N sets, each of them consisting of 1000 integer values. Typically, more than 90 % of values within a set is zero, thus median is zero. The nonzero elements are positive integers. We take N ... 160 views ### One sample permutation test on skewed data I have one sample of values, each of them is a z-score from mutually independent z-scored distributions. I aimed at testing that the mean value of the sample is larger than zero. Originally I wanted ... 10 views ### Distribution of a rate in a subsample after permutation of the total dataset I need to estimate the distribution of a rate in a subsample after permutation of the total dataset. I was wondering whether it's sufficient to use the binomial distribution with the global rate ... • 2,313 37 views ### Evaluate clustering accuracy based on an adjacency/similarity/connection matrix Description In the classification tasks, the classification accuracy is computed by accuracy=n_correct/n_total For example, if I have three samples, and the ... • 11 1 vote 65 views ### Interpretation of the adjusted rand-index I'm trying to understand the adjusted Rand index for a metric review that I'm doing. I found this question most helpful so far: Calculating the adjusted rand index? https://en.wikipedia.org/wiki/... • 805 335 views ### Adjusting P values for multiple comparisons using permutation tests I have a number of continuous predictors (biomarker measurements) which I would like to test for association with a binary outcome variable (disease status), adjusting for multiple comparisons. As ... • 31 1k views ### Finding most likely permutation [Hoping that this is the right Stackexchange site; inspired from a true story seen at work] Joe has a measuring instrument and $n$ objects to be measured (say, a scale and $n$ weights). He measures ... • 291 1 vote 76 views ### Probability that Secret Santa arrangement for couples will result in perfect pairings 4 couples, 8 people total, participate in a Secret Santa gift exchange. Call the people A, B, C, D, E, F, G, H. Assume A+B are a married couple. Likewise, assume C+D, E+F, and G+H are all couples. All ... • 2,180 99 views ### Probability that Secret Santa arrangement will result in perfect pairings for couples 4 couples, 8 people total, participate in a Secret Santa gift exchange. Call the people A, B, C, D, E, F, G, H. Assume A+B are a married couple. Likewise, assume C+D, E+F, and G+H are all couples. ... • 2,180 35 views ### Equal probability of random permutations For $n$ distinct elements $x_1, x_2, .., x_n$. If we generate a random permutation $\pi$ of length n (so of all elements), how to prove that for every $i$th element $\pi_i$ of it, the probability \$P(\... • 113 85 views ### All combinations for a King and Queen (coed) 2's Tournament Pool Sheet (N girls and N guys) Ok so I have N girls and N guys. I need to create a 2's beach volleyball coed tournament (also known as King and Queen style). I want a list (like Joe and Jill versus Donald and Melania...etc.) of all ... • 121
2,050
8,911
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.671875
3
CC-MAIN-2022-21
longest
en
0.927705
https://discuss.pytorch.org/t/simple-classifier-size-mismatch/94253
1,652,729,277,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662512229.26/warc/CC-MAIN-20220516172745-20220516202745-00528.warc.gz
265,531,833
5,439
# Simple Classifier size mismatch Size mismatched. I am a beginner and I try to use simple classifier I found online and use my own custom image. I have 2 class of RGB image with size of 224x224. When I run below code I get an mismatched size. May I know how the number came from? RuntimeError: size mismatch, m1: [2 x 119072], m2: [1568 x 2] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:290 I know 10 came from batch size but how about 100352? Below is my code and my error Hi, can you next time please provide your code in a code block and not post a screenshot of it. Makes it easier to debug Your problem is most likely with the input dimension of your fully connected layer `self.fc = nn.Linear(7*7*32, num_classes)` It seams like this network was made to run with RGB images of size 28x28. But since you provide it with images of size 224x224 the feature maps that get get passed into the fc layer are not of size 7x7x32 but 56x56x32 (if my calculations are correct). So to make it work again change: ``````self.fc = nn.Linear(7*7*32, num_classes) `````` to ``````self.fc = nn.Linear(56*56*32, num_classes) `````` Hi @RaLo4, Noted on the code part. May I know how did (7732) derived from? or from my case (565632)? How did from conv2(16,32,5) become (565632). From my understanding the output of conv2 became input of fc. Or am I wrong? yes you are correct. well, there is still the batch norm, relu and maxpool in between but more or less, yes. it help to calculate and trace the shape of your image tensor throughout your network. At the point your image tensor leaves `self.layer2` your image tensor is of shape: batch_size x 32 x 56 x 56. In this shape you could just pass it through another conv layer. But if you want to pass it through an fully connected layer, you need to flatten this tensor first. Your ``````out = out.reshape(out.size(0), -1) `````` does this for you. So if you flatten your tensor it becomes batch_size x 32 * 56 * 56 or batch_size x 100352 I explained this recently using some example code. If you want to check this out here is a link. If you are still not quite getting it or have any more questions, please feel free to ask more!
592
2,190
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2022-21
latest
en
0.920811
https://vn.tradingview.com/script/KjD8ByIQ-Augmented-Dickey-Fuller-ADF-mean-reversion-test/
1,638,274,472,000,000,000
text/html
crawl-data/CC-MAIN-2021-49/segments/1637964358973.70/warc/CC-MAIN-20211130110936-20211130140936-00428.warc.gz
649,358,163
56,299
Augmented Dickey–Fuller (ADF) mean reversion test 9662 lượt xem The augmented Dickey-Fuller test ( ADF ) is a statistical test for the tendency of a price series sample to mean revert. The current price of a mean-reverting series may tell us something about the next move (as opposed, for example, to a geometric Brownian motion). Thus, the ADF test allows us to spot market inefficiencies and potentially exploit this information in a trading strategy. Mathematically, the mean reversion property means that the price change in the next time period is proportional to the difference between the average price and the current price. The purpose of the ADF test is to check if this proportionality constant is zero. Accordingly, the ADF test statistic is defined as the estimated proportionality constant divided by the corresponding standard error . In this script, the ADF test is applied in a rolling window with a user-defined lookback length. The calculated values ​​of the ADF test statistic are plotted as a time series. The more negative the test statistic, the stronger the rejection of the hypothesis that there is no mean reversion. If the calculated test statistic is less than the critical value calculated at a certain confidence level (90%, 95%, or 99%), then the hypothesis of a mean reversion is accepted (strictly speaking, the opposite hypothesis is rejected). Input parameters: • Source - The source of the time series being tested. • Length - The number of points in the rolling lookback window. The larger sample length makes the ADF test results more reliable. • Maximum lag - The maximum lag included in the test, that defines the order of an autoregressive process being implied in the model. Generally, a non-zero lag allows taking into account the serial correlation of price changes. When dealing with price data, a good starting point is lag 0 or lag 1. • Confidence level - The probability level at which the critical value of the ADF test statistic is calculated. If the test statistic is below the critical value, it is concluded that the sample of the price series is mean-reverting. Confidence level is calculated based on MacKinnon (2010). • Show Infobox - If True, the results calculated for the last price bar are displayed in a table on the left. More formal background: Formally, the ADF test is a test for a unit root in an autoregressive process. The model implemented in this script involves a non-zero constant and zero time trend. The zero lag corresponds to the simple case of the AR(1) process, while higher order autoregressive processes AR(p) can be approached by setting the maximum lag of p. The null hypothesis is that there is a unit root, with the alternative that there is no unit root. The presence of unit roots in an autoregressive time series is characteristic for a non-stationary process. Thus, if there is no unit root, the time series sample can be concluded to be stationary, i.e., manifesting the mean-reverting property. • It should be noted that the ADF test tells us only about the properties of the price series now and in the past. It does not directly say whether the mean-reverting behavior will retain in the future. • The ADF test results don't directly reveal the direction of the next price move. It only tells wether or not a mean-reverting trading strategy can be potentially applicable at the given moment of time. • The ADF test is related to another statistical test, the Hurst exponent . The latter is available on TradingView as implemented by balipour, QuantNomad and DonovanWall. • The ADF test statistics is a negative number. However, it can take positive values, which usually corresponds to trending markets (even though there is no statistical test for this case). • Rigorously, the hypothesis about the mean reversion is accepted at a given confidence level when the value of the test statistic is below the critical value. However, for practical trading applications, the values which are low enough - but still a bit higher than the critical one - can be still used in making decisions. Examples: The VIX volatility index is known to exhibit mean reversion properties ( volatility spikes tend to fade out quickly). Accordingly, the statistics of the ADF test tend to stay below the critical value of 90% for long time periods. The opposite case is presented by BTCUSD . During the same time range, the bitcoin price showed strong momentum - the moves away from the mean did not follow by the counter-move immediately, even vice versa. This is reflected by the ADF test statistic that consistently stayed above the critical value (and even above 0). Thus, using a mean reversion strategy would likely lead to losses. If you enjoy using my scripts, consider becoming a supporter: https://www.buymeacoffee.com/tbiktag A word of caution: always be aware of the risks and do not interpret data produced by the script or contained in the preview chart as trading advice. Mã nguồn mở Với tinh thần TradingView, tác giả của tập lệnh này đã xuất bản nó dưới dạng mã nguồn mở, vì vậy các nhà giao dịch có thể hiểu và xác minh nó. Chúc mừng tác giả! Bạn có thể sử dụng mã này miễn phí, nhưng việc sử dụng lại mã này trong một ấn phẩm chịu sự điều chỉnh của Nội quy nội bộ. Bạn có thể yêu thích nó để sử dụng nó trên biểu đồ. Bạn muốn sử dụng tập lệnh này trên biểu đồ?
1,139
5,361
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.15625
3
CC-MAIN-2021-49
latest
en
0.89557
https://de.mathworks.com/matlabcentral/cody/problems/60-the-goldbach-conjecture/solutions/1413148
1,606,149,712,000,000,000
text/html
crawl-data/CC-MAIN-2020-50/segments/1606141163411.0/warc/CC-MAIN-20201123153826-20201123183826-00067.warc.gz
254,801,020
17,049
Cody Problem 60. The Goldbach Conjecture Solution 1413148 Submitted on 10 Jan 2018 by RAHUL KOTTATH This solution is locked. To view this solution, you need to provide a solution of the same size or smaller. Test Suite Test Status Code Input and Output 1   Pass nList = 28:6:76; for i = 1:length(nList) n = nList(i); [p1,p2] = goldbach(n) assert(isprime(p1) && isprime(p2) && (p1+p2==n)); end p1 = 5 p2 = 23 p1 = 3 p2 = 31 p1 = 3 p2 = 37 p1 = 3 p2 = 43 p1 = 5 p2 = 47 p1 = 5 p2 = 53 p1 = 3 p2 = 61 p1 = 3 p2 = 67 p1 = 3 p2 = 73 2   Pass nList = [18 20 22 100 102 114 1000 2000 36 3600]; for i = 1:length(nList) n = nList(i); [p1,p2] = goldbach(n) assert(isprime(p1) && isprime(p2) && (p1+p2==n)); end p1 = 5 p2 = 13 p1 = 3 p2 = 17 p1 = 3 p2 = 19 p1 = 3 p2 = 97 p1 = 5 p2 = 97 p1 = 5 p2 = 109 p1 = 3 p2 = 997 p1 = 3 p2 = 1997 p1 = 5 p2 = 31 p1 = 7 p2 = 3593 Community Treasure Hunt Find the treasures in MATLAB Central and discover how the community can help you! Start Hunting!
429
988
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.578125
4
CC-MAIN-2020-50
latest
en
0.578675
https://optimization.cbe.cornell.edu/index.php?title=McCormick_envelopes&oldid=3273
1,670,495,009,000,000,000
text/html
crawl-data/CC-MAIN-2022-49/segments/1669446711286.17/warc/CC-MAIN-20221208082315-20221208112315-00207.warc.gz
491,087,248
13,412
McCormick envelopes Author: Susan Urban (smu29) (SYSEN 5800 Fall 2021) Introduction Optimization of a non-convex function f(x) is challenging since it may have multiple locally optimal solutions or no solution and it can take a significant amount of time, resources, and effort to determine if the solution is global or the problem has no feasible solution. According to Castro1, "Gradient based solvers [are] unable to certify optimality". Different techniques are used to address this challenge depending on the characteristics of the problem. One technique used is convex envelopes2: Given a non-convex function f(x), g(x) is a convex envelope of f(x) for X ${\displaystyle \in }$ S if: ·      g(x) is convex under-estimator of f(x) ·      g(x)>=h(x) for all convex under-estimators h(x) McCormick Envelopes In particular, for bilinear (e.g., x*y, x+y) non-linear programming (NLP) problems3, the McCormick Envelope4 is a type of convex relaxation used for optimization. In case of an NLP, a linear programming (LP) relaxation is derived by replacing each bilinear term with a new variable and adding four sets of constraints. In the case of a mixed-integer linear programming (MINLP), a MILP relaxation is derived. This strategy is known as McCormick relaxation. The LP solution gives a lower bound and any feasible solution gives an upper bound. As noted by Scott et al5, "McCormick envelopes are attractive due to their recursive nature of their definition, which affords wide applicability and easy implementation computationally.  Furthermore, these relaxations are typically stronger than those resulting from convexification or linearization procedures." Derivation of McCormick Envelopes The following is a derivation of the McCormick Envelopes:2 ${\displaystyle Let\ w=xy}$ ${\displaystyle x^{L}\leq x\leq x^{U}}$ ${\displaystyle y^{L}\leq y\leq y^{U}}$ where ${\displaystyle x^{L},x^{U},y^{L},y^{U}}$are   upper  and   lower  bound  values  for  ${\displaystyle x}$ and ${\displaystyle y}$, respectively. ${\displaystyle a=\left(x-x^{L}\right)}$ ${\displaystyle b=\left(y-y^{L}\right)}$ ${\displaystyle a*b\geq 0}$ ${\displaystyle a*b=\left(x-x^{L}\right)\left(y-y^{L}\right)=xy-x^{L}y-xy^{L}+x^{L}y^{L}\geq 0}$ ${\displaystyle w\geq x^{L}y+xy^{L}-x^{L}y^{L}}$ ${\displaystyle a=\left(x^{U}-x\right)}$ ${\displaystyle b=\left(y^{U}-y\right)}$ ${\displaystyle w\geq x^{U}y+xy^{U}-x^{U}y^{U}}$ ${\displaystyle a=\left(x^{U}-x\right)}$ ${\displaystyle b=\left(y-y^{L}\right)}$ ${\displaystyle w\leq x^{U}y+xy^{L}-x^{U}y^{L}}$ ${\displaystyle a=\left(x-x^{L}\right)}$ ${\displaystyle b=\left(y^{U}-y\right)}$ ${\displaystyle w\leq xy^{U}+x^{L}y-x^{L}y^{U}}$ The under-estimators of the function are represented by: ${\displaystyle w\geq x^{L}y+xy^{L}-x^{L}y^{L}}$ ${\displaystyle w\geq x^{U}y+xy^{U}-x^{U}y^{U}}$ The over-estimators of the function are represented by: ${\displaystyle w\leq x^{U}y+xy^{L}-x^{U}y^{L}}$ ${\displaystyle w\leq xy^{U}+x^{L}y-x^{L}y^{U}}$ Example: Convex Relaxation The following shows the relaxation of a non-convex problem:2 Original non-convex problem: ${\displaystyle \min Z=\textstyle \sum _{i=1}\sum _{j=1}c_{i,j}x_{i}x_{j}+g_{0}(x)}$ ${\displaystyle s.t.\sum _{i=1}\sum _{j=1}c_{i,j}^{l}x_{i}x_{j}+g_{l}(x)\leq 0,\forall l\in L}$ ${\displaystyle x^{L}\leq x\leq x^{U}}$ Replacing ${\displaystyle u_{i,j}=x_{i}x_{j}}$ we obtain a relaxed, convex problem: ${\displaystyle \min Z=\textstyle \sum _{i=1}\sum _{j=1}c_{i,j}u_{i,j}+g_{0}(x)}$ ${\displaystyle s.t.\sum _{i=1}\sum _{j=1}c_{i,j}^{l}u_{i,j}+g_{l}(x)\leq 0,\forall l\in L}$ ${\displaystyle u_{i,j}\geq x_{i}^{L}x_{j}+x_{i}x_{j}^{L}-{x_{i}}^{L}{x_{j}}^{L},\forall i,j\ }$ ${\displaystyle u_{i,j}\geq x_{i}^{U}x_{j}+x_{i}x_{j}^{U}-{x_{i}}^{U}{x_{j}}^{U},\forall i,j\ }$ ${\displaystyle u_{i,j}\geq x_{i}^{L}x_{j}+x_{i}x_{j}^{U}-{x_{i}}^{L}{x_{j}}^{U},\forall i,j\ }$ ${\displaystyle u_{i,j}\geq x_{i}^{U}x_{j}+x_{i}x_{j}^{L}-{x_{i}}^{U}{x_{j}}^{L},\forall i,j\ }$ ${\displaystyle x^{L}\leq x\leq x^{U},\ \ u^{L}\leq u\leq u^{U}}$ Good bounds are essential to focus and minimize the feasible solution space and to reduce the number of iterations to find the optimal solution. "Good bounds may be obtained either by inspection or solving the optimization problem to minimize (maximize) x, subject to the same constraints as the original problem."2 As discussed by Hazaji6, global optimization solvers implement bound contraction techniques and once the boundary is optimized, domain partitioning may become necessary. Spatial branch and bound schemes7,8 are among the most effective partitioning methods in global optimization. By dividing the domain of a given variable, the solver is able to divide the original domain into smaller regions, further tightening the convex relaxations of each partition. Example: Numerical ${\displaystyle min\ Z=xy+6x+y}$ ${\displaystyle s.t.\ xy\leq 18}$ ${\displaystyle 0\leq x\leq 10}$ ${\displaystyle 0\leq y\leq 2}$ ${\displaystyle Let\ w=xy}$ ${\displaystyle min\ Z=-w+6x+y}$ ${\displaystyle s.t.\ w\leq 18}$ ${\displaystyle w\geq 0}$ ${\displaystyle w\geq 10y+2x-20}$ ${\displaystyle w\leq 10y}$ ${\displaystyle w\leq 2x}$ Using GAMS, the solution is z= -24, x=6, y=2. GAMS code sample: variable z; positive variable x, y, w; equations  obj, c1, c2, c3, c4, c5 ; obj..    z =e= -w -2*x ; c1..     w =l= 12 ; c2..     w =g= 0; c3..     w =g= 6*y +3*x -18; c4..     w =l= 6*y; c5..     w =l= 3*x; x.up = 10; x.lo =0; y.up = 2; y.lo = 0; model course5800 /all/; option mip = baron; option optcr = 0; solve course5800 minimizing z using mip ; Application "Bilinear expressions are the most common non-convex components in mathematical formulations modeling problems in: 6 Chemical engineering 9 Process network problems10" Computer vision 11 Super resolution imaging 12 Conclusion McCormick Envelopes provide a relaxation technique for bilinear non-convex nonlinear programming problems3. Since non-convex NLPs are challenging to solve and may require a significant amount of time, resources, and effort to determine if the solution is global or the problem has no feasible solution. McCormick Envelopes provide a straightforward technique that may be applied to bilinear expressions from multiple application areas. References 1. Castro, Pedro. "A Tighter Piecewise McCormick Relaxation for Bilinear Problems." (n.d.): June 2014. Web. 6 June 2015. Retrieved from: <http://minlp.cheme.cmu.edu/2014/papers/castro.pdf 2. You, F. (2021). Notes for a lecture on Mixed Integer Non-Linear Programming (MINLP). Archives for SYSEN 5800 Computational Optimization (2021FA), Cornell University, Ithaca, NY. 3. Dombrowski, J. (2015, June 7). Northwestern University Open Text Book on Process Optimization, McCormick Envelopes Retrieved from https://optimization.mccormick.northwestern.edu/index.php/McCormick_envelopes 4. McCormick, Garth P.  Computability of Global Solutions To Factorable Nonconvex Solutions: Part I: Convex Underestimating Problems 5. Scott, J. K. Stuber, M. D. & Barton, P. I. (2011). Generalized McCormick Relaxations. Journal of Global Optimization, Vol. 51, Issue 4, 569-606 doi: 10.1007/s10898-011-9664-7 6. Hijazi, H., Perspective Envelopes for Bilinear Functions, unpublished manuscript, retrieved from: Perspective Envelopes for Bilinear Functions 7. Androulakis, I., Maranas, C., Floudas, C.: alphabb: A global optimization method for general constrained nonconvex problems. Journal of Global Optimization 7(4), 337{363 (1995) 8. Smith, E., Pantelides, C.: A symbolic reformulation/spatial branch-and-bound algorithm for the global optimisation of nonconvex fMINLPsg. Computers & Chemical Engineering 23(4), 457 { 478 (1999) 9. Geunes, J., Pardalos, P.: Supply chain optimization, vol. 98. Springer Science & BusinessMedia (2006) 10. Nahapetyan, A.: Bilinear programming: applications in the supply chain management bilinear programming: Applications in the supply chain management. In: C.A. Floudas, P.M. Pardalos (eds.) Encyclopedia of Optimization, pp. 282{288. Springer US (2009) 11. Chandraker, M. & Kriegman, D. (n.d.): Globally Optimal Bilinear Programming for Computer Vision Applications. University of San Diego, CA. Retrieved from: https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwis69Say7H0AhXWp3IEHVGLBW4QFnoECC4QAQ&url=http%3A%2F%2Fvision.ucsd.edu%2F~manu%2Fpdf%2Fcvpr08_bilinear.pdf&usg=AOvVaw1h-cpWO81s41howVxYKq7F
2,741
8,500
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 47, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.25
3
CC-MAIN-2022-49
latest
en
0.849115
https://it.mathworks.com/matlabcentral/answers/1743920-speeding-up-integration-within-nested-for-loops
1,718,266,035,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198861342.74/warc/CC-MAIN-20240613060639-20240613090639-00481.warc.gz
299,048,605
33,596
# Speeding up integration within nested for loops 14 visualizzazioni (ultimi 30 giorni) Aravindh Shankar il 20 Giu 2022 I have to find the elements of the following matrix, each element of which is where K itself involves an integral where f and q are known (I've left them out so as not to overload the question with math); they involve trigonometric functions of theta and square roots of theta,x,x'. To find J I tried two approaches: 1) Numerical integration: I defined K(x,x') as K = @(x,x') integral(@theta ..Integrand(theta,x,x').., 0, pi) and J = @(x,limit1,limit2) integral(@(x') K(x,x'), limit1, limit2) and evaluate them as for i = 1:numel(x) for j = 1:numel(x) J_arr(i,j) = (1/(x(j+1) - x(j)))*J(x(i),x_arr(j),x_arr(j+1)); end end This works but the issue is that it is too slow for my purpose - the array x has 2047 elements so this script is carrying out both integrals 2047*2047 times. Is there a faster way to do this integration numerically? (Have thought along the lines of integral2, arrayfun so far). 2) Symbolic integration: Define K and J as above, but try to symbolically integrate them before the loops so that inside the loop I would just be substituting values for x_i and the limits. The issue with this is that the functions f and q in the definition of K are sufficiently complicated that Matlab has trouble finding the analytic symbolic integral, even for the first inner (theta) integration. Any help in speeding up the numerical calculation/implementing a symbolic integration would be much appreciated! Thanks ##### 0 CommentiMostra -2 commenti meno recentiNascondi -2 commenti meno recenti Accedi per commentare. ### Risposta accettata Johan il 20 Giu 2022 One trick I know to increase speed of calculation is to vectorize your code. I'm not sure that would work for you but you could try using the ArrayValued option of the integral function. Also it looks like the constant factor in Jij can be computed as an array once and multiplied with the result of the integral afterward. x = rand(10,1); Kint = @(x1,x2) integral(@(theta) sin(theta+x2+x1), 0, pi,'ArrayValued',true); %I put in a filler function here Jint = @(x1,limit1,limit2) integral(@(x2) Kint(x1,x2), limit1, limit2,'ArrayValued',true); prefactor = ones(numel(x),1).*(1./(x(2:end) - x(1:end-1)))'; %Compute prefactor for all ij J_arr = zeros(numel(x),numel(x)-1); %initialize result array for iJ = 1:numel(x)-1 J_arr(:,iJ) = Jint(x, x(iJ), x(iJ+1)); end J_arr.*prefactor ans = 10×9 1.5154 1.2476 1.0782 1.3314 1.3181 0.7666 1.1930 1.3678 1.4834 0.9472 0.5979 0.3974 0.7129 0.6969 0.0473 0.5474 0.7552 0.9151 0.9196 0.5679 0.3666 0.6839 0.6678 0.0159 0.5177 0.7263 0.8877 0.5684 0.1938 -0.0125 0.3196 0.3030 -0.3635 0.1486 0.3628 0.5382 1.4513 1.1705 0.9956 1.2592 1.2454 0.6764 1.1160 1.2966 1.4191 0.5364 0.1604 -0.0459 0.2869 0.2703 -0.3964 0.1158 0.3300 0.5064 0.2857 -0.0973 -0.3023 0.0330 0.0164 -0.6450 -0.1375 0.0756 0.2576 1.4131 1.1251 0.9472 1.2165 1.2025 0.6240 1.0707 1.2545 1.3808 0.7013 0.3336 0.1283 0.4563 0.4399 -0.2241 0.2864 0.4994 0.6703 1.6477 1.4118 1.2564 1.4840 1.4719 0.9648 1.3574 1.5176 1.6167 Hope this helps, ##### 5 CommentiMostra 3 commenti meno recentiNascondi 3 commenti meno recenti Johan il 5 Lug 2022 The integral function seems to have trouble reaching its default tolerance value for your function, I reduced the tolerance to 1e-4 I don't know if that is satisfactory for you or not. With larger tolerance, your code is still faster for nested than for arrayed for small input size, however, I see the nested way seems to increase quadraticaly with input size whereas the arrayed increases somewhat linearly: A small optimisation I did was un-nest your functions in the Kint integration. I noticed Matlab tends to slow down then you have function calling function I'm not sure why. Its much less readable and you should be extra careful the equation is right but that leads to faster computation. step = [0.451,0.251,0.151,0.1]; timed = zeros(2,2); N = zeros(2,1); for i_step = 1:length(step) X = -0.999:step(i_step):1; %evaluate only once because of computation time limit tic nested(X); timed(i_step, 1) = toc; tic arrayed(X); timed(i_step, 2) = toc; % Version I used on my computer for the graph above % funnest = @() nested(X); % funarr = @() arrayed(X); % timed(i_step, 1) = timeit(funnest); % timed(i_step, 2) = timeit(funarr); N(i_step) = size(X,2); end plot(N,timed,'-*') legend({'nested','arrayed'}) xlabel('X array size') ylabel('Computing time (s)') axis square function nested(x) %Constants mstar = 1; kappa = 1; g_v = 1; el = 1.38*1e4; r_s = 5; n = mstar^2*el^4/(pi*kappa^2*r_s^2); k_F = sqrt(2*pi*n); E_F = (k_F^2)/(2*mstar); q_TF = 2*g_v*el^2*mstar/kappa; Kint = @(x_1,x_2) (integral(@(theta) ((mstar/(2*pi^2)).*... (2*pi*el^2./(kappa.*(sqrt((2*mstar*E_F*(x_1 + x_2 + 2)) - (4*mstar*E_F*sqrt(x_1 + 1)*sqrt(x_2 + 1).*cos(theta)))))).*... (1 - ((sqrt(2*pi*el^2*n*(sqrt((2*mstar*E_F*(x_1 + x_2 + 2)) -... (4*mstar*E_F*sqrt(x_1 + 1)*sqrt(x_2 + 1).*cos(theta))))/(kappa*mstar))).^2 ... ./(((sqrt(2*pi*el^2*n*(sqrt((2*mstar*E_F*(x_1 + x_2 + 2)) -... (4*mstar*E_F*sqrt(x_1 + 1)*sqrt(x_2 + 1).*cos(theta))))/(kappa*mstar))).*... sqrt(1 + ((sqrt((2*mstar*E_F*(x_1 + x_2 + 2)) - (4*mstar*E_F*sqrt(x_1 + 1)*sqrt(x_2 + 1).*cos(theta))))/q_TF))).*... (((sqrt(2*pi*el^2*n*(sqrt((2*mstar*E_F*(x_1 + x_2 + 2)) - (4*mstar*E_F*sqrt(x_1 + 1)*sqrt(x_2 + 1).*... cos(theta))))/(kappa*mstar))).*sqrt(1 + ((sqrt((2*mstar*E_F*(x_1 + x_2 + 2)) - ... (4*mstar*E_F*sqrt(x_1 + 1)*sqrt(x_2 + 1).*cos(theta))))/q_TF))) + E_F*abs(x_1) + E_F.*abs(x_2))))))... ,0,pi,'ArrayValued',true,'RelTol',0,'AbsTol',1e-4)); Jint = @(x_1,lim1,lim2) (integral(@(x_2) Kint(x_1,x_2),lim1,lim2,'ArrayValued',true,'RelTol',0,'AbsTol',1e-4)); J_arr = zeros(numel(x)-1); for i = 1:length(x)-1 x_i_bar = (x(i) + x(i+1))/2; for j = 1:numel(x)-1 J_arr(i,j) = (1./(x(j+1) - x(j))).*(Jint(x_i_bar,x(j),x(j+1))); end end end function arrayed(x) %Constants mstar = 1; kappa = 1; g_v = 1; el = 1.38*1e4; r_s = 5; n = mstar^2*el^4/(pi*kappa^2*r_s^2); k_F = sqrt(2*pi*n); E_F = (k_F^2)/(2*mstar); q_TF = 2*g_v*el^2*mstar/kappa; Kint = @(x_1,x_2) (integral(@(theta) ((mstar/(2*pi^2)).*... (2*pi*el^2./(kappa.*(sqrt((2*mstar*E_F*(x_1 + x_2 + 2)) - (4*mstar*E_F*sqrt(x_1 + 1)*sqrt(x_2 + 1).*cos(theta)))))).*... (1 - ((sqrt(2*pi*el^2*n*(sqrt((2*mstar*E_F*(x_1 + x_2 + 2)) -... (4*mstar*E_F*sqrt(x_1 + 1)*sqrt(x_2 + 1).*cos(theta))))/(kappa*mstar))).^2 ... ./(((sqrt(2*pi*el^2*n*(sqrt((2*mstar*E_F*(x_1 + x_2 + 2)) -... (4*mstar*E_F*sqrt(x_1 + 1)*sqrt(x_2 + 1).*cos(theta))))/(kappa*mstar))).*... sqrt(1 + ((sqrt((2*mstar*E_F*(x_1 + x_2 + 2)) - (4*mstar*E_F*sqrt(x_1 + 1)*sqrt(x_2 + 1).*cos(theta))))/q_TF))).*... (((sqrt(2*pi*el^2*n*(sqrt((2*mstar*E_F*(x_1 + x_2 + 2)) - (4*mstar*E_F*sqrt(x_1 + 1)*sqrt(x_2 + 1).*... cos(theta))))/(kappa*mstar))).*sqrt(1 + ((sqrt((2*mstar*E_F*(x_1 + x_2 + 2)) - ... (4*mstar*E_F*sqrt(x_1 + 1)*sqrt(x_2 + 1).*cos(theta))))/q_TF))) + E_F*abs(x_1) + E_F.*abs(x_2))))))... ,0,pi,'ArrayValued',true,'RelTol',0,'AbsTol',1e-4)); Jint = @(x_1,lim1,lim2) (integral(@(x_2) Kint(x_1,x_2),lim1,lim2,'ArrayValued',true,'RelTol',0,'AbsTol',1e-4)); J_arr = zeros(numel(x)-1); x_i_bar = (x(1:end-1) + x(2:end))/2; for j = 1:numel(x)-1 J_arr(:,j) = (1./(x(2:end)-x(1:end-1))).*Jint(x_i_bar,x(j),x(j+1)); end end Aravindh Shankar il 5 Lug 2022 Modificato: Aravindh Shankar il 5 Lug 2022 I agree with all your comments Johan, glad that it has been improved. Thanks for your help with optimizing the code, I'm really grateful for your effort! As a future step I will try parallelizing using parfor once I have server access to see if that improves speed as well. Accedi per commentare. ### Categorie Scopri di più su Special Values in Help Center e File Exchange R2021a ### Community Treasure Hunt Find the treasures in MATLAB Central and discover how the community can help you! Start Hunting! Translated by
3,083
7,919
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.375
3
CC-MAIN-2024-26
latest
en
0.891237
https://stackcodereview.com/project-euler-55-lychrel-numbers-2/
1,686,386,176,000,000,000
text/html
crawl-data/CC-MAIN-2023-23/segments/1685224657144.94/warc/CC-MAIN-20230610062920-20230610092920-00572.warc.gz
593,242,334
13,939
# Project Euler #55 – Lychrel numbers Posted on Problem If we take 47, reverse and add, 47 + 74 = 121, which is palindromic. Not all numbers produce palindromes so quickly. For example, 349 + 943 = 1292, 1292 + 2921 = 4213 4213 + 3124 = 7337 That is, 349 took three iterations to arrive at a palindrome. Although no one has proved it yet, it is thought that some numbers, like 196, never produce a palindrome. A number that never forms a palindrome through the reverse and add process is called a Lychrel number. Due to the theoretical nature of these numbers, and for the purpose of this problem, we shall assume that a number is Lychrel until proven otherwise. In addition you are given that for every number below ten-thousand, it will either (i) become a palindrome in less than fifty iterations, or, (ii) no one, with all the computing power that exists, has managed so far to map it to a palindrome. In fact, 10677 is the first number to be shown to require over fifty iterations before producing a palindrome: 4668731596684224866951378664 (53 iterations, 28-digits). Surprisingly, there are palindromic numbers that are themselves Lychrel numbers; the first example is 4994. How many Lychrel numbers are there below ten-thousand? This code works with NodeJS. It is also using ES6 features. So no SpiderMonkey nonsense now. ``````\$ node -v v0.12.7 \$ node --harmony p55.js 249 `````` Any and all feedback is appreciated. ``````// Like Python's range. function* range(start, stop, step){ if (arguments.length == 0){ start = 0; stop = null; } if (arguments.length == 1){ stop = start; start = 0; } if (arguments.length < 3) step = 1; start = parseInt(start); if (Number.isNaN(start)) start = 0; stop = parseInt(stop); if (Number.isNaN(stop)) stop = null; step = parseInt(step); if (Number.isNaN(step)) step = 1; if (stop === null){ while (true){ yield start; start += step; } }else if (step == 0){ while (true){ yield start; } }else{ if (step > 0){ for (var number = start; number < stop; number += step){ yield number; } }else{ for (var number = start - 1; number >= stop; number += step){ yield number; } } } } function is_palindrome(string){ var str_len = string.length - 1; for (var index of range(string.length / 2)){ if (string[index] != string[str_len - index]){ return false; } } return true; } function reverse(string){ var new_string = ''; for (var index of range(string.length, 0, -1)){ new_string += string[index]; } return new_string; } function is_lychrel(number, recursions){ if (recursions == 0) return false; if (arguments.length == 1) recursions = 50; number += parseInt(reverse(number.toString())); if (is_palindrome(number.toString())) return true; return is_lychrel(number, recursions - 1); } function main(){ var total = 0; for (var number of range(10000)){ if (! is_lychrel(number)){ total += 1; } } } console.log(main()); `````` Solution Typically in JavaScript, when calling `isNaN`, it is just typed: ``````isNaN(...); `````` Not as a call from the `Number` class: ``````Number.isNaN(...); `````` Also, by calling it the way that you did, you are being inconsistent. If you are going to call `isNaN` through `Number`, then why don’t you also call `parseInt` through `Number`? The function `parseInt` takes a second parameter `radix` which is the base of the first parameter. According to the MDN, you should always specify the radix parameter. Why do you call `parseInt` on the arguments of `range`? In every single point of your code where you are calling this function, you are passing numerical arguments. Calling this functions is just a waste. In places like these: ``````if (arguments.length == 0){ `````` and ``````if (arguments.length == 1){ `````` You should be using the `===` comparison operator rather than the `==` comparison operator. 1. It is good JavaScript practice. 2. It may be a little faster. As shown in this SO post, a much, much simpler way to reverse a string would be to do this: ``````return string.split("").reverse().join(""); `````` This could potentially be faster, too, as this is using JavaScript’s built in functions/methods/features. Your `is_palindrome` function confuses me a little. A palindrome is defined as a word that is spelled the exact same as it’s reverse. Therefore, why aren’t you using the `reverse` function that you already made? Now, your `is_palindrome` function becomes this: ``````return string == reverse(string); `````` This is much more simple than whatever you were doing. The naming case for JavaScript is `camelCase`, not `snake_case`. You need to change the name of these functions: ``````is_palindrome --> isPalindrome `````` and ``````is_lychrel --> isLychrel `````` And, make sure you change the name of some of your variables to `camelCase` too. You are putting too much work on your `isLychrel` function. Here, you check to see that if there is no `recursions` specified, set it to 50: ``````if (arguments.length == 1) recursions = 50; `````` However, this check is going to happen every single iteration (since the function is recursive). Is is really that difficult to just pass 50 to it in the initial time you call it in `main`? If you did this and removed the check from the function, this could greatly improve your performance.
1,364
5,286
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.5625
4
CC-MAIN-2023-23
latest
en
0.776007
https://www.jiskha.com/display.cgi?id=1237990707
1,503,257,732,000,000,000
text/html
crawl-data/CC-MAIN-2017-34/segments/1502886106984.52/warc/CC-MAIN-20170820185216-20170820205216-00010.warc.gz
920,148,255
4,053
Data Management posted by . The length of sunfish in Grenadier pond are normally distributed with a mean of 15cm. Find the standar deviation to one decimal place. If 91% of these fish have a length less than 17.7 cm. Similar Questions 1. Data Management - 12 The length of sunfish in Grenadier pond are normally distributed with a mean of 15cm. Find the standar deviation to one decimal place. If 91% of these fish have a length less than 17.7 cm. 2. statistics A manufacturer knows that their items have a normally distributed length, with a mean of 7.3 inches, and standard deviation of 0.6 inches. If one item is chosen at random, what is the probability that it is less than 5.5 inches long? 3. math Can I gt some help on these questions! 1.Research subjects that are divided up based on gender, age, etc. to be compared to people of the same group is what type of design experiment? 4. statistics The length of country and western songs is normally distributed and has a mean of 200 seconds and a standard deviation of 30 seconds. Find the probability that a random selection of 9 songs will have mean length of 186.30 seconds or … 5. math Using the data in the table, calculate the mean, range, variance, and standard deviation, and then answer questions e and f. Round the variance and standard deviation to one decimal place. a. Mean b. Range c. Variance d. Standard deviation … 6. Statistics Assume that adults have IQ scores that are normally distributed with a mean of 100 and a standard deviation of 15. Find the IQ score separating the top 37% from the others. Answer in one decimal place. 7. Statistics I am a big shot research scientist at Washington University where I am using 2ebrafish to study neuronal migration. My graduate students measured 40 fish and found that the mean length of the fish sampled was 3.7 cm with a standard … 8. fnu The length of country and western songs is normally distributed and has a mean of 200 seconds and a standard deviation of 30 seconds. Find the probability that a random selection of 9 songs will have mean length of 186.30 seconds or … 9. Statistics An automatic machine in a manufacturing process is operating properly if the length of an important subcomponent is normally distributed with a mean μ = 80 cm and a standard deviation σ = 2 cm. a) Find the probability that … 10. math Bass: The bass in Clear Lake have weights that are normally distributed with a mean of 2.3 pounds and a standard deviation of 0.6 pounds. (a) If you catch one random bass from Clear Lake, find the probability that it weighs less than … More Similar Questions
603
2,614
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.921875
3
CC-MAIN-2017-34
latest
en
0.913264
http://stackoverflow.com/questions/tagged/division?page=21&sort=active&pagesize=15
1,448,601,487,000,000,000
text/html
crawl-data/CC-MAIN-2015-48/segments/1448398447913.86/warc/CC-MAIN-20151124205407-00107-ip-10-71-132-137.ec2.internal.warc.gz
219,069,643
18,480
# Tagged Questions In mathematics, division (÷) is an arithmetic elementary operation. 968 views ### Java unsigned division without casting to long? I have written an interpreter that requires me to perform 32-bit division of unsigned integers. In Java, I can do this as: reg[a] = (int) ((reg[b] & 0xFFFFFFFFL) / (reg[c] & 0xFFFFFFFFL)); ... 19k views ### Sum and Division example (Python) >>> sum((1, 2, 3, 4, 5, 6, 7)) 28 >>> 28/7 4.0 >>> sum((1,2,3,4,5,6,7,8,9,10,11,12,13,14)) 105 >>> 105/7 15.0 >>> How do I automate this sum and division ... 2k views ### Java int division confusing me I am doing very simple int division and I am getting odd results. This code prints 2 as expected: public static void main(String[] args) { int i = 200; int hundNum = i / 100; ... 36k views ### SQL Server, division returns zero Here is the code I'm using in the example: PRINT @set1 PRINT @set2 SET @weight= @set1 / @set2; PRINT @weight Here is the result: 47 638 0 I would like to know why it's returning 0 instead ... 588 views ### Why does this division result in zero? I was writing this code in C when I encountered the following problem. #include <stdio.h> int main() { int i=2; int j=3; int k,l; float a,b; k=i/j*j; l=j/i*i; a=i/j*j; ... 52k views ### Python integer division yields float Python 3.1 (r31:73574, Jun 26 2009, 20:21:35) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> 2/2 1.0 Is this intended? I ... 6k views ### how to divide my asp.net web page into 2 webpages? I am writing a webPage using asp.net and c#. I want to divide my webpage into 2 columns such as in one I will have buttons that change the view in the other column, without "stepping on" the content ... 4k views ### Change displayable labels for a JSlider? I have a JSlider with a min of 0 and a max of 10,000. I have the major tick marks set at 1,000. If I were to paint the labels now they would show up as 0, 1000, 2000, 3000, 4000, etc. What I would ... 2k views ### I need a fast 96-bit on 64-bit specific division algorithm for a fixed-point math library I am currently writing a fast 32.32 fixed-point math library. I succeeded at making adding, subtraction and multiplication work correctly, but I am quite stuck at division. A little reminder for ... 11k views ### dividing results of two PL/SQL select statements The results of my two PL/SQL select statements are integers 27 & 50, I want their division (27/50) 0.54 at output...how to do that? I have tried select * from ((select....)/(select ...)) but it ... 842 views ### Inaccurate division of doubles (Visual C++ 2008) I have some code to convert a time value returned from QueryPerformanceCounter to a double value in milliseconds, as this is more convenient to count with. The function looks like this: double ... 573 views ### How does division work in MIX? Can someone explain to me how division in MIX (from TAOCP by Knuth) works on a byte-to-byte basis? rA = |-| . . . .0| rX = |+|1235|0|3|1| The memory location 1000 contains |-|0|0|0|2|0|. When ... 7k views ### Double value returns 0 Here's an example: Double d = (1/3); System.out.println(d); This returns 0, not 0.33333... as it should. Does anyone know?
946
3,249
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.859375
3
CC-MAIN-2015-48
latest
en
0.847308
https://lists.defectivebydesign.org/archive/html/help-octave/2003-01/msg00130.html
1,686,338,470,000,000,000
text/html
crawl-data/CC-MAIN-2023-23/segments/1685224656788.77/warc/CC-MAIN-20230609164851-20230609194851-00102.warc.gz
416,985,441
2,916
help-octave [Top][All Lists] ## RE: A Simple Matrix Construction Question From: David Pruitt Subject: RE: A Simple Matrix Construction Question Date: Thu, 23 Jan 2003 13:34:26 -0600 ```This is known as "Tony's trick" from the Matlab support website. My question is: why does this work? -----Original Message----- Sent: Thursday, January 23, 2003 1:28 PM To: Craig Stoudt Subject: Re: A Simple Matrix Construction Question I think the preferred way is X = Y(:,ones(M,1)); Heber On Thursday, January 23, 2003, at 12:35 PM, Craig Stoudt wrote: > There is probably a really simple way to do the > following, but I'm suffering from a mental block. > > I have a row array 'Y' of arbitrary length, 1xn. > > I want to create an mxn matrix where all of the rows > are the same as 'Y'. > > Of course, I want to do this without resorting to > loops. > > > __________________________________________________ > Do you Yahoo!? > http://mailplus.yahoo.com > > > > ------------------------------------------------------------- > Octave is freely available under the terms of the GNU GPL. > > Octave's home on the web: http://www.octave.org > How to fund new projects: http://www.octave.org/funding.html > Subscription information: http://www.octave.org/archive.html > ------------------------------------------------------------- > ------------------------------------------------------------- Octave is freely available under the terms of the GNU GPL. Octave's home on the web: http://www.octave.org How to fund new projects: http://www.octave.org/funding.html Subscription information: http://www.octave.org/archive.html ------------------------------------------------------------- ------------------------------------------------------------- Octave is freely available under the terms of the GNU GPL. Octave's home on the web: http://www.octave.org How to fund new projects: http://www.octave.org/funding.html Subscription information: http://www.octave.org/archive.html ------------------------------------------------------------- ```
478
2,050
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.828125
3
CC-MAIN-2023-23
latest
en
0.68539
https://www.flexiprep.com/Important-Topics/Physics/Heat-Transfer-Radiation-And-Conduction.html
1,638,968,011,000,000,000
text/html
crawl-data/CC-MAIN-2021-49/segments/1637964363510.40/warc/CC-MAIN-20211208114112-20211208144112-00236.warc.gz
808,341,103
6,987
# Heat Transfer – Radiation and Conduction, the Different Modes of Heat Transfer Include (For CBSE, ICSE, IAS, NET, NRA 2022) Glide to success with Doorsteptutor material for competitive exams : get questions, notes, tests, video lectures and more- for all subjects of your exam. # Heat Transfer – Radiation and Conduction ## Heat Transfer • The movement of heat across the border of the system due to a difference in temperature between the system and its surroundings. • So, when there is a temperature difference between two bodies, heat is transferred from the hot body to the colder body. • There are three common modes of heat transfer – conduction, convection, and radiation. • Convection is either the heat transfer due to bulk movement. • The process of heat transfer between a surface and a fluid flowing in contact with it is called convective heat transfer. • Conduction • Convection ## Conduction • The process in which heat flows from objects with higher temperature to objects with lower temperature • Heat transfers by conduction through some solid objects (metallic spoon) from one side to the other, Cooking pans are made up of copper & aluminium because they are good conductors of heat, When you put a metallic spoon in a cup of hot tea for a time, we feel the hotness of the spoon because heat is transferred through solids by conduction. • Transfer of heat by conduction is the transfer of heat through some solid objects from the part with higher temperature to that with lower temperature, You feel hot when you touch a hot metallic spoon because heat is transferred from the hot object (spoon) to the cold object by conduction ### Conduction Equation • The coefficient of thermal conductivity shows that a metal body conducts heat better when it comes to conduction. • The rate of conduction can be calculated by the following equation: Where, • Q is the transfer of heat per unit time • K is the thermal conductivity of the body • A is the area of heat transfer • is the temperature of the hot region • is the temperature of the cold region • d is the thickness of the body ### Conduction Examples • Ironing of clothes where the heat is conducted from the iron to clothes. • Heat is transferred from hands to ice cube resulting in melting of ice cube when held in hands. • Heat conduction through sand at the beaches. This can be experienced during summers. Sand is a good conductor of heat. • Thermal radiation is generated by the emission of electromagnetic waves. • These waves carry away the energy from the emitting body. • It takes place through a vacuum or transparent medium which can be either solid or liquid. • It is the result of the random motion of molecules in the matter. • Movement of charged electrons and protons is responsible for the emission of electromagnetic radiation. • The wavelengths in the spectra of the radiation emitted decreases and shorter wavelengths radiations are emitted as temperature rises. • Thermal radiation can be calculated by Stefan-Boltzmann law: Where, • P is the net power of radiation • A is the area of radiation • is the surrounding temperature • e is emissivity and is Stefan՚s constant
660
3,182
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.75
4
CC-MAIN-2021-49
latest
en
0.941505
https://numbas.mathcentre.ac.uk/question/15487/algebra-vi-solving-linear-equations-sarah.exam
1,603,560,342,000,000,000
text/plain
crawl-data/CC-MAIN-2020-45/segments/1603107884322.44/warc/CC-MAIN-20201024164841-20201024194841-00027.warc.gz
461,505,655
3,004
// Numbas version: exam_results_page_options {"name": "Algebra VI: Solving Linear Equations (Sarah)", "extensions": [], "custom_part_types": [], "resources": [], "navigation": {"allowregen": true, "showfrontpage": false, "preventleave": false}, "question_groups": [{"pickingStrategy": "all-ordered", "questions": [{"functions": {}, "ungrouped_variables": ["a", "b", "c", "ans1", "d", "f", "g", "ans2", "h", "j", "k", "ans3", "l", "m", "n", "ans4", "p", "q", "r", "ans5", "s", "t", "u", "ans6"], "name": "Algebra VI: Solving Linear Equations (Sarah)", "tags": [], "advice": " Part a) \n Given $\\var{a}x+\\var{b}=\\var{c}$, we can start by subtracting $\\var{b}$ from both sides to get $\\var{a}x = \\simplify{{c-b}}$. \n Dividing both sides by $\\var{a}$ gives us $x= \\simplify {{c-b}/{a}}$ \n \n Part b) \n We could start by subtracting $\\var{d}$ from both sides to get $-\\var{f}y = \\simplify{{g-d}}$. \n Or we could first add $\\var{f}y$ to both sides to get $\\var{d} = \\var{g} + \\var{f}y$. This avoids needing to divide by a negative number. \n Either way we should end up with $y= \\dfrac{\\simplify{{d- g}}}{\\var{f}}= \\simplify{{d-g}/{f}}$. \n \n \n Part c) \n $\\displaystyle{\\frac{z}{\\var{h}}}-\\var{j}=\\var{k}$ \n $\\displaystyle{\\frac{z}{\\var{h}}}=\\var{k+j}$ \n $z=\\var{ans3}$ \n \n \n Part d) \n $\\displaystyle{\\frac{a-\\var{l}}{\\var{m}}}=\\var{n}$ \n $a-\\var{l}=\\var{n*m}$ \n $a=\\var{ans4}$ \n \n \n Part e) \n $\\var{p}$$=$$\\var{q}(\\var{r}+b)$ \n $\\displaystyle{\\simplify{{p}/{q}}}$$=$$\\var{r}+b$ \n $\\displaystyle{\\simplify{{p-r*q}/{q}}}$$=$$b$ \n \n \n Part f) \n $\\displaystyle{\\frac{\\var{s}w}{\\var{t}}}$$=$$\\var{u}$ \n $\\var{s}w$$=$$\\var{u*t}$ \n $w$$=$$\\displaystyle{\\simplify{{u*t}/{s}}}$ \n \n For more help, check this video- \n \n \n ", "rulesets": {}, "parts": [{"stepsPenalty": 0, "prompt": " $\\var{a}x+\\var{b}=\\var{c}$ \n $x=$ [[0]]. ", "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "gaps": [{"allowFractions": true, "variableReplacements": [], "maxValue": "{c-b}/{a}", "minValue": "{c-b}/{a}", "variableReplacementStrategy": "originalfirst", "correctAnswerFraction": true, "showCorrectAnswer": true, "scripts": {}, "marks": 1, "type": "numberentry", "showPrecisionHint": false}], "steps": [{"prompt": " \n • In order to solve for the unknown variable, you must isolate the variable. • \n • In the order of operation, multiplication and division are completed before addition and subtraction. • \n \n \n \n Example:   5 x - 6 = 3 x - 8 \n Subtract 3x from both sides of the equation: 2x - 6 = -8 \n Add 6 to both sides of the equation: 2x = -2 or x= -1 \n \n ", "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "showCorrectAnswer": true, "scripts": {}, "marks": 1, "type": "extension"}], "marks": 0, "scripts": {}, "showCorrectAnswer": true, "type": "gapfill"}, {"stepsPenalty": 0, "prompt": " $\\var{d}-\\var{f}y=\\var{g}$ \n $y=$ [[0]]. ", "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "gaps": [{"allowFractions": true, "variableReplacements": [], "maxValue": "{d-g}/{f}", "minValue": "{d-g}/{f}", "variableReplacementStrategy": "originalfirst", "correctAnswerFraction": true, "showCorrectAnswer": true, "scripts": {}, "marks": 1, "type": "numberentry", "showPrecisionHint": false}], "steps": [{"prompt": " \n • In order to solve for the unknown variable, you must isolate the variable. • \n • In the order of operation, multiplication and division are completed before addition and subtraction. • \n ", "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "showCorrectAnswer": true, "scripts": {}, "marks": 0, "type": "information"}], "marks": 0, "scripts": {}, "showCorrectAnswer": true, "type": "gapfill"}, {"stepsPenalty": 0, "prompt": " $\\displaystyle{\\frac{z}{\\var{h}}}-\\var{j}=\\var{k}$ \n $z=$ [[0]] ", "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "gaps": [{"allowFractions": false, "variableReplacements": [], "maxValue": "{ans3}", "minValue": "{ans3}", "variableReplacementStrategy": "originalfirst", "correctAnswerFraction": false, "showCorrectAnswer": true, "scripts": {}, "marks": 1, "type": "numberentry", "showPrecisionHint": false}], "steps": [{"prompt": " \n • In order to solve for the unknown variable, you must isolate the variable. • \n • In the order of operation, multiplication and division are completed before addition and subtraction. • \n ", "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "showCorrectAnswer": true, "scripts": {}, "marks": 0, "type": "information"}], "marks": 0, "scripts": {}, "showCorrectAnswer": true, "type": "gapfill"}, {"stepsPenalty": 0, "prompt": " $\\displaystyle{\\frac{a-\\var{l}}{\\var{m}}}=\\var{n}$. \n $a=$ [[0]] ", "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "gaps": [{"allowFractions": true, "variableReplacements": [], "maxValue": "{ans4}", "minValue": "{ans4}", "variableReplacementStrategy": "originalfirst", "correctAnswerFraction": false, "showCorrectAnswer": true, "scripts": {}, "marks": 1, "type": "numberentry", "showPrecisionHint": false}], "steps": [{"prompt": " \n • In order to solve for the unknown variable, you must isolate the variable. • \n • In the order of operation, multiplication and division are completed before addition and subtraction. • \n ", "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "showCorrectAnswer": true, "scripts": {}, "marks": 0, "type": "information"}], "marks": 0, "scripts": {}, "showCorrectAnswer": true, "type": "gapfill"}, {"stepsPenalty": 0, "prompt": " $\\var{p}=\\var{q}(\\var{r}+b)$. \n $b=$ [[0]] ", "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "gaps": [{"allowFractions": true, "variableReplacements": [], "maxValue": "{p-r*q}/{q}", "minValue": "{p-r*q}/{q}", "variableReplacementStrategy": "originalfirst", "correctAnswerFraction": true, "showCorrectAnswer": true, "scripts": {}, "marks": 1, "type": "numberentry", "showPrecisionHint": false}], "steps": [{"prompt": " \n • In order to solve for the unknown variable, you must isolate the variable. • \n • In the order of operation, multiplication and division are completed before addition and subtraction. • \n ", "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "showCorrectAnswer": true, "scripts": {}, "marks": 0, "type": "information"}], "marks": 0, "scripts": {}, "showCorrectAnswer": true, "type": "gapfill"}, {"stepsPenalty": 0, "prompt": " $\\displaystyle{\\frac{\\var{s}w}{\\var{t}}}=\\var{u}$. \n $w=$ [[0]] ", "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "gaps": [{"allowFractions": true, "variableReplacements": [], "maxValue": "{u*t}/{s}", "minValue": "{u*t}/{s}", "variableReplacementStrategy": "originalfirst", "correctAnswerFraction": true, "showCorrectAnswer": true, "scripts": {}, "marks": 1, "type": "numberentry", "showPrecisionHint": false}], "steps": [{"prompt": " \n • In order to solve for the unknown variable, you must isolate the variable. • \n • In the order of operation, multiplication and division are completed before addition and subtraction. • \n ", "variableReplacements": [], "variableReplacementStrategy": "originalfirst", "showCorrectAnswer": true, "scripts": {}, "marks": 0, "type": "information"}], "marks": 0, "scripts": {}, "showCorrectAnswer": true, "type": "gapfill"}], "extensions": [], "statement": " Solve the following linear equations using basic BIDMAS calculations: \n For help on solving linear equations, check this video- https://www.youtube.com/watch?v=8j3kRdiRjPA \n Give your answer as either an integer, a non-recurring decimal or a fraction ", "variable_groups": [], "variablesTest": {"maxRuns": "100", "condition": ""}, "preamble": {"css": "", "js": ""}, "variables": {"ans1": {"definition": "(c-b)/a", "templateType": "anything", "group": "Ungrouped variables", "name": "ans1", "description": ""}, "ans2": {"definition": "(g-d)/(-f)", "templateType": "anything", "group": "Ungrouped variables", "name": "ans2", "description": ""}, "ans3": {"definition": "(k+j)*h", "templateType": "anything", "group": "Ungrouped variables", "name": "ans3", "description": ""}, "ans4": {"definition": "n*m+l", "templateType": "anything", "group": "Ungrouped variables", "name": "ans4", "description": ""}, "ans5": {"definition": "p/q-r", "templateType": "anything", "group": "Ungrouped variables", "name": "ans5", "description": ""}, "ans6": {"definition": "u*t/s", "templateType": "anything", "group": "Ungrouped variables", "name": "ans6", "description": ""}, "a": {"definition": "random(2..12)", "templateType": "anything", "group": "Ungrouped variables", "name": "a", "description": ""}, "c": {"definition": "random(-12..12 except [0,b])", "templateType": "anything", "group": "Ungrouped variables", "name": "c", "description": ""}, "b": {"definition": "random(2..12 except a)", "templateType": "anything", "group": "Ungrouped variables", "name": "b", "description": ""}, "d": {"definition": "random(1..12 except [f,g])", "templateType": "anything", "group": "Ungrouped variables", "name": "d", "description": ""}, "g": {"definition": "random(-12..12 except [0])", "templateType": "anything", "group": "Ungrouped variables", "name": "g", "description": ""}, "f": {"definition": "random(2..12)", "templateType": "anything", "group": "Ungrouped variables", "name": "f", "description": ""}, "h": {"definition": "random(list(2..12)+[20,50,100,200])", "templateType": "anything", "group": "Ungrouped variables", "name": "h", "description": ""}, "k": {"definition": "random(-13..-1 except -j)", "templateType": "anything", "group": "Ungrouped variables", "name": "k", "description": ""}, "j": {"definition": "random(2..12 except [h])", "templateType": "anything", "group": "Ungrouped variables", "name": "j", "description": ""}, "m": {"definition": "random(2..12 except l)", "templateType": "anything", "group": "Ungrouped variables", "name": "m", "description": ""}, "l": {"definition": "random(2..12)", "templateType": "anything", "group": "Ungrouped variables", "name": "l", "description": ""}, "n": {"definition": "random(-12..12)", "templateType": "anything", "group": "Ungrouped variables", "name": "n", "description": ""}, "q": {"definition": "random(-12..12 except [0,1,-1])", "templateType": "anything", "group": "Ungrouped variables", "name": "q", "description": ""}, "p": {"definition": "random(-12..12 except[0,q])", "templateType": "anything", "group": "Ungrouped variables", "name": "p", "description": ""}, "s": {"definition": "random([-13,-11,-7,-5,-3,-2,13,11,7,5,3,2])", "templateType": "anything", "group": "Ungrouped variables", "name": "s", "description": ""}, "r": {"definition": "random(1..12)", "templateType": "anything", "group": "Ungrouped variables", "name": "r", "description": ""}, "u": {"definition": "random(-12..12 except 0)", "templateType": "anything", "group": "Ungrouped variables", "name": "u", "description": ""}, "t": {"definition": "random([13,11,7,5,3,2] except s)", "templateType": "anything", "group": "Ungrouped variables", "name": "t", "description": ""}}, "metadata": {"description": " This exercise will help you solve equations of type ax-b = c. \n ", "licence": "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International"}, "type": "question", "showQuestionGroupNames": false, "question_groups": [{"name": "", "pickingStrategy": "all-ordered", "pickQuestions": 0, "questions": []}], "contributors": [{"name": "Sarah Turner", "profile_url": "https://numbas.mathcentre.ac.uk/accounts/profile/881/"}]}]}], "contributors": [{"name": "Sarah Turner", "profile_url": "https://numbas.mathcentre.ac.uk/accounts/profile/881/"}]}
3,687
11,886
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.59375
5
CC-MAIN-2020-45
latest
en
0.558767
https://id.scribd.com/document/363505413/Ananda-Bastian-S-150116064-pdf
1,561,035,902,000,000,000
text/html
crawl-data/CC-MAIN-2019-26/segments/1560627999218.7/warc/CC-MAIN-20190620125520-20190620151520-00297.warc.gz
470,270,716
55,658
Anda di halaman 1dari 4 # PERHITUNGAN LIFT SPEED A. Luas Lantai Trial Cars fpm RT AVTRP I (S) PHC Rental Office (lb) (s) (S) (%) 9 lantai Floor to floor : 3.5m (11.4 ft) 1 1763 400 115 66 38.3 13.4 Luas Perlantai : 720m2 (2.362ft) 2 2204 400 115 68 28.75 23.6 Interval : 25-29 detik Waiting : 15-17 detik 3 2500 400 115 66 57.5 14.4 4 3527 400 115 75 40 17.3 B. Building Population Averange Use Lift yang digunakan ialah perhitungan trial ke-2 karena 9 lantai x 720m2 / 11m2 (person) = 589 memiliki interval 28.75 detik (good service) yang lebih cepat di banding perhitungan solusi lainnya. ## C. Minimum Handling Capacity Lift dengan kapasitas 13 orang, PHC = 13 % Jumlah lift 4 dengan kecepatan lift 3m/s HC = 13% x 589 = 76.57 = 77 orang Rise = 9 lantai x 3,5 m/lantai = 31.5 = 103 ft Trial 1 Trial 3 Car Capacity = 800 kg (1763lb) Car Capacity = 1250 kg (2500lb) Speed = 400fpm (2m/s) Speed = 400fpm (2m/s) Capacity (p) = 10 persons Capacity (p) = 16 persons RT = 115 seconds AVTRP = 66 seconds RT = 115 seconds AVTRP = 66 seconds ## Single Car Capacity Single Car Capacity h = 300 (p) : RT = 300 (10) : 115 = 26.0 persons h = 300 (p) : RT = 300 (16) : 115 = 41.7 persons N=HC/h = 77 persons / 26.0 = 2.9 (3 cars) N=HC/h = 77 persons / 41.7 = 1.8 (2 cars) I =RT/N = 115 / 3 = 38.3 seconds I =RT/N = 115 / 2 = 57.5 seconds Aktual PHC = 3 x 13% / 2.9 = 13.4 % Aktual PHC = 2 x 13% / 1.8 = 14.4 % Trial 2 Trial 4 Car Capacity = 1000 kg (2204lb) Car Capacity = 1600 kg (3527lb) Speed = 400fpm (2m/s) Speed = 400fpm (2m/s) Capacity (p) = 13 persons Capacity (p) = 21 persons RT = 115 seconds AVTRP = 68 seconds RT = 115 seconds AVTRP = 75 seconds ## Single Car Capacity Single Car Capacity h = 300 (p) : RT = 300 (13) : 115 = 34 persons h = 300 (p) : RT = 300 (21) : 115 = 54.7 persons N=HC/h = 77 persons / 34 = 2.2 (3+1 cars) N=HC/h = 77 persons / 54.7 = 1.4 (2 cars) I =RT/N = 115 / 4 = 28.75 seconds (good service) I =RT/N = 115 / 2 = 57.5 seconds Aktual PHC = 4 x 13% / 2.2 = 23.6 % Aktual PHC = 2 x 13% / 1.4 = 18.5 % JUDUL PROYEK SISTEM BANGUNAN 4 NAMA Ananda Bastian Sulistian RENTAL OFFICE NILAI DOSEN PENGAMPU NPM KELAS JUDUL GAMBAR SKALA PARAF Soesilo Boedi Leksono Ir. MT. Frengky Benedictus Ola, S.T., M.T. 150116064 B UTILITAS BANGUNAN - LIFT YANG YANG DIGUNAKAN PADA DESAIN SPESIFIKASI LIFT ## Kapasitas Lift : 13 orang Jumlah Lift : 4 unit Kecepatan : 2 m/s Ukuran R. Mesin : 220 x 215 x 240 (cm) Ukuran Lift : 160 x 150 x 240 (cm) Ukuran Hoistway : 220 x 215 (cm) Ukuran Pintu : 90 x 210 (cm) Kedalaman pit : 140 (cm) Tipe Bukaan : Center Opening Pemasok : Ningbo Conai Escalator And Elevator Co., Ltd. Harga : 700-2400 USD Sumber: www.alibaba.com JUDUL PROYEK SISTEM BANGUNAN 4 NAMA Ananda Bastian Sulistian RENTAL OFFICE NILAI DOSEN PENGAMPU NPM KELAS JUDUL GAMBAR SKALA PARAF Soesilo Boedi Leksono Ir. MT. Frengky Benedictus Ola, S.T., M.T. 150116064 B UTILITAS BANGUNAN - Machine Room Plan Ventilator Window Pintu 60x210cm Exhaust Fan Controller Hoistway Plan Skala 1:50 Skala 1:50 A' A 220 220 160 110 90 150 215 215 Potongan A-A' Hoist Way Total Height 240 210 170 Skala 1:50 JUDUL PROYEK SISTEM BANGUNAN 4 Ananda Bastian Sulistian NAMA RENTAL OFFICE NILAI DOSEN PENGAMPU NPM KELAS JUDUL GAMBAR SKALA PARAF Soesilo Boedi Leksono Ir. MT. Frengky Benedictus Ola, S.T., M.T. 150116064 B UTILITAS BANGUNAN - KONSEP Easy Maintanance Pengunaan Stainless Steel dan cermin yang minim perawatan. Media Promosi Memperkenalkan kantor sewa dan lainya kepada khalayak umum. JUDUL PROYEK SISTEM BANGUNAN 4 NAMA Ananda Bastian Sulistian RENTAL OFFICE NILAI DOSEN PENGAMPU NPM KELAS JUDUL GAMBAR SKALA PARAF Soesilo Boedi Leksono Ir. MT. Frengky Benedictus Ola, S.T., M.T. 150116064 B UTILITAS BANGUNAN -
1,584
3,739
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.453125
3
CC-MAIN-2019-26
latest
en
0.425077
https://www.coursehero.com/file/6109014/Week-15-Wed-Dec-8/
1,526,992,139,000,000,000
text/html
crawl-data/CC-MAIN-2018-22/segments/1526794864725.4/warc/CC-MAIN-20180522112148-20180522132148-00092.warc.gz
724,576,747
130,355
{[ promptMessage ]} Bookmark it {[ promptMessage ]} Week 15 Wed Dec 8 Week 15 Wed Dec 8 - Posted before class WEEK 15 Wednesday... This preview shows pages 1–4. Sign up to view the full content. 1 Posted before class. WEEK 15, Wednesday, Dec 8 Section 3.10 . Continuous Random Variables . Discuss and contrast discrete random variables with their probability mass functions with continuous random variables with their probability density functions. Continuous Probability density function f ( x ) Cumulative distribution function x dy y f x X P x F ) ( ) ( ) ( ) ( [ ) ( ] ) [( ) ( ) ( ) ( 2 2 2 2 2 2 dx x f x X E X E X V dx x xf X E ) ( ) ( X V X SD Mathematical models for so- called “ continuous random variables ” are very different from those for discrete random variables. Continuous random variables are modeled to take values on the real number line in this way. The probability distribution is defined through the probability density function for the random variable (see page 128 of the textbook). Consider a random variable X with probability density function f . Then here are the properties of f . f ( x ) ≥ 0 for all - ∞ < x < ∞ 1 ) ( dx x f and b a dx x f b X a P ) ( ) ( for all real numbers a < b . With this model for probability distributions, probabilities are given by areas and ) ( ) ( ) ( ) ( ) ( b X a P b X a P b X a P dx x f b X a P b a . This preview has intentionally blurred sections. Sign up to view the full version. View Full Document 2 It follows that 0 ) ( ) ( ) ( a a dx x f a X a P a X P for all real numbers a . Individual points get zero probability when using probability density functions! For any p with 0 < p < 1, one can find a 100pth percentile of the distribution, that is, a point x 0 such that P(X < x 0 ) = p and P(X > x 0 ) = 1 p. Exercise 3-79 . (c) Ans. Section 3.11 . Continuous Uniform on the Interval ( A , B ) . Here is the density function: A B 1 , A < x < B ) ( x f 0, otherwise Here is a graph of the density function. y = f ( x ) A B x Important Results for U ( A , B ): = E ( X ) = ( A + B )/2 2 = V ( X ) = ( B A ) 2 /12 , = SD ( X ) = ( B A )/ 12. Exercise 3-78 . Ans. 3/4 = 0.75; = 7 minutes 3 Section 3.12 . Exponential Distribution with Parameter > 0 . Here is the density function: x e , x ≥ 0 ) ( x f 0, x < 0 Important Results for Exponential ( ): = E ( X ) = 1/ 2 = V ( X ) = 1/ 2 , = SD ( X ) = 1/ . There are some useful formulas for probabilities of events involving an exponentially distributed random variable X . Consider these useful formulas (p. 131): x x t x t e e dt e x X P x F 1 | ) ( ) ( 0 0 , x > 0 x e x X P ) ( , x > 0 b a e e b X a P ) ( , 0 ≤ a b Example 1. Light Bulbs . A plant has a large assembly area with many light bulbs that are turned on 24 hours per day. An electronic device records how many hours each new light bulb burns until it fails and finds across a large sample that the mean time to failure for new light bulbs is 2,732 hours. Based on the failure time data, the plant engineer models the failure time X of a new light bulb as Exponential( ) with mean 2,732 hours. Use this model to answer the following questions. This preview has intentionally blurred sections. Sign up to view the full version. View Full Document This is the end of the preview. Sign up to access the rest of the document. {[ snackBarMessage ]} What students are saying • As a current student on this bumpy collegiate pathway, I stumbled upon Course Hero, where I can find study resources for nearly all my courses, get online help from tutors 24/7, and even share my old projects, papers, and lecture notes with other students. Kiran Temple University Fox School of Business ‘17, Course Hero Intern • I cannot even describe how much Course Hero helped me this summer. It’s truly become something I can always rely on and help me. In the end, I was not only able to survive summer classes, but I was able to thrive thanks to Course Hero. Dana University of Pennsylvania ‘17, Course Hero Intern • The ability to access any university’s resources through Course Hero proved invaluable in my case. I was behind on Tulane coursework and actually used UCLA’s materials to help me move forward and get everything together on time. Jill Tulane University ‘16, Course Hero Intern
1,124
4,268
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.875
4
CC-MAIN-2018-22
latest
en
0.835274
https://dotnet.github.io/infer/userguide/Increment%20log%20density.html
1,656,493,275,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00459.warc.gz
252,677,383
3,515
## Increment log density Sometimes it is convenient to specify parts of the model directly in terms of their log density instead of writing a sampler. This can be done using `Variable.ConstrainEqualRandom`. When you write `Variable.ConstrainEqualRandom(x, dist)`, you are incrementing the log density by `dist.GetLogProb(x)`. If you just want to increment the log density by `x`, use ``````/// Increments the log density by x Variable.ConstrainEqualRandom(x, Gaussian.FromNatural(1,0)); `````` or ``````/// Increments the log density by x Variable.ConstrainEqualRandom(x, Gamma.FromNatural(0,-1)); `````` as appropriate. For example: ``````/// Increments the log-density by -0.5*x*x - MMath.LnSqrt2PI Variable.ConstrainEqualRandom(x, new Gaussian(0, 1)) `````` increments the log-density by `(new Gaussian(0,1)).GetLogProb(x)` which is equal to `-0.5*x*x - MMath.LnSqrt2PI`. This is equivalent to: ``````Variable<double> y = Variable.GaussianFromMeanAndVariance(x, 1); y.ObservedValue = 0; `````` Unlike sampling, `ConstrainEqualRandom` works with improper distributions. Improper distributions are unnormalized, which means `Gaussian.FromNatural(1,0).GetLogProb(x) == Gamma.FromNatural(0,-1).GetLogProb(x) == x`.
333
1,222
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.890625
3
CC-MAIN-2022-27
latest
en
0.740315
http://ravenflagpress.herokuapp.com/discussion/id/741/
1,537,567,584,000,000,000
text/html
crawl-data/CC-MAIN-2018-39/segments/1537267157569.48/warc/CC-MAIN-20180921210113-20180921230513-00109.warc.gz
204,588,310
2,003
Practice Final A7 Lilyhui For the question I approached it by using the recursive equation E[X]=4(E[X]+1). The second E[X] is suppose to have a subscript of x-1. The answer is 84 for the third time the die lands on 1 arpanshah: March 19, 2015, 7:07 p.m. How exactly would the steps for this one be written out? Thanks! Lilyhui: March 19, 2015, 7:24 p.m. Eqn is E[X]=1/p *(E[X]+1). Since it averages 4 rolls to land a 1, p=1/4 thus giving you the eqn E[X]=4(E[X]+1). Again the second E[X] is suppose to have a subscript of x-1(Cant find a way to type it out). From here you would start with E[X1]=4 and continue to plug that into the eqn until you 1 three times in a row weisbart: March 20, 2015, 1:25 a.m. You can also do it directly by considering the first few possibilities. What are the false starts that you need to consider?
261
830
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.296875
3
CC-MAIN-2018-39
latest
en
0.944302
https://www.daniweb.com/programming/software-development/threads/398814/q-extract-integers-from-a-list
1,656,839,826,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656104215805.66/warc/CC-MAIN-20220703073750-20220703103750-00288.warc.gz
770,401,234
16,473
# I wanted to change the question. I have a list something like this: list1= For this list, if the element in the list starts with N, i want '+' to be inserted before the integer. If the element in the list starts with S, I want '-' to be inserted before the integer. For example, I want the above list to look like this: I can't use any module. Can anyone help me with this one? # made some correction. ## All 6 Replies That is not list of elements any type I know. Here's my quick and dirty solution: ``````list1= ['NNW 30', 'SE 15', 'SSW 60', 'NNE 70', 'N 10'] for index, item in enumerate(list1): if item.startswith('N'): list1[index] = "+" + item[-2:] elif item.startswith('S'): list1[index] = "-" + item[-2:] print(list1)`````` Notice the use of startswith(), and the negative used to slice the letters in each list item. This will not work if the number has more or less than 2 digits, but it can easily be changed to treat that as well - use that as an exercise. It might be safer to extract the numeric value ... ``````list1= ['NNW 30', 'SE 15', 'SSW 60', 'NNE 70', 'N 10'] list2 = [] for item in list1: s = "" for c in item: # extract numeric value if c in '1234567890.-': s += c if item[0] == 'N': s = '+' + s if item[0] == 'S': s = '-' + s list2.append(s) print(list2) # ['+30', '-15', '-60', '+70', '+10']`````` commented: elegant solution. +1 Excellent solution Vegaseat, definitely better than mine. "c in '123...'" is very useful. Thanks for sharing. Ok, if we are going to give (again) ready answer: ``````>>> values = [('+' if item.startswith('N') else ('-' if item.startswith('S') else '')) + item.rsplit(' ', 1)[-1] for item in list1] >>> values ['+30', '-15', '-60', '+70', '+10'] >>>`````` Great answer Tony, though I'll need some time to fully comprehend it ;-). What would you recommend when faced with such questions? I'm still learning the ropes here. Be a part of the DaniWeb community We're a friendly, industry-focused community of developers, IT pros, digital marketers, and technology enthusiasts meeting, learning, and sharing knowledge.
588
2,093
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.96875
3
CC-MAIN-2022-27
longest
en
0.813451
https://oeis.org/A133712
1,642,673,369,000,000,000
text/html
crawl-data/CC-MAIN-2022-05/segments/1642320301737.47/warc/CC-MAIN-20220120100127-20220120130127-00239.warc.gz
484,501,137
3,831
The OEIS is supported by the many generous donors to the OEIS Foundation. Hints (Greetings from The On-Line Encyclopedia of Integer Sequences!) A133712 Column l=5 of irregular triangle in A133709. 2 0, 0, 420, 15225, 185031, 1438906, 8689306, 44352346, 200070606, 818907792, 3093635652, 10914809127, 36278256537, 114357327402, 343708626298, 989318816383, 2737219679833, 7302776865288, 18839417766108, 47108352127209, 114421884019959 (list; graph; refs; listen; history; text; internal format) OFFSET 1,3 LINKS MAPLE A133712 := proc(m)         A133709(m, 5) ; end proc: seq(A133712(m), m=1..30) ; # R. J. Mathar, Nov 23 2011 MATHEMATICA T[m_, l_] := T[m, l] = If[l == 1, 1, Sum[(-1)^i Binomial[l, i]* Binomial[2^(l - i) + m - 2, m], {i, 0, l - 1}] - Sum[StirlingS2[l, i]* T[m, i], {i, 1, l - 1}]]; Table[T[m, 5], {m, 1, 30}] (* Jean-François Alcover, Apr 03 2020 *) CROSSREFS Cf. A133709. Sequence in context: A223365 A288071 A289226 * A058834 A022046 A289348 Adjacent sequences:  A133709 A133710 A133711 * A133713 A133714 A133715 KEYWORD nonn AUTHOR N. J. A. Sloane, Dec 30 2007 STATUS approved Lookup | Welcome | Wiki | Register | Music | Plot 2 | Demos | Index | Browse | More | WebCam Contribute new seq. or comment | Format | Style Sheet | Transforms | Superseeker | Recent The OEIS Community | Maintained by The OEIS Foundation Inc. Last modified January 20 03:25 EST 2022. Contains 350467 sequences. (Running on oeis4.)
526
1,432
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.546875
3
CC-MAIN-2022-05
latest
en
0.498915
https://askthetask.com/23/solve-the-inequality-2y-4x-6
1,675,409,123,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764500044.16/warc/CC-MAIN-20230203055519-20230203085519-00446.warc.gz
131,327,215
7,040
0 like 0 dislike Solve the inequality 2y=4x-6 0 like 0 dislike y=2x+3. Step-by-step explanation: 2y-4x=6. 2y−4x=6. Add 4x to both sides. Add 4x to both sides. 2y=6+4x. 2y=6+4x. The equation is in standard form. The equation is in standard form. 2y=4x+6. 2y=4x+6. Divide both sides by 2. Divide both sides by 2. y=2x+3. y=2x+3. by
148
335
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.03125
4
CC-MAIN-2023-06
latest
en
0.863329
https://stats.stackexchange.com/questions/363110/chebyshevs-theorem-with-large-standard-deviations
1,643,374,681,000,000,000
text/html
crawl-data/CC-MAIN-2022-05/segments/1642320305494.6/warc/CC-MAIN-20220128104113-20220128134113-00308.warc.gz
583,916,986
34,553
# Chebyshev's theorem with large standard deviations I was wondering how reliable is a large standard deviation whose negative values are below 0? If we were calculating 2 standard deviations away from the mean to catch at least 75% of the observations over a non-normal distribution and end up with (say) -600 standard deviation over a mean of 300 knowing that the measure we are applying the standard deviation to does not accept negative values(i.e. min value is 0), we would end up with something like 300 - (600*2) on the left of our distribution and 300 + (600*2) on the right. Does it even make sense to have those negative values below 0? Or in this case, is it imperative to normalize the data? Thanks • You didn't say this, but am I correct to assume that your data is measuring something that cannot be negative? Can you provide more information on what you're trying to do? Aug 20 '18 at 19:23 • Hi Chris, correct. I have a large population of 174k observation circa which represent the deposit amounts from unique users in the last 6 months. That population is largely right skewed and of course cannot accept negative values. I want to apply the ± 2 SD to catch at least 75% of the observations and to get rid of outliers skewing the data. Thanks Aug 20 '18 at 19:30 • for your purposes would you consider any deposit lower than the mean to be an outlier? Aug 20 '18 at 19:31 • No, in the specific distribution I have none of the low values below the mean would be considered outliers.Thanks Aug 20 '18 at 19:38 • In many applications--likely a great many--any rule that automatically eliminates data it declares to be "outliers" will be suspect, difficult to defend, and may lead to procedures that have poor properties. – whuber Aug 20 '18 at 19:51 Standard deviation is always positive, so a std of -600 doesn't make sense. Chebyshev's inequality is just that: an inequality. It doesn't say that to get 75% of the data, you have to go out 2 std. It says you have to go out at most 2 std. In your examples, at least 75% of the data has a value greater than -900. Now, you may know, from sources other than Chebyshev's inequality, that all of the data has a value greater than 0, and hence greater than -900. So in that case, Chebyshev's inequality doesn't give you any more information about the lower bound than what you already had. In that case, Chebyshev's inequality isn't particularly useful, but it is still valid. • I think the std is 600, OP is just concerned with the value of mean - std*2 being negative. Aug 20 '18 at 19:25 • Thanks for your answer, would it be then accurate in your opinion to consider anything from 0 up to +2 SD from the mean a valid threshold to delimit our data, streamlining its central values? Aug 20 '18 at 19:41 Given you said in your comment none of the low values below the mean would be considered outliers I wouldn't worry about the lower end (mean - std*2). I use Chebyshev's inequality in a similar situation-- data that is not normally distributed, cannot be negative, and has a long tail on the high end. While there can be outliers on the low end (where mean is high and std relatively small) it's generally on the high side. We use 3 std for excluding outliers prior to doing some forecasting, so that no more than 11% of observations will be excluded per the inequality. In practice when we first implemented we found that only about 1% of observations were excluded. Chebyshev's was a nice theoretical bound to give us some justification for setting the threshold at 3 std's instead of needing to go higher. • Thanks Chris, I came to the same conclusion but had no basis to sustain it. However the end result was that 99% circa of the observations were caught with a thick agglomerate close to the mean and mildly above. Thanks Aug 20 '18 at 19:49
917
3,823
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.421875
3
CC-MAIN-2022-05
latest
en
0.94946
https://nl.mathworks.com/matlabcentral/cody/problems/344-back-to-basics-2-function-path/solutions/1200761
1,603,866,012,000,000,000
text/html
crawl-data/CC-MAIN-2020-45/segments/1603107896778.71/warc/CC-MAIN-20201028044037-20201028074037-00112.warc.gz
435,922,754
17,275
Cody # Problem 344. Back to basics 2 - Function Path Solution 1200761 Submitted on 30 May 2017 by Informaton This solution is locked. To view this solution, you need to provide a solution of the same size or smaller. ### Test Suite Test Status Code Input and Output 1   Pass x = 'sin'; y=which(x) assert(isequal(path2func(x),y(11:end-1))) y = 'built-in (/opt/mlsedu/matlab/R2017a/toolbox/matlab/elfun/@double/sin)' y = 'built-in (/opt/mlsedu/matlab/R2017a/toolbox/matlab/elfun/@double/sin)' y = '/opt/mlsedu/matlab/R2017a/toolbox/matlab/elfun/@double/sin' 2   Pass x = 'peaks'; y=which(x) assert(isequal(path2func(x),y)) y = '/opt/mlsedu/matlab/R2017a/toolbox/matlab/specgraph/peaks.m' y = '/opt/mlsedu/matlab/R2017a/toolbox/matlab/specgraph/peaks.m' 3   Pass x = 'system'; y=which(x) assert(isequal(path2func(x),y(11:end-1))) y = 'built-in (/opt/mlsedu/matlab/R2017a/toolbox/matlab/general/system)' y = 'built-in (/opt/mlsedu/matlab/R2017a/toolbox/matlab/general/system)' y = '/opt/mlsedu/matlab/R2017a/toolbox/matlab/general/system' ### Community Treasure Hunt Find the treasures in MATLAB Central and discover how the community can help you! Start Hunting!
375
1,172
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.125
3
CC-MAIN-2020-45
latest
en
0.480239
https://samsarjant.com/blog/2010/01/phd-progress-covering-and-ce-policies/
1,675,515,191,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764500126.0/warc/CC-MAIN-20230204110651-20230204140651-00268.warc.gz
520,451,348
24,618
# PhD Progress: Covering and CE Policies Another issue has arisen regarding using covering and policies. FOXCS can use covering however it likes, as it is based on a rule voting system. But I intend for this work to be based on a strict deterministic policy. Or perhaps just probabilistic, which is near the same anyway. The problem lies with triggering covering when using CE to generate the policy. A policy generated by CE will only use some of the rules within the rulebase generated by covering. This could be a problem, as covering may be triggered again if a bad rule is chosen. A possible solution to this, or at least a step in the right direction is to modify the whole policy creation process. A policy is created which contains at least one of each possible actions in the environment. So the CE process is now only applicable towards optimising the conditions for an action. However, typical policies contain multiple copies of the same action, with different parameters (the onAB policy contains three rules: one has a constant-ised action, the other two are moveFloor actions regarding blocks on top of a and b). To get around this problem, the following strategy is proposed. Use an adaptive policy (with firing rules) of initial size |A|, each slot corresponding to an action in the state description. When covering is triggered, these slots are filled, using maximally generalised covering where possible. Once a suitable rule is found regarding the action (the weighting of a single is rule is > 1-epsilon), fix the slot as that rule and create another slot of the same action, using all other rules from the slot. Note that we want the system to quickly converge, so the previously proposed CE algorithm using a sliding window for updates may work well. Note that during this process, new rules are being covered/mutated and optimised. New rules can be created by taking almost true rules and covering them to fit the current state, or mutated if the preconditions fire, but the action does not. A mutation such as this removes the erroneous rule and adds mutations of it which fit the current state space. Mutations must follow the goal description in that they only use constants mentioned in the goal. Regarding the order of slots within an adaptive policy, a specificity measure can be used. It is usually better to check if specific rules fire before looking at the general ones, and if the policy is deterministic, this is especially useful. As such, the more specific a rule is within the agent’s policy, the higher it will be. Specificity can be measured by the # of preconditions (disregarding type and inequal preds) + the # of constants used in the preconditions + 0.5 for every constant re-used. So the rule `on(a,b) & clear(a) = 2 + 2 + 0.5 = 4.5`, and the rule `on(X,Y) & on(Y,Z) & clear(X) = 3`. ## One Reply to “PhD Progress: Covering and CE Policies” 1. Nick W says: Hey Sarj! It’s Nick – from your honours class of 08. Having a hard time finding your email or contact info on your website – want to catch up some time? Give me a buzz at “nexinarus at gmail dot com” if you do. It would be good to catch up on how your PhD is going etc.
699
3,183
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.765625
3
CC-MAIN-2023-06
longest
en
0.959301
https://metanumbers.com/141738
1,638,065,215,000,000,000
text/html
crawl-data/CC-MAIN-2021-49/segments/1637964358443.87/warc/CC-MAIN-20211128013650-20211128043650-00309.warc.gz
484,373,341
7,381
# 141738 (number) 141,738 (one hundred forty-one thousand seven hundred thirty-eight) is an even six-digits composite number following 141737 and preceding 141739. In scientific notation, it is written as 1.41738 × 105. The sum of its digits is 24. It has a total of 3 prime factors and 8 positive divisors. There are 47,244 positive integers (up to 141738) that are relatively prime to 141738. ## Basic properties • Is Prime? No • Number parity Even • Number length 6 • Sum of Digits 24 • Digital Root 6 ## Name Short name 141 thousand 738 one hundred forty-one thousand seven hundred thirty-eight ## Notation Scientific notation 1.41738 × 105 141.738 × 103 ## Prime Factorization of 141738 Prime Factorization 2 × 3 × 23623 Composite number Distinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 3 Total number of prime factors rad(n) 141738 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) -1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0 The prime factorization of 141,738 is 2 × 3 × 23623. Since it has a total of 3 prime factors, 141,738 is a composite number. ## Divisors of 141738 8 divisors Even divisors 4 4 2 2 Total Divisors Sum of Divisors Aliquot Sum τ(n) 8 Total number of the positive divisors of n σ(n) 283488 Sum of all the positive divisors of n s(n) 141750 Sum of the proper positive divisors of n A(n) 35436 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 376.481 Returns the nth root of the product of n divisors H(n) 3.99983 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors The number 141,738 can be divided by 8 positive divisors (out of which 4 are even, and 4 are odd). The sum of these divisors (counting 141,738) is 283,488, the average is 35,436. ## Other Arithmetic Functions (n = 141738) 1 φ(n) n Euler Totient Carmichael Lambda Prime Pi φ(n) 47244 Total number of positive integers not greater than n that are coprime to n λ(n) 23622 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 13117 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares There are 47,244 positive integers (less than 141,738) that are coprime with 141,738. And there are approximately 13,117 prime numbers less than or equal to 141,738. ## Divisibility of 141738 m n mod m 2 3 4 5 6 7 8 9 0 0 2 3 0 2 2 6 The number 141,738 is divisible by 2, 3 and 6. ## Classification of 141738 • Arithmetic • Abundant ### Expressible via specific sums • Polite • Non-hypotenuse • Square Free • Sphenic ## Base conversion (141738) Base System Value 2 Binary 100010100110101010 3 Ternary 21012102120 4 Quaternary 202212222 5 Quinary 14013423 6 Senary 3012110 8 Octal 424652 10 Decimal 141738 12 Duodecimal 6a036 20 Vigesimal he6i 36 Base36 31d6 ## Basic calculations (n = 141738) ### Multiplication n×y n×2 283476 425214 566952 708690 ### Division n÷y n÷2 70869 47246 35434.5 28347.6 ### Exponentiation ny n2 20089660644 2847468320359272 403594464791082494736 57204672250558450638891168 ### Nth Root y√n 2√n 376.481 52.1389 19.4031 10.7225 ## 141738 as geometric shapes ### Circle Diameter 283476 890566 6.31135e+10 ### Sphere Volume 1.19274e+16 2.52454e+11 890566 ### Square Length = n Perimeter 566952 2.00897e+10 200448 ### Cube Length = n Surface area 1.20538e+11 2.84747e+15 245497 ### Equilateral Triangle Length = n Perimeter 425214 8.69908e+09 122749 ### Triangular Pyramid Length = n Surface area 3.47963e+10 3.35577e+14 115729 ## Cryptographic Hash Functions md5 38e1a540f3edab802717a06244236800 c5065c829fed3cb2108647bb1889f305240c561a 76195fd29757b90bb28af86093c05fc195d6a02966e3ebb2207463269b53490a 44a11531ccd999a1e8c0c426066e095d5ea6f8829ac9c6b790b8f5537ceeab79409b5194f61df016a998ad71e80f7c63da7b07971d6d6dc76b6af4011677ea89 b88319679779d18c82f64e2e54fbd3e8d9473375
1,447
4,229
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.640625
4
CC-MAIN-2021-49
latest
en
0.816188
https://help.qlik.com/zh-TW/qlikview/April2020/Subsystems/Client/Content/QV_QlikView/Examples%20of%20Alternate%20States%20in%20Chart%20Expressions.htm
1,660,195,290,000,000,000
text/html
crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00132.warc.gz
285,905,938
11,466
# 圖表運算式中輪替狀態機的範例 ## 同步處理不同狀態的選項 count({\$} DISTINCT [Invoice Number]) count({State1} DISTINCT [Invoice Number]) count({State2} DISTINCT [Invoice Number]) count({State1<Year = \$::Year, Month = \$::Month>} DISTINCT [Invoice Number]) count({State2<Year = \$::Year, Month = \$::Month>} DISTINCT [Invoice Number]) QlikView 開發人員將會讓 State1 和 State2 的年度和月份選項與預設狀態中的年度和月份選項保持同步。QlikView 開發人員可以視需要將元素新增到 set 修飾詞,以便讓狀態之間的更多欄位保持一致。 ## 集合運算子 Examples: count({\$ + State1} DISTINCT [Invoice Number]) count({1 - State1} DISTINCT [Invoice Number]) count({State1 * State2} DISTINCT [Invoice Number]) ## 隱含欄位值定義 set 運算子也可用於元素函數 P() 及 E()。這些函數僅適用於 set 運算式。 Examples: count({\$<[Invoice Number] = p({State1} [Invoice Number])>} DISTINCT [Invoice Number]) count({\$<[Invoice Number] = State1::[Invoice Number]>} DISTINCT [Invoice Number]) Examples: count({\$<[Invoice Number] = p({State1} [Invoice Number]) * p({State2} [Invoice Number])>} DISTINCT [Invoice Number]) count({\$<[Invoice Number] = p({\$} [Invoice Number]) * p({State1} [Invoice Number])>} DISTINCT [Invoice Number]) Examples: count({\$<[Invoice Number] = p({\$} [Invoice Number]) * p({State1<Year = \$::Year, Month = \$::Month>} [Invoice Number])>} DISTINCT [Invoice Number])
419
1,242
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.109375
3
CC-MAIN-2022-33
latest
en
0.466706
https://doubtnut.com/question-answer-physics/a-ship-is-to-reach-a-place-20-south-of-west-in-what-direction-should-it-be-steered-if-angle-of-decli-69130311
1,601,441,550,000,000,000
text/html
crawl-data/CC-MAIN-2020-40/segments/1600402118004.92/warc/CC-MAIN-20200930044533-20200930074533-00208.warc.gz
348,077,021
57,795
or # A ship is to reach a place 20^(@) south of west . In what direction should it be steered if angle of declination at the place is 16^(@) west ? Question from  Class 12  Chapter Magnetism And Matter Apne doubts clear karein ab Whatsapp par bhi. Try it now. CLICK HERE Loading DoubtNut Solution for you Watch 1000+ concepts & tricky questions explained! 1.2 K+ views | 100+ people like this Share Share Answer Text Solution : <img src="https://d10lpgp6xz60nq.cloudfront.net/physics_images/AAK_T6_PHY_C18_SLV_008_S01.png" width="80%"> <br> Ship has to reach `20^(@)` south of west along line OA . It should be steered west of north of magnetic north at an angle . <br> `phi = [90^(@) + (20^(@) - 16^(@))]` <br> `phi = 94^(@)` west of north Related Video 2:36 9.6 K+ Views | 5.6 K+ Likes 1:38 28.9 K+ Views | 83.8 K+ Likes 1:29 7.5 K+ Views | 15.4 K+ Likes 4:53 5.9 K+ Views | 24.3 K+ Likes 2:51 6.3 K+ Views | 13.3 K+ Likes 4:00 16.1 K+ Views | 16.2 K+ Likes 1:48 97.9 K+ Views | 302.9 K+ Likes
370
1,019
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.890625
3
CC-MAIN-2020-40
latest
en
0.727566
https://www.neetprep.com/questions/477-Chemistry/7857-Basic-Concepts-Chemistry?courseId=141&testId=1107866-Past-Year----MCQs&subtopicId=25-Equation-Based-Problem
1,726,022,037,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00074.warc.gz
844,746,127
48,730
0.33%  of iron (by weight) is present in hemoglobin (Molecular wt = 67200). The number of iron atom(s) in one molecule will be: 1.  1 2.  2 3.  3 4.  4 Subtopic:  Equation Based Problem | 71% From NCERT AIPMT - 1998 To view explanation, please take trial in the course. NEET 2025 - Target Batch Hints To view explanation, please take trial in the course. NEET 2025 - Target Batch The volume of CO2 obtained by the complete decomposition of 9.85 grams of BaCO3 is: 1. 2.24 lit. 2. 1.12 lit. 3. 0.84 lit. 4. 0.56 lit Subtopic:  Equation Based Problem | 69% From NCERT AIPMT - 2000 To view explanation, please take trial in the course. NEET 2025 - Target Batch Hints To view explanation, please take trial in the course. NEET 2025 - Target Batch The percentage of Se in peroxidase anhydrous enzyme is 0.5% by weight (at. wt = 78.4) then the minimum molecular weight of peroxidase anhydrous enzymes is: 1. $1.568$ $×$ ${10}^{4}$ 2. $1.568$ $×$ ${10}^{3}$ 3. 15.68 4. $2.136$ $×$ ${10}^{4}$ Subtopic:  Equation Based Problem | 69% From NCERT AIPMT - 2001 To view explanation, please take trial in the course. NEET 2025 - Target Batch Hints To view explanation, please take trial in the course. NEET 2025 - Target Batch The mass of carbon anode consumed (giving only carbon dioxide) in the production of 270 kg of aluminum metal from bauxite by the Hall process is: 1. 90 kg 2. 540 kg 3. 180 kg 4. 270 kg (Atomic mass : Al = 27) Subtopic:  Equation Based Problem | From NCERT AIPMT - 2005 To view explanation, please take trial in the course. NEET 2025 - Target Batch Hints To view explanation, please take trial in the course. NEET 2025 - Target Batch
559
1,667
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.640625
3
CC-MAIN-2024-38
latest
en
0.764068
http://trilinos.sandia.gov/packages/docs/r10.0/packages/intrepid/doc/html/classIntrepid_1_1Basis__HGRAD__HEX__C2__FEM.html
1,397,882,933,000,000,000
text/html
crawl-data/CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00315-ip-10-147-4-33.ec2.internal.warc.gz
253,046,755
3,984
# Intrepid::Basis_HGRAD_HEX_C2_FEM< Scalar, ArrayScalar > Class Template Reference Implementation of the default H(grad)-compatible FEM basis of degree 2 on Hexahedron cell. More... Inheritance diagram for Intrepid::Basis_HGRAD_HEX_C2_FEM< Scalar, ArrayScalar >: List of all members. ## Public Member Functions Constructor. void getValues (ArrayScalar &outputValues, const ArrayScalar &inputPoints, const EOperator operatorType) const Evaluation of a FEM basis on a reference Hexahedron cell. void getValues (ArrayScalar &outputValues, const ArrayScalar &inputPoints, const ArrayScalar &cellVertices, const EOperator operatorType=OPERATOR_VALUE) const FVD basis evaluation: invocation of this method throws an exception. ## Private Member Functions void initializeTags () Initializes tagToOrdinal_ and ordinalToTag_ lookup arrays. ## Detailed Description ### template<class Scalar, class ArrayScalar> class Intrepid::Basis_HGRAD_HEX_C2_FEM< Scalar, ArrayScalar > Implementation of the default H(grad)-compatible FEM basis of degree 2 on Hexahedron cell. Implements Lagrangian basis of degree 2 on the reference Hexahedron cell. The basis has cardinality 27 and spans a COMPLETE tri-quadratic polynomial space. Basis functions are dual to a unisolvent set of degrees-of-freedom (DoF) defined and enumerated as follows: ================================================================================================= | | degree-of-freedom-tag table | | | DoF |----------------------------------------------------------| DoF definition | | ordinal | subc dim | subc ordinal | subc DoF ord |subc num DoF | | |=========|==============|==============|==============|=============|===========================| | 0 | 0 | 0 | 0 | 1 | L_0(u) = u(-1,-1,-1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 1 | 0 | 1 | 0 | 1 | L_1(u) = u( 1,-1,-1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 2 | 0 | 2 | 0 | 1 | L_2(u) = u( 1, 1,-1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 3 | 0 | 3 | 0 | 1 | L_3(u) = u(-1, 1,-1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 4 | 0 | 4 | 0 | 1 | L_4(u) = u(-1,-1, 1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 5 | 0 | 5 | 0 | 1 | L_5(u) = u( 1,-1, 1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 6 | 0 | 6 | 0 | 1 | L_6(u) = u( 1, 1, 1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 7 | 0 | 7 | 0 | 1 | L_7(u) = u(-1, 1, 1) | |---------|--------------|--------------|--------------|-------------|---------------------------| |---------|--------------|--------------|--------------|-------------|---------------------------| | 8 | 1 | 0 | 0 | 1 | L_8(u) = u( 0,-1,-1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 9 | 1 | 1 | 0 | 1 | L_9(u) = u( 1, 0,-1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 10 | 1 | 2 | 0 | 1 | L_10(u) = u( 0, 1,-1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 11 | 1 | 3 | 0 | 1 | L_11(u) = u(-1, 0,-1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 12 | 1 | 8 | 0 | 1 | L_12(u) = u(-1,-1, 0) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 13 | 1 | 9 | 0 | 1 | L_13(u) = u( 1,-1, 0) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 14 | 1 | 10 | 0 | 1 | L_14(u) = u( 1, 1, 0) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 15 | 1 | 11 | 0 | 1 | L_15(u) = u(-1, 1, 0) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 16 | 1 | 4 | 0 | 1 | L_16(u) = u( 0,-1, 1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 17 | 1 | 5 | 0 | 1 | L_17(u) = u( 1, 0, 1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 18 | 1 | 6 | 0 | 1 | L_18(u) = u( 0, 1, 1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 19 | 1 | 7 | 0 | 1 | L_19(u) = u(-1, 0, 1) | |---------|--------------|--------------|--------------|-------------|---------------------------| |---------|--------------|--------------|--------------|-------------|---------------------------| | 20 | 3 | 0 | 0 | 1 | L_20(u) = u( 0, 0, 0) | |---------|--------------|--------------|--------------|-------------|---------------------------| |---------|--------------|--------------|--------------|-------------|---------------------------| | 21 | 2 | 4 | 0 | 1 | L_21(u) = u( 0, 0,-1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 22 | 2 | 5 | 0 | 1 | L_22(u) = u( 0, 0, 1) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 23 | 2 | 3 | 0 | 1 | L_23(u) = u(-1, 0, 0) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 24 | 2 | 1 | 0 | 1 | L_24(u) = u( 1, 0, 0) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 25 | 2 | 0 | 0 | 1 | L_25(u) = u( 0,-1, 0) | |---------|--------------|--------------|--------------|-------------|---------------------------| | 26 | 2 | 2 | 0 | 1 | L_26(u) = u( 0, 1, 0) | |=========|==============|==============|==============|=============|===========================| | MAX | maxScDim=2 | maxScOrd=12 | maxDfOrd=0 | - | | |=========|==============|==============|==============|=============|===========================| Remarks: Ordering of DoFs follows the node order in Hexahedron<27> topology. Note that node order in this topology does not follow the natural oder of k-subcells where the nodes are located, except for nodes 0 to 7 which coincide with the vertices of the base Hexahedrn <8> topology. As a result, L_0 to L_7 are associated with nodes 0 to 7, but L_8 to L_19 are not associated with edges 0 to 12 in that order. Definition at line 124 of file Intrepid_HGRAD_HEX_C2_FEM.hpp. ## Member Function Documentation template<class Scalar , class ArrayScalar > void Intrepid::Basis_HGRAD_HEX_C2_FEM< Scalar, ArrayScalar >::getValues ( ArrayScalar & outputValues, const ArrayScalar & inputPoints, const EOperator operatorType ) const [inline, virtual] Evaluation of a FEM basis on a reference Hexahedron cell. Returns values of operatorType acting on FEM basis functions for a set of points in the reference Hexahedron cell. For rank and dimensions of I/O array arguments see Section MD array template arguments for basis methods. Parameters: outputValues [out] - rank-2 or 3 array with the computed basis values inputPoints [in] - rank-2 array with dimensions (P,D) containing reference points operatorType [in] - operator applied to basis functions Implements Intrepid::Basis< Scalar, ArrayScalar >. Definition at line 104 of file Intrepid_HGRAD_HEX_C2_FEMDef.hpp. The documentation for this class was generated from the following files: Generated on Tue Oct 20 15:10:08 2009 for Intrepid by  1.6.1
2,449
9,166
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.515625
3
CC-MAIN-2014-15
longest
en
0.542832
http://www.physicsgre.com/viewtopic.php?f=19&t=2519
1,568,570,867,000,000,000
text/html
crawl-data/CC-MAIN-2019-39/segments/1568514572235.63/warc/CC-MAIN-20190915175150-20190915201150-00047.warc.gz
311,564,602
7,207
## help virial theorem betelgeuse1 Posts: 116 Joined: Sat May 09, 2009 10:14 am ### help virial theorem Can anyone give me a simple explanation for the virial theorem and an example. I understand it's something like <T>=-(1/2)U where T=total kinetic energy and U=total potential energy. Now I saw a solution on physicsGRE.net for 0177 problem 3 where somebody says U=3/2T. I may understand 3 as being something like the number of degrees of freedom for translational motion but I don't get how the 2 comes down instead of up. Thanks a lot blackcat007 Posts: 378 Joined: Wed Mar 26, 2008 9:14 am ### Re: help virial theorem why do you need virial thm to answer this question.. simply equate the centrifugal with the gravitational force (non inertial frame)? in physicsgre.net there are many redundant NEC's physics_auth Posts: 163 Joined: Sat Jul 18, 2009 7:24 pm ### Re: help virial theorem I will explain briefly, not providing a full proof. Consider a collection of particles whose position vectors r_a and momenta p_a are both bounded (i.e. they remain finite for all values of the time). The virial theorem is -in a more general case- an expression about the average kinetic energy of the system of the forementioned particles. Specifically it is: <T> = -(1/2) * <Σ F_a*r_a> (1) where T = total kinetic energy Σ = summation over a = the number of particles of the system F_a = the force on the a-th particle r_a = the position of the a-th particle where the right hand side was named by Clausius "virial". Notice also the factor -1/2 on the right hand side (RHS). If the forces on each particle are derivable from some potential U_a, then (1) transforms to the following form: <T> = (1/2) <Σ r_a * grad(U_a)> (2), since in this case F_a = - grad(U_a). Of particular interest is the case of 2 particles that interact via some central force-> F analogous to (r^n) , where n = some exponent. In this case the potential energy of interaction takes the form U = k*r^(n+1), where k = proportionality constant. Thus, for this particular case it is (r = relative position of the 2 particles, U = potential energy of interaction), in spherical coordinates (see the RHS side of (2)): r*grad(U) = r*dU/dr = k(n+1)*r^(n+1) = (n+1)*U (3), since U = k*r^(n+1). Combining (3) and (2) (for the case of 2 particles always) we find that <T> = (n+1)/2 * <U> (4). If the mutual interaction is gravitational then n = -2 (see the force dependence on r above, how it was defined) and from (4) it comes out that <T> = -(1/2) * <U> . This is a useful relation for calculations concerning the planetary motion. In the alternative answer of GR0177 #3, the argumentation is correct but the virial theorem does not seem to be employed correctly. In general, think carefully before reading the solutions given on this site. I have met several errors in the quoted answers. Think before you "digest" the quoted answer. Physics_auth betelgeuse1 Posts: 116 Joined: Sat May 09, 2009 10:14 am ### Re: help virial theorem Thank you physics_auth. I was scared that all I know suddenly became wrong...
834
3,082
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.65625
4
CC-MAIN-2019-39
latest
en
0.879277
http://blog.csdn.net/zhaomingliang/article/details/1781895
1,519,369,646,000,000,000
text/html
crawl-data/CC-MAIN-2018-09/segments/1518891814493.90/warc/CC-MAIN-20180223055326-20180223075326-00065.warc.gz
42,414,000
15,066
# 无聊时写过的一点代码,包括二叉树、二叉树模板、平衡二叉树模板,对某些人可能有用,呵呵 /* A binary tree * by cookie.chao@gmail.com * Oct 1, 2006 */ #include <iostream.h> #include <vector> #include <string> class BTnode { friend class BTree; public: BTnode(); BTnode(int &); BTnode operator=(BTnode &); ~BTnode(); private: int     _val; int     _cnt; BTnode  *_lchild; BTnode  *_rchild; }; inline BTnode::BTnode() { _val = 0; _cnt = 1; _lchild = 0; _rchild = 0; } inline BTnode::BTnode(int &s) { _val = s; _cnt = 1; _lchild = 0; _rchild = 0; } inline BTnode::~BTnode() { } BTnode BTnode::operator=(BTnode &rhs) { _val = rhs._val; _cnt = rhs._cnt; _lchild = rhs._lchild; _rchild = rhs._rchild; return rhs; } class BTree { friend class BTnode; public: void    add(int &); void    addtonode(BTnode *, int &); bool    rmove(int &); bool    pre_traversal(); bool    pre_traversal_node(BTnode *t); bool    mid_traversal(); bool    post_traversal(); bool    empty() { return _root == 0; } void    clear(); void    del_node(BTnode *); BTree(); BTree(int &); BTree(BTree &); ~BTree(); private: BTnode  *_root; }; BTree::BTree() { _root = 0; } BTree::BTree(int &s) { BTnode root(s); _root = &root; } BTree::~BTree() { clear(); } void BTree::addtonode(BTnode *node, int &s) { if( s == node->_val) { (node->_cnt)++; } else if(s > node->_val) { if(node->_rchild == 0) { BTnode *pn = new BTnode(s); node->_rchild = pn; } else { addtonode(node->_rchild, s); } } else { if(node->_lchild == 0) { BTnode *pn = new BTnode(s); node->_lchild = pn; } else { addtonode(node->_lchild, s); } } } void BTree::add(int &s) { if(_root == 0) { BTnode *pn = new BTnode(s); _root = pn; } else { addtonode(_root, s); } } bool BTree::pre_traversal() { pre_traversal_node(_root); return true; } bool BTree::pre_traversal_node(BTnode* root) { if(root == 0) { return false; } cout<<root->_val<<" "<<root->_cnt<<endl; pre_traversal_node(root->_lchild); pre_traversal_node(root->_rchild); return true; } void BTree::clear() { del_node(_root); return; } void BTree::del_node(BTnode *root) { if( (root->_lchild == 0) && (root->_rchild == 0) ) { delete root; return; } else { if(root->_lchild != 0) { del_node(root->_lchild); root->_lchild = 0; } if(root->_rchild != 0) { del_node(root->_rchild); root->_rchild = 0; } delete root; return; } } void main() { int s; BTree t; s = 2; t.add(s); s = 1; t.add(s); t.add(s); t.add(s); s = 2; t.add(s); s = 0; t.add(s); t.add(s); s = 3; t.add(s); s = 4; t.add(s); t.pre_traversal(); } /* A binary tree using template * by cookie.chao@gmail.com * Oct 5, 2006 */ #include <iostream> #include <vector> #include <string> //////////////////////////////////////////////////////////////////////// using namespace std; template<typename Type> class BTree; template<typename Type> class BTnode { friend class BTree<Type>; public: BTnode(); BTnode(Type &); BTnode operator=(BTnode &); ~BTnode(); private: Type     _val; int     _cnt; BTnode    *_lchild; BTnode    *_rchild; }; template<typename Type> inline BTnode<Type>::BTnode() { _val = 0; _cnt = 1; _lchild = 0; _rchild = 0; } template<typename Type> inline BTnode<Type>::BTnode(Type &s) { _val = s; _cnt = 1; _lchild = 0; _rchild = 0; } template<typename Type> inline BTnode<Type>::~BTnode() { } template<typename Type> BTnode<Type> BTnode<Type>::operator=(BTnode<Type> &rhs) { _val = rhs._val; _cnt = rhs._cnt; _lchild = rhs._lchild; _rchild = rhs._rchild; return rhs; } //////////////////////////////////////////////////////////////////////////////// template<typename Type> class BTree { friend class BTnode<Type>; public: void    add(Type &); void    addtonode(BTnode<Type> *, Type &); bool    rmove(Type &); bool    pre_traversal(); bool    mid_traversal(); bool    post_traversal(); bool    pre_traversal_node(BTnode<Type> *t); bool    mid_traversal_node(BTnode<Type> *t); bool    post_traversal_node(BTnode<Type> *t); bool    empty() { return _root == 0; } void    clear(); void    del_node(BTnode<Type> *); BTree(); BTree(Type &); BTree(BTree<Type> &); ~BTree(); private: BTnode<Type>  *_root; }; template<typename Type> BTree<Type>::BTree() { _root = 0; } template<typename Type> BTree<Type>::BTree(Type &s) { BTnode<Type> root(s); _root = &root; } template<typename Type> BTree<Type>::~BTree() { clear(); } template<typename Type> void BTree<Type>::addtonode(BTnode<Type> *node, Type &s) { if(s == node->_val) { (node->_cnt)++; } else if(s > node->_val) { if(node->_rchild == 0) { BTnode<Type> *pn = new BTnode<Type>(s); node->_rchild = pn; } else { addtonode(node->_rchild, s); } } else { if(node->_lchild == 0) { BTnode<Type> *pn = new BTnode<Type>(s); node->_lchild = pn; } else { addtonode(node->_lchild, s); } } } template<typename Type> void BTree<Type>::add(Type &s) { if(_root == 0) { BTnode<Type> *pn = new BTnode<Type>(s); _root = pn; } else { addtonode(_root, s); } } /////////////////////////////////////////////////////////////////////// //先序遍历 template<typename Type> bool BTree<Type>::pre_traversal() { pre_traversal_node(_root); return true; } template<typename Type> bool BTree<Type>::pre_traversal_node(BTnode<Type>* root) { if(root == 0) { return false; } cout<<root->_val<<" "<<root->_cnt<<endl; pre_traversal_node(root->_lchild); pre_traversal_node(root->_rchild); return true; } /////////////////////////////////////////////////////////////////////// //中序遍历 template<typename Type> bool BTree<Type>::mid_traversal() { mid_traversal_node(_root); return true; } template<typename Type> bool BTree<Type>::mid_traversal_node(BTnode<Type>* root) { if(root == 0) { return false; } mid_traversal_node(root->_lchild); cout<<root->_val<<" "<<root->_cnt<<endl; mid_traversal_node(root->_rchild); return true; } /////////////////////////////////////////////////////////////////////// //后序遍历 template<typename Type> bool BTree<Type>::post_traversal() { post_traversal_node(_root); return true; } template<typename Type> bool BTree<Type>::post_traversal_node(BTnode<Type>* root) { if(root == 0) { return false; } post_traversal_node(root->_lchild); post_traversal_node(root->_rchild); cout<<root->_val<<" "<<root->_cnt<<endl; return true; } /////////////////////////////////////////////////////////////////////// template<typename Type> void BTree<Type>::clear() { del_node(_root); return; } template<typename Type> void BTree<Type>::del_node(BTnode<Type> *root) { if( (root->_lchild == 0) && (root->_rchild == 0) ) { delete root; return; } else { if(root->_lchild != 0) { del_node(root->_lchild); root->_lchild = 0; } if(root->_rchild != 0) { del_node(root->_rchild); root->_rchild = 0; } return; } } //////////////////////////////////////////////////////////////////////////// void main() { int i; cout<<"BTree<int>/n"; BTree<int> t; i = 5; t.add(i); i = 3; t.add(i); i = 7; t.add(i); i = 1; t.add(i); i = 4; t.add(i); i = 6; t.add(i); i = 8; t.add(i); cout<<"pre/n"; t.pre_traversal(); cout<<"mid/n"; t.mid_traversal(); cout<<"post/n"; t.post_traversal(); cout<<"BTree<string>/n"; string s; BTree<string> st; s = "abbaa"; st.add(s); s = "bbbbb"; st.add(s); s = "abaaa"; st.add(s); s = "aaaaa"; st.add(s); st.add(s); st.pre_traversal(); cout<<"BTree<char>/n"; BTree<char> ct; char c; c = 'd'; ct.add(c); c = 'a'; ct.add(c); c = 'e'; ct.add(c); c = 'f'; ct.add(c); c = 'd'; ct.add(c); ct.pre_traversal(); } /* A balance binary tree using template * by cookie.chao@gmail.com * Oct 5, 2006 */ //尚未完成对值的删除的操作 //此时删除,如果该值所在的结点的_cnt变为0,则将会删除以该结点为根结点的子树 //并且,删除后没有进行旋转,可能破坏树的平衡性 #include <iostream> #include <vector> #include <string> using namespace std; //////////////////////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////////////////////// template<typename Type> class BBTree; template<typename Type> class BBTnode            //结点 { friend class BBTree<Type>; public: BBTnode(); BBTnode(const Type &); BBTnode operator=(const BBTnode &); ~BBTnode(); private: Type     _val;                    //结点的值 int     _cnt;                    //出现的次数 int        _bf;                //平衡度 BBTnode    *_lchild;                //左子树 BBTnode    *_rchild;                //右子树 }; template<typename Type> inline BBTnode<Type>::BBTnode() { _val = 0; _cnt = 1; _bf = 0; _lchild = 0; _rchild = 0; } template<typename Type> inline BBTnode<Type>::BBTnode(const Type &s) { _val = s; _cnt = 1; _bf = 0; _lchild = 0; _rchild = 0; } template<typename Type> inline BBTnode<Type>::~BBTnode() { } template<typename Type> BBTnode<Type> BBTnode<Type>::operator=(const BBTnode<Type> &rhs) { _val = rhs._val; _cnt = rhs._cnt; _bf = rhs._bf; _lchild = rhs._lchild; _rchild = rhs._rchild; return rhs; } //////////////////////////////////////////////////////////////////////////////// template<typename Type> class BBTree { friend class BBTnode<Type>; public: void    add(const Type &);                    //添加一个值 bool    remove(const Type &);                    //删除一个值 bool    pre_traversal();                    //先序遍历 bool    mid_traversal();                    //中序遍历 bool    post_traversal();                    //后序遍历 bool    empty() { return _root == 0; }                //是否为空 void    clear();                        //清空 int        get_depth(BBTnode<Type> *);            //取得树的深度 BBTnode<Type>*    r_rotate(BBTnode<Type> *);            //右旋 BBTnode<Type>*    l_rotate(BBTnode<Type> *);            //左旋 BBTree(); BBTree(const Type &); BBTree(const BBTree<Type> &); ~BBTree(); private: BBTnode<Type>  *_root; BBTnode<Type>*    addtonode(BBTnode<Type> *, const Type &); bool    pre_traversal_node(BBTnode<Type> *); bool    mid_traversal_node(BBTnode<Type> *); bool    post_traversal_node(BBTnode<Type> *); bool    remove_node(BBTnode<Type> *, const Type &); void    del_node(BBTnode<Type> *);                //删除子树 }; template<typename Type> int BBTree<Type>::get_depth(BBTnode<Type> *root) { if(root == 0) return 0; int depth, l_depth, r_depth; if((root->_lchild == 0) && (root->_rchild == 0)) return 1; l_depth = get_depth(root->_lchild); r_depth = get_depth(root->_rchild); depth = l_depth > r_depth ? l_depth:r_depth; ++depth; return depth; } template<typename Type> BBTree<Type>::BBTree() { _root = 0; } template<typename Type> BBTree<Type>::BBTree(const Type &s) { BBTnode<Type> root(s); _root = &root; } template<typename Type> BBTree<Type>::~BBTree() { clear(); } template<typename Type> void BBTree<Type>::add(const Type &s) { if(_root == 0) { BBTnode<Type> *pn = new BBTnode<Type>(s); _root = pn; } else { _root = addtonode(_root, s); } } template<typename Type> bool BBTree<Type>::remove(const Type &val) { if(_root == 0) { cout<<"Can't remove element because there doesn't exit such an element/n"; return false; } return remove_node(_root, val); } template<typename Type> bool BBTree<Type>::remove_node(BBTnode<Type> *root, const Type &val) { if(root->_val == val) { --root->_cnt; if(root->_cnt == 0) { //delete root; //root = 0; return true; } return false; } else if(root->_val < val) { if(root->_rchild == 0) { cout<<"Can't remove element because there doesn't exit such an element/n"; return false; } else if(remove_node(root->_rchild, val)) { if(root->_rchild->_cnt == 0) { del_node(root->_rchild); root->_rchild = 0; } root->_bf = get_depth(root->_lchild) - get_depth(root->_rchild); return true; } else { return false; } } else { if(root->_lchild == 0) { cout<<"Can't remove element because there doesn't exit such an element/n"; return false; } else if(remove_node(root->_lchild, val)) { if(root->_lchild->_cnt == 0) { del_node(root->_lchild); root->_lchild = 0; } root->_bf = get_depth(root->_lchild) - get_depth(root->_rchild); return true; } else { return false; } } } template<typename Type> BBTnode<Type>* BBTree<Type>::addtonode(BBTnode<Type> *node, const Type &s) { if(s == node->_val) { (node->_cnt)++; return node; } else if(s > node->_val) { if(node->_rchild == 0) { BBTnode<Type> *pn = new BBTnode<Type>(s); node->_rchild = pn; --node->_bf; } else { node->_rchild = addtonode(node->_rchild, s); node->_bf = get_depth(node->_lchild) - get_depth(node->_rchild); if(node->_bf == -2) { if(node->_rchild->_bf == -1) { node = l_rotate(node); node->_bf = get_depth(node->_lchild) - get_depth(node->_rchild); node->_lchild->_bf = get_depth(node->_lchild->_lchild) - get_depth(node->_lchild->_rchild); } if(node->_rchild->_bf == 1) { node->_rchild = r_rotate(node->_rchild); node = l_rotate(node); node->_bf = get_depth(node->_lchild) - get_depth(node->_rchild); node->_lchild->_bf = get_depth(node->_lchild->_lchild) - get_depth(node->_lchild->_rchild); node->_rchild->_bf = get_depth(node->_rchild->_lchild) - get_depth(node->_rchild->_rchild); } } } } else { if(node->_lchild == 0) { BBTnode<Type> *pn = new BBTnode<Type>(s); node->_lchild = pn; ++node->_bf; } else { node->_lchild = addtonode(node->_lchild, s); node->_bf = get_depth(node->_lchild) - get_depth(node->_rchild); if(node->_bf == 2) { if(node->_lchild->_bf == 1) { node = r_rotate(node); node->_bf = get_depth(node->_lchild) - get_depth(node->_rchild); node->_rchild->_bf = get_depth(node->_rchild->_lchild) - get_depth(node->_rchild->_rchild); } if(node->_lchild->_bf == -1) { node->_lchild = l_rotate(node->_lchild); node = r_rotate(node); node->_bf = get_depth(node->_lchild) - get_depth(node->_rchild); node->_lchild->_bf = get_depth(node->_lchild->_lchild) - get_depth(node->_lchild->_rchild); node->_rchild->_bf = get_depth(node->_rchild->_lchild) - get_depth(node->_rchild->_rchild); } } } } return node; } template<typename Type> BBTnode<Type>* BBTree<Type>::r_rotate(BBTnode<Type> *node) { cout<<"r_rotate "<<node->_val<<" with "<<node->_lchild->_val<<endl; BBTnode<Type> *temp; temp = node->_lchild; node->_lchild = node->_lchild->_rchild; temp->_rchild = node; return temp; } template<typename Type> BBTnode<Type>* BBTree<Type>::l_rotate(BBTnode<Type> *node) { cout<<"l_rotate "<<node->_val<<" with "<<node->_rchild->_val<<endl; BBTnode<Type> *temp; temp = node->_rchild; node->_rchild = node->_rchild->_lchild; temp->_lchild = node; return temp; } /////////////////////////////////////////////////////////////////////// //先序遍历 template<typename Type> bool BBTree<Type>::pre_traversal() { pre_traversal_node(_root); return true; } template<typename Type> bool BBTree<Type>::pre_traversal_node(BBTnode<Type>* root) { if(root == 0) { return false; } cout<<"val:"<<root->_val<<" cnt:"<<root->_cnt<<" bf:"<<root->_bf<<endl; pre_traversal_node(root->_lchild); pre_traversal_node(root->_rchild); return true; } /////////////////////////////////////////////////////////////////////// //中序遍历 template<typename Type> bool BBTree<Type>::mid_traversal() { mid_traversal_node(_root); return true; } template<typename Type> bool BBTree<Type>::mid_traversal_node(BBTnode<Type>* root) { if(root == 0) { return false; } mid_traversal_node(root->_lchild); cout<<"val:"<<root->_val<<" cnt:"<<root->_cnt<<" bf:"<<root->_bf<<endl; mid_traversal_node(root->_rchild); return true; } /////////////////////////////////////////////////////////////////////// //后序遍历 template<typename Type> bool BBTree<Type>::post_traversal() { post_traversal_node(_root); return true; } template<typename Type> bool BBTree<Type>::post_traversal_node(BBTnode<Type>* root) { if(root == 0) { return false; } post_traversal_node(root->_lchild); post_traversal_node(root->_rchild); cout<<"val:"<<root->_val<<" cnt:"<<root->_cnt<<" bf:"<<root->_bf<<endl; return true; } /////////////////////////////////////////////////////////////////////// template<typename Type> void BBTree<Type>::clear() { del_node(_root); return; } template<typename Type> void BBTree<Type>::del_node(BBTnode<Type> *root) { if( (root->_lchild == 0) && (root->_rchild == 0) ) { delete root; return; } else { if(root->_lchild != 0) { del_node(root->_lchild); root->_lchild = 0; } if(root->_rchild != 0) { del_node(root->_rchild); root->_rchild = 0; } return; } } //////////////////////////////////////////////////////////////////////////// void main() { cout<<"BBTree<int>/n"; BBTree<int> t; t.add(5); t.add(7); t.add(6); t.add(9); t.add(8); t.add(4); t.add(3); t.add(2); t.add(1); t.add(1); t.add(1); t.remove(1); t.remove(1); t.pre_traversal(); cout<<"mid/n"; t.mid_traversal(); cout<<"post/n"; t.post_traversal(); cout<<"BBT<string>/n"; BBTree<string> st; st.add("ddddddd"); st.add("bbbbbbb"); st.add("fffffff"); string s; s = "eeeeeee"; st.add(s); st.pre_traversal(); } • 本文已收录于以下专栏: ## 平衡二叉树SBT模板 • u010660276 • 2015年02月10日 16:38 • 589 ## 二叉树的C++模板类头文件源代码实现 • Alex123980 • 2016年06月01日 17:14 • 1942 ## 二叉树c++模板实现 • zhengzhongwu • 2017年01月23日 15:01 • 710 ## 二叉树的模板类实现 • Avalon_Y • 2016年03月20日 21:25 • 1496 ## 平衡二叉树转换 • u013076044 • 2014年12月17日 12:32 • 1213 ## 【算法】判断一颗二叉树是否是平衡二叉树 1.问题描述:   判断一颗二叉树是否是平衡二叉树。 2.问题分析:   平衡二叉树要求左子树和右子树的高度相差为1,且左右子树都是平衡二叉树,显然需要计算二叉树高度的函数。 3.代码: templa... • forestLight • 2011年06月29日 17:31 • 11133 ## 【2】输入一颗二叉树判断是不是平衡二叉树 • cgl1079743846 • 2014年05月23日 21:08 • 2426 ## 题目:输入一棵二叉树的根结点,判断该树是不是平衡二叉树 • yanxiaolx • 2016年08月23日 17:05 • 1563 ## 输入一棵二叉树,判断该二叉树是否是平衡二叉树。 • wukong412 • 2015年06月11日 17:51 • 641 ## 剑指offer:判断二叉树是不是平衡二叉树(java) • abc7845129630 • 2016年10月06日 18:48 • 3667 举报原因: 您举报文章:无聊时写过的一点代码,包括二叉树、二叉树模板、平衡二叉树模板,对某些人可能有用,呵呵 色情 政治 抄袭 广告 招聘 骂人 其他 (最多只允许输入30个字)
5,509
17,132
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.546875
3
CC-MAIN-2018-09
latest
en
0.105359
https://justaaa.com/advanced-math/627527-consider-the-experiment-of-rolling-two-standard
1,723,354,248,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722640975657.64/warc/CC-MAIN-20240811044203-20240811074203-00813.warc.gz
266,376,293
9,723
Question # Consider the experiment of rolling two standard (six-sided) dice and taking their sum. Assume that each... Consider the experiment of rolling two standard (six-sided) dice and taking their sum. Assume that each die lands on each of its faces equally often. We consider the outcomes of this experiment to be the ordered pairs of numbers on the dice, and the events of interest to be the different sums. Write out the generating function F(x) for the sums of the dice, and show how it factors into the generating functions for the individual die rolls. Use F(x) to find another pair of dice, not identical to each other, that give the same probabilities of their sum as normal dice. (Hint: how else can you factor F(x)?) Solution Given that #### Earn Coins Coins can be redeemed for fabulous gifts.
172
815
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.96875
3
CC-MAIN-2024-33
latest
en
0.913388
https://sisypheannews.com/on-line-casino-online-wagering-method-optimistic-development-method/
1,611,536,016,000,000,000
text/html
crawl-data/CC-MAIN-2021-04/segments/1610703561996.72/warc/CC-MAIN-20210124235054-20210125025054-00187.warc.gz
552,622,076
8,139
# On line casino Online Wagering Method – Optimistic Development Method If you discuss about the casino on the internet betting program, you will discover there are a lot of men and women who will discourage you. They will say that betting on the internet genuinely is not a good resource to make funds. But I will say that it is extremely straightforward to make from online casino video games, if you know the on line casino online betting approaches. Truly income administration information is what most of the gamblers lack. Therefore some are presently bankrupt while some are taking pleasure in an affluent life. Did anyone of you hear about “Positive Progression Method”, this is one particular of the extremely properly recognized casino online betting approach. You can say this is a logic that tells you the opportunities of profitable four moments in a single row. At the starting or just at the preliminary stage the bet is of one device, the next guess is of three models, the 3rd guess is of two units and the fourth wager is of six models. As a result it is also called the one-3-two-six technique. I will illustrate this casino on the web betting technique in detail, to give you a very clear comprehension. For instance you spot your initial wager of \$ten. The 2nd bet is supposed to be \$thirty – when you win the initial guess, your \$ten will get additional up with the \$20 already placed on the table. The whole will come to \$30. So the second guess you area would be of \$30. The grand whole prior to you engage in the third bet will be of \$60 overall (the \$thirty bet put by you in the next guess mixed together with the next guess profitable presently placed on the desk). From the \$60 you consider away \$forty and the 3rd guess is of \$twenty. Your 3rd bet will be of \$20 and after successful the third wager you will earn \$40. Now, for 메리트카지노 사이트 will insert \$twenty a lot more to the overall \$forty to make it a \$60 guess for the forth wager you spot. Winning the fourth guess you will be left with \$one hundred twenty. This is the internet profit you make from this casino online betting technique. To proceed the match you will again spot a bet of \$ten and adhere to the “Optimistic Development System” once again. Soon after ending the forth bet, you start off more than yet again. Moreover, each and every time you unfastened a guess, commence yet again with preliminary \$ten wager.
525
2,434
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.96875
3
CC-MAIN-2021-04
latest
en
0.950626
https://stat.ethz.ch/pipermail/r-help/2015-September/432166.html
1,580,318,692,000,000,000
text/html
crawl-data/CC-MAIN-2020-05/segments/1579251801423.98/warc/CC-MAIN-20200129164403-20200129193403-00138.warc.gz
653,744,529
2,749
Sarah Goslee sarah.goslee at gmail.com Mon Sep 14 19:06:42 CEST 2015 ```On Mon, Sep 14, 2015 at 11:11 AM, JORGE COLACO <j_colaco at utad.pt> wrote: > I would greatly appreciate if you could let me know why the R does not make > the right computations in the case below. > Jorge Colaço R made the correct computations: it did exactly what you told it. It isn't R's fault that what you told it isn't what you meant. You want to subtract the column means from each column; what you actual told R was to subtract Xmean from X element by element column-wise, recycling Xmean as necessary. Here's what you meant: X<-matrix(c(-1,0,1,-1,1,0, 0,0,1,1,1,-1, -1,0,-1,1,0,1, 1,1,-1,-1,0,0, 0,0,1,1,-1,1),nrow=5,ncol=6,byrow=T) Xmean <- colMeans(X) sweep(X, 2, Xmean, "-") Thank you for providing a simple reproducible example and clear idea straightforward. > > R version 3.2.2 (2015-08-14) -- "Fire Safety" > Copyright (C) 2015 The R Foundation for Statistical Computing > Platform: i386-w64-mingw32/i386 (32-bit) > > R is free software and comes with ABSOLUTELY NO WARRANTY. > You are welcome to redistribute it under certain conditions. > Type 'license()' or 'licence()' for distribution details. > > R is a collaborative project with many contributors. > 'citation()' on how to cite R or R packages in publications. > > Type 'demo()' for some demos, 'help()' for on-line help, or > 'help.start()' for an HTML browser interface to help. > Type 'q()' to quit R. >> X<-matrix(c(-1,0,1,-1,1,0, > + 0,0,1,1,1,-1, > + -1,0,-1,1,0,1, > + 1,1,-1,-1,0,0, > + 0,0,1,1,-1,1),nrow=5,ncol=6,byrow=T) > >> > mean<-c(mean(X[,1]),mean(X[,2]),mean(X[,3]),mean(X[,4]),mean(X[,5]),mean(X[,6])) >> mean > [1] -0.2 0.2 0.2 0.2 0.2 0.2 >> X-mean > [,1] [,2] [,3] [,4] [,5] [,6] > [1,] -0.8 -0.2 0.8 -1.2 0.8 -0.2 > [2,] -0.2 0.2 0.8 0.8 0.8 -1.2 > [3,] -1.2 -0.2 -0.8 0.8 -0.2 0.8 > [4,] 0.8 0.8 -1.2 -0.8 -0.2 -0.2 > [5,] -0.2 -0.2 0.8 0.8 -0.8 0.8 >> > > Right Result Should Be: > > ans = > -0.80000 -0.20000 0.80000 -1.20000 0.80000 -0.20000 > 0.20000 -0.20000 0.80000 0.80000 0.80000 -1.20000 > -0.80000 -0.20000 -1.20000 0.80000 -0.20000 0.80000 > 1.20000 0.80000 -1.20000 -1.20000 -0.20000 -0.20000 > 0.20000 -0.20000 0.80000 0.80000 -1.20000 0.80000 > > [[alternative HTML version deleted]] > > ______________________________________________ > R-help at r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help
1,052
2,524
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.625
3
CC-MAIN-2020-05
longest
en
0.769252
http://www.mscs.dal.ca/~selinger/quipper/doc/Algorithms-TF-Simulate.html
1,511,012,578,000,000,000
text/html
crawl-data/CC-MAIN-2017-47/segments/1510934804965.9/warc/CC-MAIN-20171118132741-20171118152741-00337.warc.gz
449,490,593
3,754
The Quipper System Algorithms.TF.Simulate Description This module contains functions for simulating and debugging the Triangle Finding Oracle and its subroutines. Synopsis # Native and simulated arithmetic functions For each arithmetic routine implemented in the Triangle Finding Oracle, we give two parallel implementations: one using Haskell’s arithmetic, and one by simulating the circuit execution. These can then be cross-checked against each other for correctness. Increment an m-bit Quipper integer (mod 2m). Native Haskell. Increment an m-bit Quipper integer (mod 2m). Simulated from increment. Increment an m-bit Triangle Finding integer (mod 2m–1). Native Haskell. Increment an m-bit TF integer (mod 2m–1). Simulated from increment_TF. Double an m-bit TF integer (mod 2m–1). Native Haskell. Double an m-bit TF integer (mod 2m–1). Simulated from double_TF. Add two IntTFs. Native Haskell. Add two IntTFs. Simulated from o7_ADD. Multiply two IntTFs. Native Haskell. Multiply two IntTFs. Simulated from o8_MUL. Raise an IntTF to the 17th power. Native Haskell. Raise an IntTF to the 17th power. Simulated from o4_POW17. Compute the reduction, mod 3, of lower-order bits of an IntTF. Native Haskell. Compute the reduction, mod 3, of lower-order bits of an IntTF. Simulated from o5_MOD3. Compute the reduction, mod 3, of lower-order bits of an IntTF. Simulated from o5_MOD3_alt. # Native and simulated oracle functions oracle_haskell :: Int -> [Bool] -> [Bool] -> Bool Source # Oracle: compute the edge information between two nodes. Native Haskell. oracle_simulate :: Int -> [Bool] -> [Bool] -> Bool Source # Oracle: compute the edge information between two nodes. Simulated from o1_ORACLE. oracle_aux_haskell :: Int -> [Bool] -> [Bool] -> (([Bool], [Bool]), (IntTF, IntTF, IntTF, IntTF, IntTF, IntTF), (Bool, Bool, Bool, Bool, Bool, Bool, Bool)) Source # oracle_aux_simulate :: Int -> [Bool] -> [Bool] -> (([Bool], [Bool]), (IntTF, IntTF, IntTF, IntTF, IntTF, IntTF), (Bool, Bool, Bool, Bool, Bool, Bool, Bool)) Source # Oracle auxiliary information. Simulated from o1_ORACLE_aux. show_oracle_details :: Show a => (([Bool], [Bool]), (a, a, a, a, a, a), (Bool, Bool, Bool, Bool, Bool, Bool, Bool)) -> String Source # A specialized show for oracle auxiliary data. convertNode_haskell :: Int -> [Bool] -> IntTF Source # Conversion of a node to an integer. Native Haskell. convertNode_simulate :: Int -> [Bool] -> IntTF Source # Conversion of a node to an integer. Simulated from o2_ConvertNode. # Testing functions Various small test suites, checking the simulated circuit arithmetic functions against their Haskell equivalents. Give full table of values for increment functions. Give full table of values for the increment_TF functions. Give full table of values for the double_TF functions. addTF_table :: Int -> [String] Source # Give full table of values for the TF addition (o7_ADD) functions. multTF_table :: Int -> [String] Source # Give full table of values for the TF multiplication (o8_MUL) functions. pow17_table :: Int -> [String] Source # Give full table of values for the pow17 functions. mod3_table :: Int -> [String] Source # Give full table of values for the mod3 functions. oracle_table :: Int -> Int -> [String] Source # Give full table of values for the oracle. Give a full table of values for o1_ORACLE_aux. convertNode_table :: Int -> Int -> [String] Source # Give full table of values for the ConvertNode functions. A compilation of the various tests above, to be called by Main. oracle_tests :: Int -> Int -> IO () Source # A suite of tests for the oracle, to be called by Main.
913
3,663
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.734375
3
CC-MAIN-2017-47
latest
en
0.758567
http://newboatbuilders.com/pages/load.html
1,490,884,323,000,000,000
text/html
crawl-data/CC-MAIN-2017-13/segments/1490218194601.22/warc/CC-MAIN-20170322212954-00475-ip-10-233-31-227.ec2.internal.warc.gz
237,163,998
8,873
Disclaimer:   I am not a spokesperson for the US Coast Guard or ABYC. For an official interpretation of regulations or standards you must contact the US Coast Guard or other organization referenced..   More..... ### How do I determine how many people my boats can carry, how much weight? I developed an E-course in Capacity and Flotation for Boat Builders, for Professional Boat Builder Magazine. They no longer offer it. I am offering it as an e-book. Go to Ike's Store to see more The Maximum safe load and the persons capacity that a boat can carry is based on the displacement weight of a boat. What is displacement weight? That is essentially the amount of weight that it would take to sink your boat. There are several ways to find this out. 1. You can calculate the volume of water displaced (hence displacement) when the boat is sunk to the point where water starts to come in, also called the static float plane. Multiply this volume times 62.4. (The weight of one cubic foot of fresh water.): or, (Click image for full size.) 2. Or put weight in the boat until water starts to come in. This sounds simple but is difficult for the average boat builder because it requires a lot of weight. The amount of weight can be considerable. On a larger boat it can be 10,000 lb. or more. The amount of weight it takes to sink the boat is the displacement weight. If you want to do it this way hire a test lab or call the Coast Guard. See the flotation page for information about testing. 3. Or with smaller boats fill the boat with water using a bucket of a known amount. Fill it until the level of the water in the boat and outside the boat is equal, that is when water starts to flow in and out of the boat. Multiply the number of gallons times 8. That gives you the displacement weight. Inboard and Stern-Drive Boats under 20 feet. Maximum Weight Capacity=(displacement weight-boat weight-4(machinery weight))/5 = W or: Maximum Weight Capacity = (Maximum Displacement - boat weight)/7 = W Maximum Persons Capacity = W; or for boats with W less than 550 use the test method. Then measure the following on your boat: (Click image for full size) Cockpit area. 40% reference area. Passenger area 70% reference areas. Two foot reference areas fore and aft. Test Method: float the boat in calm water with all the normal gear aboard, that is, engines, batteries, controls, etc. Add weights along the outboard side of the passenger area . The weight should be at seat height and distributed equally for and aft. Add weights until water is about to come over the gunwale. Stop. Add up the weight. Maximum Persons Capacity = Total of Weights/0.6 Maximum Persons (in people) = (Maximum Persons Weight + 32)/141. Round up or down. A bit of advice. Maximum persons capacity doesn't have to be the maximum amount, it can be less. Smart boat builders down rate the maximum weight and persons capacity to cover liability and all that other heavy junk people carry on to their boat like coolers full of beer, extra gas, the spare fish finder they just have to have, the ski-boards and slalom skis, etc., etc. REMEMBER! The boat operator will exceed whatever you put on the label and then blame you if something goes wrong! Be conservative. Give yourself some room. Outboard Boats under 20 feet rated for greater than 2 HP. Maximum Weight Capacity = W- boat weight /5 Maximum Persons Capacity = Maximum Weight Capacity - Col 6 Table 4 Outboard boats under 20 feet, 2 HP or less. Maximum Weight Capacity = (W- boat weight) x 3/10 Maximum Persons Capacity = (Maximum Weight Capacity - 25) x  0.90 Maximum Persons (in people) = (Maximum Persons Weight + 32)/141   Round up or down. Manually propelled boats. Maximum Weight Capacity = (W- boat weight) x 3/10 Maximum Persons Capacity = Maximum Weight Capacity x 0.90 Maximum Persons (in people) = (Maximum Persons Weight + 32)/141  Round up or down. The same warning applies to Inboard boats. Be conservative. Error on the side of safety.
913
3,999
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.703125
4
CC-MAIN-2017-13
longest
en
0.932679
http://elibrary.matf.bg.ac.rs/handle/123456789/4255?show=full
1,621,007,224,000,000,000
text/html
crawl-data/CC-MAIN-2021-21/segments/1620243991428.43/warc/CC-MAIN-20210514152803-20210514182803-00581.warc.gz
19,915,776
7,443
# Konvejeva notacija u teoriji čvorova i njena primena u metodima za određivanje rastojanja čvorova dc.contributor.advisor Tošić, Dušan dc.contributor.author Zeković, Ana dc.date.accessioned 2016-08-08T07:31:17Z dc.date.available 2016-08-08T07:31:17Z dc.date.issued 2015 dc.identifier.uri http://hdl.handle.net/123456789/4255 dc.description.abstract A main focus of the paper is construction of new methods for en_US defining diverse knot distance types - the distance of knots made by crossing changes (Gordian distance) and the distance among knots made by crossing smoothing (smoothing distance). Different ways of knots presentation are introduced, with objective to a mirror curve model. It is presented a purpose of the model, coding of knots, by using the model preferences, as well as introduction of a method to determinate a knots presented by the model and derived all the knots that could be placed to a nets dimensions p×q (p ≤ 4, q ≤ 4). Diverse knot notations are described into details, with a focus to Conway’s notation and its topological characteristics. As it is known, a present algorithms are based on an algebra of chain fractions, that are in close relation with a presentation of rational knots, which results in an absence of a huge number of non-rational knots, in an existing Gordian’s distance tables. The subject of the paper is an implementation of methods with bases on determination of new distances equal 1. The methods are based on a non-minimal presentation of rational and non-rational knots, generation of algorithms established on geometrical characteristics of Conway’s notation and a weighted graph search. The results are organized into Gordian’s distance knots tables up to 9 crossings, and have been enclosed with the paper. In order to append the table with knots having a bigger number of crossings, it has been suggested a method for extension of results for knot families. Using facts of relation among Gordian’s numbers and smoothing numbers, a new method for smoothing number determination is presented, and results in a form of lists for knots not having more then 11 crossings. In conjunction with Conway’s notation concept and the method, algorithms for a smoothing distance are generated. New results are organized in knot tables, up to 9 crossings, combined with previous results, and enclosed with the paper. A changes and smoothing to a knot crossing could be applied for modeling topoisomerase and recombinase actions of DNA chains. It is presented the method for studying changes introduced by the enzymes. A main contribution to the paper is the concept of Conways notation, used for all relevant results and methods, which led to introduction of a method for derivation a new knots in Conways notation by extending C-links. In a lack of an adequat pattern for an existing knot tables in DT-notation, there is usage of a structure based on topological knot concepts. It is proposed a method for knot classification based on Conways notation, tables of all knots with 13 crossings and alternated knots with 14 crossings has been generated and enclosed. The subject of the paper takes into consideration Bernhard-Jablan’s hypothesis for a determination of unknotting number using minimal knot diagrams. The determination is crucial in computation of diverse knot distances. The paper covers one of main problems in knot theory and contains a new method of knot minimization. The method is based on relevance of local and global minimization. 5 There are defined new terms such as a maximum and a mixed unknotting number. The knots that do not change a minimum crossing number, after only one crossing change are taken into consideration for the analyzes. Three classes of the knots are recognized, and called by authors . Kauffman’s knots, Zekovic knots and Taniyama’s knots. The most interesting conclusion correlated with Zekovic knots is that all derived Perko’s knots (for n ≤ 13 crossings) are actually Zekovic knots. Defining this class of knots provides opportunity to emphasize new definitions of specifis featured for well-known Perko’s knots. dc.description.provenance Submitted by Slavisha Milisavljevic (slavisha) on 2016-08-08T07:31:17Z No. of bitstreams: 1 phdZekovicAna.pdf: 5246306 bytes, checksum: af16c7c794a085a41131317ac46a149e (MD5) en dc.description.provenance Made available in DSpace on 2016-08-08T07:31:17Z (GMT). No. of bitstreams: 1 phdZekovicAna.pdf: 5246306 bytes, checksum: af16c7c794a085a41131317ac46a149e (MD5) Previous issue date: 2015 en dc.language.iso sr en_US dc.publisher Beograd en_US dc.title Konvejeva notacija u teoriji čvorova i njena primena u metodima za određivanje rastojanja čvorova en_US mf.author.birth-date 1982-03-11 mf.author.birth-place Beograd en_US mf.author.birth-country Srbija en_US mf.author.residence-state Srbija en_US mf.author.citizenship Srpsko en_US mf.author.nationality Srpkinja en_US mf.subject.area Računarstvo en_US mf.subject.keywords Conway notation, knot distance, unknotting number, knot minimization, Perko pair knots en_US mf.contributor.committee Jablan, Slavik mf.contributor.committee Rakić, Zoran mf.contributor.committee Filipović, Vladimir mf.university.faculty Mathematical Faculty en_US mf.document.references 143 en_US mf.document.pages 182 en_US mf.document.genealogy-project No en_US mf.university Belgrade University en_US ## Files in this item Files Size Format View phdZekovicAna.pdf 5.246Mb PDF View/Open
1,266
5,458
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.65625
3
CC-MAIN-2021-21
latest
en
0.906065
https://www.coursehero.com/file/6048674/ALRChap10/
1,516,119,471,000,000,000
text/html
crawl-data/CC-MAIN-2018-05/segments/1516084886437.0/warc/CC-MAIN-20180116144951-20180116164951-00514.warc.gz
905,922,550
26,466
# ALRChap10 - Chapter 10 Variable Selection Multicollinearity... This preview shows pages 1–3. Sign up to view the full content. Chapter 10 Variable Selection Multicollinearity A set of terms, X 1 ,X 2 , . . .,X p are approximately collinear if, for constants c 0 , c 1 , . . ., c p , which is similar to a linear regression mean function with intercept c 0 /c j and slopes - c l /c j . Diagnostic method: Step1: Regress X j on the other X ’s Step2: Calculate R 2 , which we will call R 2 j If the largest R 2 j is near 1, we would diagnose approximate collinearity. VIF When p > 2, the variance of the j -th coefficient is This preview has intentionally blurred sections. Sign up to view the full version. View Full Document /( 1 - R 2 j ) is called the j th variance inflation factor , or VIF j Variable Selection Principle of Parsimony (Occam’s razor): Choose fewer variables with sufficient explanatory power. This is a desirable modeling strategy. The goal of variable selection is thus to identify the smallest subset of covariates that provides good fit. One way of achieving this is to retain the significant predictors in the fitted multiple regression. This may not work well if some variables are strongly correlated among themselves or if there are too many variables (e.g., exceeding the sample size). This is the end of the preview. Sign up to access the rest of the document. {[ snackBarMessage ]} ### Page1 / 7 ALRChap10 - Chapter 10 Variable Selection Multicollinearity... This preview shows document pages 1 - 3. Sign up to view the full document. View Full Document Ask a homework question - tutors are online
397
1,637
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.09375
3
CC-MAIN-2018-05
latest
en
0.849395
http://www.thescienceforum.com/astronomy-cosmology/27810-what-area-sky-earth-print.html
1,669,674,130,000,000,000
text/html
crawl-data/CC-MAIN-2022-49/segments/1669446710662.60/warc/CC-MAIN-20221128203656-20221128233656-00741.warc.gz
91,124,761
2,339
# What 'area' is the sky of the earth? • April 2nd, 2012, 06:08 PM Quantime What 'area' is the sky of the earth? What is the total 'area' of the sky we can see from earth? Not sure how to phrase this but what would be the value of space that the sky equals, if say the moon was 1cm in diameter? (using that as just an example to try to explain what I mean) Say for instance at one point on earth as far as the horizon all around (assuming the earth has no hills or dips), what would be the area of space out of all space visible across the sky that one could see. Then what is then the total 'area' of the sky we can see? I wanted to then be able to map exactly what percentage of the sky around the earth that the moon covers at average distance. Not sure if I made that seem clear? • April 2nd, 2012, 06:23 PM mathman I think it would be much clearer if you looked at in terms of solid angle rather than area. The sky that is visible is about 1/2 the total, so the solid angle would be 2π steradians. The moon solid angle is 6.4236x10^-5 steradians (I got this from the second reference). Solid angle - Wikipedia, the free encyclopedia Lunar Calibration - Lunar Irradiance Model
307
1,186
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.203125
3
CC-MAIN-2022-49
latest
en
0.954641
itadviser.ro
1,582,824,329,000,000,000
text/html
crawl-data/CC-MAIN-2020-10/segments/1581875146744.74/warc/CC-MAIN-20200227160355-20200227190355-00257.warc.gz
404,587,458
14,916
50449B Useful Formulas & Functions (Microsoft Excel) 250€ This 1 day course provides students with the knowledge and skills to the usage of useful formulas and functions in Microsoft Excel 2007 and Microsoft Excel 2010. Audience profile This course is intended for users of Microsoft Office Excel who want to learn about useful formulas and functions. At course completion After completing this course, students will be able to: • Apply Formula and Functions Basic • Statistical and Logical Functions • Lookup and Reference Formulas • Text Formulas • Date and Time Formulas • Array and Database Functions • Efficiency Tips Continutul cursului poate fi adaptat in functie de necesitatile si/sau specificul cursantilor. Totodata, editia de curs poate fi aleasa de catre cursanti inainte de inscierea in cadrul grupei: 2010/2013/2016. La finalul fiecarui curs, fiecare persoana va primi o diploma ce atesta participarea in cadrul respectivei grupe de curs, Certificate of Achievement, elaborata de ITAdviser, Microsoft Certified Partner on Learning Solutions. Diploma are recunoastere atat la nivel national cat si international. Detalii curs • Durata 1 zi • Nivel 200 • Limba curs Engleza/Romana • Limba suport Engleza • Cursanti/grupa 10 • Examen • Certificare Course Outline Module 1: Making Data Work For You This module explains how to understand and apply Excel basic formulas and functions. Lessons Formula basics Using cell references Copy formula without changing cell reference Transpose formula Using nested functions After completing this module, students will be able to: Understand and apply formula basics Use cell references Copy formula without changing cell reference Transpose formula using paste special Use nested functions Module 2: Statistical and Logical Functions This module explains how to use logical functions including CountIf, Sumif, If, IsError. Lessons Perform calculation using CountIF Perform calculation using SumIF Perform calculation using AverageA Using IF function to prevent division by zero Using IsError function to avoid error display Creating multiple conditions using nested IF Using logical function OR, And After completing this module, students will be able to: Perform calculation using CountIf, SumIf, AverageA Use If function to prevent division by zero Use IsError function to avoid error display Create multiple conditions using nested IF Use logical function OR, AND Module 3: Lookup and Reference Formulas This module explains how to apply and use lookup formulas including vlookup, hlookup, match and index. Lessons Use Vlookup to find specific data Use Hlookup to find values in rows Use Match and Index to retrieve data After completing this module, students will be able to: Use Vlookup to find specific data Use Hlookup to find values in rows Use Match and Index to retrieve data Module 4: Text Formulas This module explains how to apply Text formula to help change casing of text, append text and numerical value in excel spreadsheet. Lessons Changing case of text Append text and numerical value Convert imported text format into numbers Break imported date field into individual columns After completing this module, students will be able to: Change case of text using Upper, Lower or Proper formula Append text and numerical value Convert imported text format into numbers Break imported date field into individual columns Module 5: Date and Time Formulas This module explains how to make use of calculate the difference of two given Date fields and to perform calculation with Time fields. Lessons Perform addition to Date fields Calculate difference between two Dates Perform calculations with Time fields After completing this module, students will be able to: Perform addition and calculate difference between two dates Perform calculations with Time fields Module 6: Array and Database Functions This module explains how to apply and use advance formula including Array, Frequency and Database functions. Lessons Using Array Formulas Calculate the difference between Maximum and Minimum values Using Frequency function to Count responses Using Database functions DSum and DCount After completing this module, students will be able to: Use Array Formulas Calculate the difference between Maximum and Minimum values in an Array Use Frequency function to Count responses in tabulated data Use Database functions DSum and DCount Module 7: Efficiency Tips This module discusses some useful Excel Tips including application of Data Validations and Auditing Tools. Lessons Shortening worksheets names Protecting cells containing formulas Using Data Validation Displaying Formula syntax Using Auditing Tools for errors checking Tracing precedent and dependent Adding comments to worksheet After completing this module, students will be able to: Understand the advantages of shortening worksheet names Protect cells from amendments by others Use Data validation to improve data entries Use Auditing Tools for checking errors Add useful notes by commenting worksheet 400€
973
5,040
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.90625
3
CC-MAIN-2020-10
latest
en
0.587448
https://mathemerize.com/in-an-equilateral-triangle-prove-that-three-times-the-square-of-one-side-is-equal-to-four-times-the-square-of-one-its-altitudes/
1,721,245,375,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514801.32/warc/CC-MAIN-20240717182340-20240717212340-00304.warc.gz
345,823,912
30,878
# In an equilateral triangle, prove that three times the square of one side is equal to four times the square of one its altitudes. ## Solution : Let ABC be and equilateral triangle and let AD $$\perp$$ BC. In $$\triangle$$ ADB and ADC, we have : AB = AC          (given) and $$\angle$$ ADB = $$\angle$$ ADB        (each 90) By RHS criteria of congruence, we have : $$\triangle$$  ADB $$\cong$$ $$\triangle$$ ADC So, BD = DC or  BD = DC = $$1\over 2$$ BC             ……..(1) Since $$\triangle$$  ADB is a right triangle, angled at D, by Pythagoras theorem, we have : $${AB}^2$$ = $${AD}^2$$ + $${BD}^2$$ $${AB}^2$$ = $${AD}^2$$ + $$({1\over 2}BC)^2$$        (from 1) $${AB}^2$$ = $${AD}^2$$ + $${1\over 4}{BC}^2$$ $${AB}^2$$ = $${AD}^2$$ + $${AB}^2\over 4$$            ($$\therefore$$  BC = AB) $${3\over 4}{AB}^2$$ = $${AD}^2$$  or  $$3{AB}^2$$ = $$4{AD}^2$$
345
873
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.53125
5
CC-MAIN-2024-30
latest
en
0.717287
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-2nd-edition/chapter-5-integration-5-3-fundamental-theorem-of-calculus-5-3-exercises-page-374/39
1,585,519,499,000,000,000
text/html
crawl-data/CC-MAIN-2020-16/segments/1585370496227.25/warc/CC-MAIN-20200329201741-20200329231741-00458.warc.gz
977,661,352
12,512
## Calculus: Early Transcendentals (2nd Edition) $$1$$ \eqalign{ & \int_0^{\pi /4} {{{\sec }^2}\theta } d\theta \cr & {\text{recall that }}\int {{{\sec }^2}\theta } d\theta = \tan \theta + C \cr & \int_0^{\pi /4} {{{\sec }^2}\theta } d\theta = \left. {\left( {\tan \theta } \right)} \right|_0^{\pi /4} \cr & {\text{Using The Fundamental Theorem}} \cr & = \left( {\tan \frac{\pi }{4}} \right) - \left( {\tan 0} \right) \cr & {\text{simplify}} \cr & = \left( 1 \right) - \left( 0 \right) \cr & = 1 \cr}
209
501
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.796875
4
CC-MAIN-2020-16
latest
en
0.439312
https://www.edaboard.com/group.php?gmid=8725&s=126406a2e5af7ec48f60866f97bb8e32&do=discuss
1,571,796,424,000,000,000
text/html
crawl-data/CC-MAIN-2019-43/segments/1570987828425.99/warc/CC-MAIN-20191023015841-20191023043341-00481.warc.gz
878,482,802
8,348
# 24 second shotclock timer for basketball using C compiler 1. clarence501 Hey Guys, I really need your professional help right now. I'm making this 24 second shotclock timer for my project. I have been able to make the source code for setting the 24 shot clock, using interrupt. But I'm stuck on how to do the timing. What I want to do is to set the shot clock either from 0-24 value and then when i switch RB4 == 1 and press the interrupt RB0 again, it will start counting. I don't know how to do that yet. Can you guys give me some professional Help? This is the ISIS circuit model: Here is my source code: #include <pic.h> int num = 0; void initialize(void); void timing(void); void do_outputs(void); void main(void) { initialize(); while(1) { do_outputs(); timing(); } } void initialize(void) { PORTC = 0x40; PORTD = 0x40; TRISC = 0x00; TRISD = 0x00; INTEDG = 0; GIE = 1; INTE = 1; } void timing(void) { } void do_outputs(void) { GIE = 0; if (INTF) { INTF = 0; if((RB2==1) && (RB4==0)) {num++;} if((RB2==0) && (RB4==0)) {num--;} { if(num==-1) { num=24; PORTC = 0x10; PORTD = 0x40; } if(num==0) { PORTC = 0x40; PORTD = 0x40; } if(num==1) { PORTC = 0x79; PORTD = 0x40; } if(num==2) { PORTC = 0x24; PORTD = 0x40; } if(num==3) { PORTC = 0x30; PORTD = 0x40; } if(num==4) { PORTC = 0x19; PORTD = 0x40; } if(num==5) { PORTC = 0x12; PORTD = 0x40; } if(num==6) { PORTC = 0x02; PORTD = 0x40; } if(num==7) { PORTC = 0x78; PORTD = 0x40; } if(num==8) { PORTC = 0x00; PORTD = 0x40; } if(num==9) { PORTC = 0x10; PORTD = 0x40; } if(num==10) { PORTC = 0x40; PORTD = 0x79; } if(num==11) { PORTC = 0x79; PORTD = 0x79; } if(num==12) { PORTC = 0x24; PORTD = 0x79; } if(num==13) { PORTC = 0x30; PORTD = 0x79; } if(num==14) { PORTC = 0x19; PORTD = 0x79; } if(num==15) { PORTC = 0x12; PORTD = 0x79; } if(num==16) { PORTC = 0x02; PORTD = 0x79; } if(num==17) { PORTC = 0x78; PORTD = 0x79; } if(num==18) { PORTC = 0x00; PORTD = 0x79; } if(num==19) { PORTC = 0x10; PORTD = 0x79; } if(num==20) { PORTC = 0x40; PORTD = 0x24; } if(num==21) { PORTC = 0x79; PORTD = 0x24; } if(num==22) { PORTC = 0x24; PORTD = 0x24; } if(num==23) { PORTC = 0x30; PORTD = 0x24; } if(num==24) { PORTC = 0x19; PORTD = 0x24; } if(num==25) { PORTC = 0x40; PORTD = 0x40; num = 0; } } GIE =1; } } 2. betwixt Can't help with ISIS, I've never used it but your schematic will not work. You must at least make these changes: 1. add current limiting resisors in each segment connection of both displays. 2. add decoupling capacitors across the all the supply pins 3. add clock components, the 877a does not have an internal clock generator 4. tie /MCLR high, preferably through a resistor. Also you may have to debounce the switch contacts, either in hardware or software. The software would be far smaller and easier to maintain if you put the segment values into an array, for example "digit_1[10] = {0x10,0x40,0x79...};" so you could set the displayed digit by using something like "digit_1[2]; digit_2[3];" to show "23". It also makes the calculation of which digit to display much simpler. You are also missing code to actually time the seconds, it looks as though you intend to use timer interrupts but the ISR is missing. Brian. Results 1 to 2 of 2
1,192
3,210
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.546875
3
CC-MAIN-2019-43
latest
en
0.457755
http://nataraz.in/diagram/magic-jack-wiring-diagram
1,606,415,674,000,000,000
text/html
crawl-data/CC-MAIN-2020-50/segments/1606141188899.42/warc/CC-MAIN-20201126171830-20201126201830-00612.warc.gz
65,528,871
8,793
# Magic Jack Wiring Diagram • Wiring Diagram • Date : November 26, 2020 ## Magic Jack Wiring Diagram Jack Downloads Magic Jack Wiring Diagram jackpot city casino jackboxtv jacket jackpot city jack dorsey jackson jacksepticeye jackpot bingo winnipeg jack nicklaus jackpot city casino login jack log in jack black Magic Jack Wiring DiagramThe Way to Make Venn Diagram in PowerPoint PowerPoint's Venn diagram has long been among the crucial features of any demonstration which makes use of it. By taking the opportunity to create a great Venn diagram, you can readily determine the numerous members of a certain group, and the association between them. A individual has to only select what types of objects are supposed to be utilized as parts of the circle. From that point, he or she is able to drag different objects to fill the entire area. Once you have decided which objects belong to every member of this group, you may use the resulting type of diagram to categorize every one of them. By way of instance, if the two objects that you used in the center of this circle are clothing products, then these will be the initial ones. The thing that belongs to the group must possess some similarity to the clothing item which you used to draw the center of the ring. If you draw a circle with two associates and several members inside it, then you can tell the team the center one is composed of people. To be able to acquire a better visualization of the ring, you may use a larger area to draw on the circle. Drawing the region larger will help you draw the ring on a two-dimensional surface. By using a round surface, you can earn a diagram that is a replica of the one you learned in high school geometry. The distinction is thatin the first case, you can use two different horizontal lines to describe the linking points. Then, you may add another horizontal line to join the 2 things you've drawn. This is the simplest approach to draw a Venn diagram, and it is also the most traditional manner. When employing this method, you may readily see the overlap of the areas that you draw. You can then start to use colors which can allow you to reveal the members of the group that they are associated with one another. You can do it by combining colors which have similar values, or combining colors which are the same. By using the methods and suggestions described previously can readily use a Venn diagram to categorize your own objects. You can easily make a diagram with just a couple of mouse clicks, and it's a wonderful way to quickly draw the right groupings that you need.
534
2,589
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.171875
3
CC-MAIN-2020-50
latest
en
0.932393
https://de.mathworks.com/matlabcentral/cody/problems/44345-matlab-counter/solutions/1303893
1,611,048,501,000,000,000
text/html
crawl-data/CC-MAIN-2021-04/segments/1610703518201.29/warc/CC-MAIN-20210119072933-20210119102933-00593.warc.gz
304,107,131
17,247
Cody # Problem 44345. MATLAB Counter Solution 1303893 Submitted on 20 Oct 2017 by Eric G. This solution is locked. To view this solution, you need to provide a solution of the same size or smaller. ### Test Suite Test Status Code Input and Output 1   Pass assessFunctionAbsence({'regexp','regexpi','regexprep','str2num'},'FileName','counter.m') 2   Pass f = counter(0,1); assert(isequal(f(),0)) assert(isequal(f(),1)) assert(isequal(2,f())) assert(isequal(3,f())) y = function_handle with value: @()bob(b) 3   Pass f = counter(1,0); assert(isequal(f(),1)) assert(isequal(f(),1)) assert(isequal(1,f())) assert(isequal(1,f())) y = function_handle with value: @()bob(b) 4   Pass f = counter(10,2); assert(isequal(f(),10)) assert(isequal(f(),12)) assert(isequal(14,f())) assert(isequal(16,f())) y = function_handle with value: @()bob(b) 5   Pass f = counter(0,5); y_correct = [0, 5, 10, 15, 20, 55]; assert(isequal([f() f() f() f() f() f()+f()],y_correct)) y = function_handle with value: @()bob(b) 6   Pass x0 = randi(10); b = randi(10); f = counter(x0,b); y_correct = x0 + (0:1000)*b; assert(isequal(arrayfun(@(n)f(),0:1000),y_correct)) y = function_handle with value: @()bob(b) ### Community Treasure Hunt Find the treasures in MATLAB Central and discover how the community can help you! Start Hunting!
421
1,319
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.28125
3
CC-MAIN-2021-04
latest
en
0.552805
https://docs.microsoft.com/en-us/archive/msdn-magazine/2012/may/test-run-dive-into-neural-networks
1,618,213,489,000,000,000
text/html
crawl-data/CC-MAIN-2021-17/segments/1618038066613.21/warc/CC-MAIN-20210412053559-20210412083559-00179.warc.gz
320,533,855
14,646
May 2012 Volume 27 Number 05 # Test Run - Dive into Neural Networks By James McCaffrey | May 2012 An artificial neural network (usually just called a neural network) is an abstraction loosely modeled on biological neurons and synapses. Although neural networks have been studied for decades, many neural network code implementations on the Internet are not, in my opinion, explained very well. In this month’s column, I’ll explain what artificial neural networks are and present C# code that implements a neural network. The best way to see where I’m headed is to take a look at Figure 1 and Figure 2. One way of thinking about neural networks is to consider them numerical input-output mechanisms. The neural network in Figure 1 has three inputs labeled x0, x1 and x2, with values 1.0, 2.0 and 3.0, respectively. The neural network has two outputs labeled y0 and y1, with values 0.72 and -0.88, respectively. The neural network in Figure 1 has one layer of so-called hidden neurons and can be described as a three-layer, fully connected, feedforward network with three inputs, two outputs and four hidden neurons. Unfortunately, neural network terminology varies quite a bit. In this article, I’ll generally—but not always—use the terminology described in the excellent neural network FAQ at bit.ly/wfikTI. Figure 1 Neural Network Structure Figure 2 Neural Network Demo Program Figure 2 shows the output produced by the demo program presented in this article. The neural network uses both a sigmoid activation function and a tanh activation function. These functions are suggested by the two equations with the Greek letters phi in Figure 1. The outputs produced by a neural network depend on the values of a set of numeric weights and biases. In this example, there are a total of 26 weights and biases with values 0.10, 0.20 ... -5.00. After the weight and bias values are loaded into the neural network, the demo program loads the three input values (1.0, 2.0, 3.0) and then performs a series of computations as suggested by the messages about the input-to-hidden sums and the hidden-to-output sums. The demo program concludes by displaying the two output values (0.72, -0.88). I’ll walk you through the program that produced the output shown in Figure 2. This column assumes you have intermediate programming skills but doesn’t assume you know anything about neural networks. The demo program is coded using the C# language but you should have no trouble refactoring the demo code to another language such as Visual Basic .NET or Python. The program presented in this article is essentially a tutorial and a platform for experimentation; it does not directly solve any practical problem, so I’ll explain how you can expand the code to solve meaningful problems. I think you’ll find the information quite interesting, and some of the programming techniques can be valuable additions to your coding skill set. ## Modeling a Neural Network Conceptually, artificial neural networks are modeled on the behavior of real biological neural networks. In Figure 1 the circles represent neurons where processing occurs and the arrows represent both information flow and numeric values called weights. In many situations, input values are copied directly into input neurons without any weighting and emitted directly without any processing, so the first real action occurs in the hidden layer neurons. Assume that input values 1.0, 2.0 and 3.0 are emitted from the input neurons. If you examine Figure 1, you can see an arrow representing a weight value between each of the three input neurons and each of the four hidden neurons. Suppose the three weight arrows shown pointing into the top hidden neuron are named w00, w10 and w20. In this notation the first index represents the index of the source input neuron and the second index represents the index of the destination hidden neuron. Neuron processing occurs in three steps. In the first step, a weighted sum is computed. Suppose w00 = 0.1, w10 = 0.5 and w20 = 0.9. The weighted sum for the top hidden neuron is (1.0)(0.1) + (2.0)(0.5) + (3.0)(0.9) = 3.8. The second processing step is to add a bias value. Suppose the bias value is -2.0; then the adjusted weighted sum becomes 3.8 + (-2.0) = 1.8. The third step is to apply an activation function to the adjusted weighted sum. Suppose the activation function is the sigmoid function defined by 1.0 / (1.0 + Exp(-x)), where Exp represents the exponential function. The output from the hidden neuron becomes 1.0 / (1.0 + Exp(-1.8)) = 0.86. This output then becomes part of the weighted sum input into each of the output layer neurons. In Figure 1, this three-step process is suggested by the equation with the Greek letter phi: weighted sums (xw) are computed, a bias (b) is added and an activation function (phi) is applied. After all hidden neuron values have been computed, output layer neuron values are computed in the same way. The activation function used to compute output neuron values can be the same function used when computing the hidden neuron values, or a different activation function can be used. The demo program shown running in Figure 2 uses the hyperbolic tangent function as the hidden-to-output activation function. After all output layer neuron values have been computed, in most situations these values are not weighted or processed but are simply emitted as the final output values of the neural network. ## Internal Structure The key to understanding the neural network implementation presented here is to closely examine Figure 3, which, at first glance, might appear extremely complicated. But bear with me—the figure is not nearly as complex as it might first appear. Figure 3 shows a total of eight arrays and two matrices. The first array is labeled this.inputs. This array holds the neural network input values, which are 1.0, 2.0 and 3.0 in this example. Next comes the set of weight values that are used to compute values in the so-called hidden layer. These weights are stored in a 3 x 4 matrix labeled i-h weights where the i-h stands for input-to-hidden. Notice in Figure 1 that the demo neural network has four hidden neurons. The i-h weights matrix has a number of rows equal to the number of inputs and a number of columns equal to the number of hidden neurons. Figure 3 Neural Network Internal Structure The array labeled i-h sums is a scratch array used for computation. Note that the length of the i-h sums array will always be the same as the number of hidden neurons (four, in this example). Next comes an array labeled i-h biases. Neural network biases are additional weights used to compute hidden and output layer neurons. The length of the i-h biases array will be the same as the length of the i-h sums array, which in turn is the same as the number of hidden neurons. The array labeled i-h outputs is an intermediate result and the values in this array are used as inputs to the next layer. The i-h sums array has length equal to the number of hidden neurons. Next comes a matrix labeled h-o weights where the h-o stands for hidden-to-output. Here the h-o weights matrix has size 4 x 2 because there are four hidden neurons and two outputs. The h-o sums array, the h-o biases array and the this.outputs array all have lengths equal to the number of outputs (two, in this example). The array labeled weights at the bottom of Figure 3 holds all the input-to-hidden and hidden-to-output weights and biases. In this example, the length of the weights array is (3 * 4) + 4 + (4 * 2) + 2 = 26. In general, if Ni is the number of input values, Nh is the number of hidden neurons and No is the number of outputs, then the length of the weights array will be Nw = (Ni * Nh) + Nh + (Nh * No) + No. ## Computing the Outputs After the eight arrays and two matrices described in the previous section have been created, a neural network can compute its output based on its inputs, weights and biases. The first step is to copy input values into the this.inputs array. The next step is to assign values to the weights array. For the purposes of a demonstration you can use any weight values you like. Next, values in the weights array are copied to the i-h weights matrix, the i-h biases array, the h-o weights matrix and the h-o biases array. Figure 3 should make this relationship clear. The values in the i-h sums array are computed in two steps. The first step is to compute the weighted sums by multiplying the values in the inputs array by the values in the appropriate column of the i-h weights matrix. For example, the weighted sum for hidden neuron [3] (where I’m using zero-based indexing) uses each input value and the values in column [3] of the i-h weights matrix: (1.0)(0.4) + (2.0)(0.8) + (3.0)(1.2) = 5.6. The second step when computing i-h sum values is to add each bias value to the current i-h sum value. For example, because i-h biases [3] has value -7.0, the value of i-h sums [3] becomes 5.6 + (-7.0) = -1.4. After all the values in the i-h sums array have been calculated, the input-to-hidden activation function is applied to those sums to produce the input-to-hidden output values. There are many possible activation functions. The simplest activation function is called the step function, which simply returns 1.0 for any input value greater than zero and returns 0.0 for any input value less than or equal to zero. Another common activation function, and the one used in this article, is the sigmoid function, which is defined as f(x) = 1.0 / (1.0 + Exp(-x)). The graph of the sigmoid function is shown in Figure 4. Figure 4 The Sigmoid Function Notice the sigmoid function returns a value in the range strictly greater than zero and strictly less than one. In this example, if the value for i-h sums [3] after the bias value has been added is -1.4, then the value of i-h outputs [3] becomes 1.0 / (1.0 + Exp(-(-1.4))) = 0.20. After all the input-to-hidden output neuron values have been computed, those values serve as the inputs for the hidden-to-output layer neuron computations. These computations work in the same way as the input-to-hidden computations: preliminary weighted sums are calculated, biases are added and then an activation function is applied. In this example I use the hyperbolic tangent function, abbreviated as tanh, for the hidden-to-output activation function. The tanh function is closely related to the sigmoid function. The graph of the tanh function has an S-shaped curve similar to the sigmoid function, but tanh returns a value in the range (-1,1) instead of in the range (0,1). ## Combining Weights and Biases All of the neural network implementations I’ve seen on the Internet don’t maintain separate weight and bias arrays, but instead combine weights and biases into the weights matrix. How is this possible? Recall that the computation of the value of input-to-hidden neuron [3] resembled (i0 * w03) + (i1 * w13) + (i2 * w23) + b3, where i0 is input value [0], w03 is the weight for input [0] and neuron [3], and b3 is the bias value for hidden neuron [3]. If you create an additional, fake input [4] that has a dummy value of 1.0, and an additional row of weights that hold the bias values, then the previously described computation becomes: (i0 * w03) + (i1 * w13) + (i2 * w23) + (i3 * w33), where i3 is the dummy 1.0 input value and w33 is the bias. The argument is that this approach simplifies the neural network model. I disagree. In my opinion, combining weights and biases makes a neural network model more difficult to understand and more error-prone to implement. However, apparently I’m the only author who seems to have this opinion, so you should make your own design decision. ## Implementation I implemented the neural network shown in Figures 1, 2 and 3 using Visual Studio 2010. I created a C# console application named NeuralNetworks. In the Solution Explorer window I right-clicked on file Program.cs and renamed it to NeuralNetworksProgram.cs, which also changed the template-generated class name to NeuralNetworksProgram. The overall program structure, with most WriteLine statements removed, is shown in Figure 5. Figure 5 Neural Network Program Structure ``````using System; namespace NeuralNetworks { class NeuralNetworksProgram { static void Main(string[] args) { try { Console.WriteLine("\nBegin Neural Network demo\n"); NeuralNetwork nn = new NeuralNetwork(3, 4, 2); double[] weights = new double[] { 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, -2.0, -6.0, -1.0, -7.0, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, -2.5, -5.0 }; nn.SetWeights(weights); double[] xValues = new double[] { 1.0, 2.0, 3.0 }; double[] yValues = nn.ComputeOutputs(xValues); Helpers.ShowVector(yValues); Console.WriteLine("End Neural Network demo\n"); } catch (Exception ex) { Console.WriteLine("Fatal: " + ex.Message); } } } class NeuralNetwork { // Class members here public NeuralNetwork(int numInput, int numHidden, int numOutput) { ... } public void SetWeights(double[] weights) { ... } public double[] ComputeOutputs(double[] xValues) { ... } private static double SigmoidFunction(double x) { ... } private static double HyperTanFunction(double x) { ... } } public class Helpers { public static double[][] MakeMatrix(int rows, int cols) { ... } public static void ShowVector(double[] vector) { ... } public static void ShowMatrix(double[][] matrix, int numRows) { ... } } } // ns `````` I deleted all the template-generated using statements except for the one referencing the System namespace. In the Main function, after displaying a begin message, I instantiate a NeuralNetwork object named nn with three inputs, four hidden neurons and two outputs. Next, I assign 26 arbitrary weights and biases to an array named weights. I load the weights into the neural network object using a method named SetWeights. I assign values 1.0, 2.0 and 3.0 to an array named xValues. I use method ComputeOutputs to load the input values into the neural network and determine the resulting outputs, which I fetch into an array named yValues. The demo concludes by displaying the output values. ## The NeuralNetwork Class The NeuralNetwork class definition starts: ``````class NeuralNetwork { private int numInput; private int numHidden; private int numOutput; ... `````` As explained in the previous sections, the structure of a neural network is determined by the number of input values, the number of hidden layer neurons and the number of output values. The class definition continues as: ``````private double[] inputs; private double[][] ihWeights; // input-to-hidden private double[] ihSums; private double[] ihBiases; private double[] ihOutputs; private double[][] hoWeights;  // hidden-to-output private double[] hoSums; private double[] hoBiases; private double[] outputs; ... `````` These seven arrays and two matrices correspond to the ones shown in Figure 3. I use an ih prefix for input-to-hidden data and an ho prefix for hidden-to-output data. Recall that the values in the ihOutputs array serve as the inputs for the output layer computations, so naming this array precisely is a bit troublesome. Figure 6 shows how the NeuralNetwork class constructor is defined. Figure 6 The NeuralNetwork Class Constructor ``````public NeuralNetwork(int numInput, int numHidden, int numOutput) { this.numInput = numInput; this.numHidden = numHidden; this.numOutput = numOutput; inputs = new double[numInput]; ihWeights = Helpers.MakeMatrix(numInput, numHidden); ihSums = new double[numHidden]; ihBiases = new double[numHidden]; ihOutputs = new double[numHidden]; hoWeights = Helpers.MakeMatrix(numHidden, numOutput); hoSums = new double[numOutput]; hoBiases = new double[numOutput]; outputs = new double[numOutput]; } `````` After copying the input parameter values numInput, numHidden and numOutput into their respective class fields, each of the nine member arrays and matrices are allocated with the sizes I explained earlier. I implement matrices as arrays of arrays rather than using the C# multidimensional array type so that you can more easily refactor my code to a language that doesn’t support multidimensional array types. Because each row of my matrices must be allocated, it’s convenient to use a helper method such as MakeMatrix. The SetWeights method accepts an array of weights and bias values and populates ihWeights, ihBiases, hoWeights and hoBiases. The method begins like this: ``````public void SetWeights(double[] weights) { int numWeights = (numInput * numHidden) + (numHidden * numOutput) + numHidden + numOutput; if (weights.Length != numWeights) throw new Exception("xxxxxx"); int k = 0; ... `````` As explained earlier, the total number of weights and biases, Nw, in a fully connected feedforward neural network is (Ni * Nh) + (Nh * No) + Nh + No. I do a simple check to see if the weights array parameter has the correct length. Here, “xxxxxx” is a stand-in for a descriptive error message. Next, I initialize an index variable k to the beginning of the weights array parameter. Method SetWeights concludes: ``````for (int i = 0; i < numInput; ++i) for (int j = 0; j < numHidden; ++j) ihWeights[i][j] = weights[k++]; for (int i = 0; i < numHidden; ++i) ihBiases[i] = weights[k++]; for (int i = 0; i < numHidden; ++i) for (int j = 0; j < numOutput; ++j) hoWeights[i][j] = weights[k++]; for (int i = 0; i < numOutput; ++i) hoBiases[i] = weights[k++] } `````` Each value in the weights array parameter is copied sequentially into ihWeights, ihBiases, hoWeights and hoBiases. Notice no values are copied into ihSums or hoSums because those two scratch arrays are used for computation. ## Computing the Outputs The heart of the NeuralNetwork class is method ComputeOutputs. The method is surprisingly short and simple and begins: ``````public double[] ComputeOutputs(double[] xValues) { if (xValues.Length != numInput) throw new Exception("xxxxxx"); for (int i = 0; i < numHidden; ++i) ihSums[i] = 0.0; for (int i = 0; i < numOutput; ++i) hoSums[i] = 0.0; ... `````` First I check to see if the length of the input x-values array is the correct size for the NeuralNetwork object. Then I zero out the ihSums and hoSums arrays. If ComputeOutputs is called only once, then this explicit initialization is not necessary, but if ComputeOutputs is called more than once—because ihSums and hoSums are accumulated values—the explicit initialization is absolutely necessary. An alternative design approach is to not declare and allocate ihSums and hoSums as class members, but instead make them local to the ComputeOutputs method. Method ComputeOutputs continues: ``````for (int i = 0; i < xValues.Length; ++i) this.inputs[i] = xValues[i]; for (int j = 0; j < numHidden; ++j) for (int i = 0; i < numInput; ++i) ihSums[j] += this.inputs[i] * ihWeights[i][j]; ... `````` The values in the xValues array parameter are copied to the class inputs array member. In some neural network scenarios, input parameter values are normalized, for example by performing a linear transform so that all inputs are scaled between -1.0 and +1.0, but here no normalization is performed. Next, a nested loop computes the weighted sums as shown in Figures 1 and 3. Notice that in order to index ihWeights in standard form where index i is the row index and index j is the column index, it’s necessary to have j in the outer loop. Method ComputeOutputs continues: ``````for (int i = 0; i < numHidden; ++i) ihSums[i] += ihBiases[i]; for (int i = 0; i < numHidden; ++i) ihOutputs[i] = SigmoidFunction(ihSums[i]); ... `````` Each weighted sum is modified by adding the appropriate bias value. At this point, to produce the output shown in Figure 2, I used method Helpers.ShowVector to display the current values in the ihSums array. Next, I apply the sigmoid function to each of the values in ihSums and assign the results to array ihOutputs. I’ll present the code for method SigmoidFunction shortly. Method ComputeOutputs continues: ``````for (int j = 0; j < numOutput; ++j) for (int i = 0; i < numHidden; ++i) hoSums[j] += ihOutputs[i] * hoWeights[i][j]; for (int i = 0; i < numOutput; ++i) hoSums[i] += hoBiases[i]; ... `````` I use the just-computed values in ihOutputs and the weights in hoWeights to compute values into hoSums, then I add the appropriate hidden-to-output bias values. Again, to produce the output shown in Figure 2, I called Helpers.ShowVector. Method ComputeOutputs finishes: ``````for (int i = 0; i < numOutput; ++i) this.outputs[i] = HyperTanFunction(hoSums[i]); double[] result = new double[numOutput]; this.outputs.CopyTo(result, 0); return result; } `````` I apply method HyperTanFunction to the hoSums to generate the final outputs into class array private member outputs. I copy those outputs to a local result array and use that array as a return value. An alternative design choice would be to implement ComputeOutputs without a return value, but implement a public method GetOutputs so that the outputs of the neural network object could be retrieved. ## The Activation Functions and Helper Methods Here’s the code for the sigmoid function used to compute the input-to-hidden outputs: ``````private static double SigmoidFunction(double x) { if (x < -45.0) return 0.0; else if (x > 45.0) return 1.0; else return 1.0 / (1.0 + Math.Exp(-x)); } `````` Because some implementations of the Math.Exp function can produce arithmetic overflow, checking the value of the input parameter is usually performed. The code for the tanh function used to compute the hidden-to-output results is: ``````private static double HyperTanFunction(double x) { if (x < -10.0) return -1.0; else if (x > 10.0) return 1.0; else return Math.Tanh(x); } `````` The hyperbolic tangent function returns values between -1 and +1, so arithmetic overflow is not a problem. Here the input value is checked merely to improve performance. The static utility methods in class Helpers are just coding conveniences. The MakeMatrix method used to allocate matrices in the NeuralNetwork constructor allocates each row of a matrix implemented as an array of arrays: ``````public static double[][] MakeMatrix(int rows, int cols) { double[][] result = new double[rows][]; for (int i = 0; i < rows; ++i) result[i] = new double[cols]; return result; } `````` Methods ShowVector and ShowMatrix display the values in an array or matrix to the console. You can see the code for these two methods in the code download that accompanies this article (available at msdn.microsoft.com/magazine/msdnmag0512). ## Next Steps The code presented here should give you a solid basis for understanding and experimenting with neural networks. You might want to examine the effects of using different activation functions and varying the number of inputs, outputs and hidden layer neurons. You can modify the neural network by making it partially connected, where some neurons are not logically connected to neurons in the next layer. The neural network presented in this article has one hidden layer. It’s possible to create more complex neural networks that have two or even more hidden layers, and you might want to extend the code presented here to implement such a neural network. Neural networks can be used to solve a variety of practical problems, including classification problems. In order to solve such problems there are several challenges. For example, you must know how to encode non-numeric data and how to train a neural network to find the best set of weights and biases. I will present an example of using neural networks for classification in a future article. Dr. James McCaffrey works for Volt Information Sciences Inc., where he manages technical training for software engineers working at Microsoft’s Redmond, Wash., campus. He has worked on several Microsoft products including Internet Explorer and MSN Search. He’s the author of “.NET Test Automation Recipes” (Apress, 2006), and can be reached at jammc@microsoft.com. Thanks to the following Microsoft technical experts for reviewing this article: Dan Liebling and Anne Loomis Thompson
5,698
24,092
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2021-17
latest
en
0.907438
http://www.elmerfem.org/forum/viewtopic.php?f=15&t=3673&amp
1,568,844,407,000,000,000
text/html
crawl-data/CC-MAIN-2019-39/segments/1568514573368.43/warc/CC-MAIN-20190918213931-20190918235931-00226.warc.gz
243,916,877
7,401
## CFD continuation, bifurcation detection Discussion about coding and new developments YannGuevel Posts: 30 Joined: 26 May 2014, 12:37 Antispam: Yes ### CFD continuation, bifurcation detection Hello, we developped an Elmer solver in a dynamic library for continuation of steady flow solution and bifurcation detection. It's based on the perturbation method called Asymptotic Numerical Method. Those tools are allready improved and published. We decided to try Elmer FEM to get 3D and mutli-physics flows informations. I'll try to put a presentation of what we did in this Elmer library, and propose a "clean" version of the code to perform continuation and bifurcation detection. I'd like to know what is the best way to get advices on the way we coded this technique, and further questions we have. Should i put the code in here, or in a more specific mail? For example : + We had to create a new Dirichlet Boundary condition routine, in order to modify values at nodal point in a simple way. => it might exist a smarter way to code what we did, respecting an "Elmer way of coding". + Once a bifurcation is detected via specific indicators, branch switching requires an "augmented linear system" to be solved, because the tangent operator is singular at the bifurcation: => Do you know how to add temporarily 1 or 2 equation to the "solver % Matrix" object, and perform a linear solve of this augmented system? => Would it be easier to create a new temporary solver with the correct given number of equation to be solved. And how to do that? raback Posts: 3450 Joined: 22 Aug 2009, 11:57 Antispam: Yes Location: Espoo, Finland Contact: ### Re: CFD continuation, bifurcation detection Hi Yann Your application sounds interesting. I think you could put some basic information to the forum, and if needed correspond with the developers on specific details. There is an "augmented" linear system that is used for example to enforce continuity. You take the Solver % Matrix and merge it with Solver % Matrix % ConstraintMatrix and thereafter can use all standard solvers. I think there is a SolveWithLinearConstraints subroutine that does this. I may be missing some details but this machinery is something that has been evolving in recent times. So for you the trick would be to create the ConsraintMatrix. -Peter YannGuevel Posts: 30 Joined: 26 May 2014, 12:37 Antispam: Yes ### Re: CFD continuation, bifurcation detection Hi peter, thanks for your interest and this advice. I'll look for this trick of ConsraintMatrix with attention. I'll put a presentation of the method as soon as possible. YannGuevel Posts: 30 Joined: 26 May 2014, 12:37 Antispam: Yes ### Re: CFD continuation, bifurcation detection Hello, my PhD thesis is now available on: https://tel.archives-ouvertes.fr/tel-01305764 The Pdf is available here : https://tel.archives-ouvertes.fr/tel-01 ... 3/document We are now able to detect accurately steady bifurcation in the case of 2D and 3D incompressible flows using ELMER. The text is in French, except Chapter 5 which is the technical part : how i have implemented the bifurcation analysis method in Elmer. I'll make the code available on github as soon as possible. The main related article in English for the method are here: Journal of Computational Physics 2011 Guevel, Cadou "Automatic detection and branch switching methods for steady bifurcation in fluid mechanics" JCP2013 Cochelin, Médale "Power series analysis as a major breakthrough to improve the efficiency of Asymptotic Numerical Method in the vicinity of bifurcations" Journal of Non-Newtonian Fluid Mechanics 2013, Jawadi, Cadou "Asymptotic numerical method for steady flow of power-law fluids" JCP2015 Médale, Cochelin "High performance computations of steady-state bifurcations in 3D incompressible fluid flows by Asymptotic Numerical Method" This forum and the team helped me a lot during the coding/installing part : thank you!
956
3,945
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.765625
3
CC-MAIN-2019-39
latest
en
0.904899
https://www.codingdrills.com/tutorial/introduction-to-graph-algorithms/dfs-variations
1,726,795,777,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00827.warc.gz
649,255,713
76,285
# Variations of Depth-First Search Depth-First Search (DFS) is a fundamental algorithmic technique used in graph traversal. It is widely used to explore and analyze graphs, and it forms the basis for many other graph algorithms. In this tutorial, we will delve into the variations of DFS and understand how it can be modified to solve specific problems. ## 1. Depth-First Search (DFS) Recap Before we dive into the variations, let's quickly recap what DFS is all about. DFS is a graph traversal algorithm that starts at a given vertex and explores as far as possible along each branch before backtracking. It explores vertices in a depth-first manner, which means it moves deeper into the graph before visiting siblings or neighbors. The basic structure of a DFS algorithm involves maintaining a visited set to keep track of visited vertices and recursively traversing the graph, visiting an unvisited neighboring vertex at each step until there are no more unvisited vertices. To apply DFS to this graph, we start at a chosen vertex (let's say vertex 1) and explore its neighbors. We mark vertex 1 as visited and then recursively visit its unvisited neighbors. We repeat this process until we've visited all reachable vertices. The order in which the vertices are visited depends on the specific implementation of DFS. Now that we've refreshed our understanding of DFS, let's move on to its variations. ## 2. Depth-First Search (DFS) Variations ### 2.1. Modified DFS to Find a Path One variation of DFS involves modifying the algorithm to find a path between two given vertices. This can be useful in various applications, such as finding a path in a maze or identifying a route in a transportation network. To modify DFS for path finding, we introduce a target vertex and modify the stopping condition of the algorithm. Instead of continuing until there are no more unvisited vertices, we stop as soon as we reach the target vertex. We can use a boolean variable to track whether the target vertex has been found or not. Here's an example of a modified DFS algorithm to find a path between vertex 1 and vertex 7: ``````def dfs_path(graph, start, end, visited=None, path=None): if visited is None: visited = set() if path is None: path = [] path.append(start) if start == end: return path for neighbor in graph[start]: if neighbor not in visited: result = dfs_path(graph, neighbor, end, visited, path) if result is not None: return result path.pop() return None `````` In this modified version of DFS, we keep track of the visited vertices and the path so far. If the current vertex is the target vertex, we return the path. Otherwise, we continue exploring unvisited neighbors recursively. If no path is found, we backtrack by removing the current vertex from the path. ### 2.2. DFS with Backtracking Another variation of DFS involves introducing backtracking to the algorithm. Backtracking is a technique that involves undoing certain decisions to explore other paths when a dead end is reached. In DFS with backtracking, we keep track of multiple paths simultaneously using a stack. Whenever a dead end is encountered, we backtrack by removing the last vertex from the stack and continuing the exploration from the previous vertex. Here's an example of DFS with backtracking in Python: ``````def dfs_backtracking(graph, start): stack = [(start, [start])] while stack: vertex, path = stack.pop() if len(path) == len(graph): print("Solution found:", path) return for neighbor in graph[vertex]: if neighbor not in path: stack.append((neighbor, path + [neighbor])) `````` In this variation, we use a stack to maintain a collection of paths. At each step, we pop a vertex along with its corresponding path. If the path length is equal to the total number of vertices, we have found a solution. Otherwise, we iterate over the neighbors of the current vertex and add them to the stack if they have not been visited yet. ## Conclusion Depth-First Search (DFS) is a versatile algorithm that can be adapted to solve various problems. In this tutorial, we explored two variations of DFS - one for path finding and another with backtracking. By modifying the DFS algorithm, we can tailor it to suit specific requirements and effectively solve different graph-related problems. Remember to experiment with different implementations and adapt them based on your specific use cases. Happy coding!
918
4,407
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.84375
4
CC-MAIN-2024-38
latest
en
0.927273
https://reference.wolframcloud.com/language/ref/Skewness.html
1,680,058,837,000,000,000
text/html
crawl-data/CC-MAIN-2023-14/segments/1679296948932.75/warc/CC-MAIN-20230329023546-20230329053546-00549.warc.gz
555,128,636
15,798
# Skewness Skewness[list] gives the coefficient of skewness for the elements in list. Skewness[dist] gives the coefficient of skewness for the distribution dist. # Details • Skewness measures the asymmetry in list or of dist. • A positive skewness indicates a distribution with a long right tail. A negative skewness indicates a distribution with a long left tail. • Skewness handles both numerical and symbolic data. • Skewness[{{x1,y1,},{x2,y2,},}] gives {Skewness[{x1,x2,}],Skewness[{y1,y2,}],}. • Skewness[] is equivalent to CentralMoment[,3]/CentralMoment[,2]3/2. # Examples open allclose all ## Basic Examples(2) Skewness for a list of values: Skewness for a parametric distribution: ## Scope(14) ### Data(10) Exact input yields exact output: Approximate input yields approximate output: Skewness for a matrix gives column-wise skewness: Works with large arrays: SparseArray data can be used just like dense arrays: Find the skewness of WeightedData: Find the skewness of EventData: Find the skewness of TemporalData: Find the skewness of TimeSeries: The skewness depends only on the values: Find the skewness of data involving quantities: ### Distributions and Processes(4) Find the skewness for univariate distributions: Multivariate distributions: Skewness for derived distributions: Data distribution: Skewness for distributions with quantities: Skewness function for a random process: ## Applications(8) Zero skewness indicates that the distribution is symmetric: Distributions with longer tails to the right have positive skewness: Distributions with longer tails to the left have negative skewness: The limiting distribution for BinomialDistribution as is normal: The limiting value of skewness is 0: By the central limit theorem, skewness of normalized sums of random variables will converge to 0: Define a Pearson distribution with zero mean and unit variance, parameterized by skewness and kurtosis: Obtain parameter inequalities for Pearson types 1, 4, and 6: The region plot for Pearson types depending on the values of skewness and kurtosis: Generate a random sample from a ParetoDistribution: Determine the type of PearsonDistribution with moments matching the sample moments: This time series contains the number of steps taken daily by a person during a period of five months: Average number of steps: Analyze the skewness as an indication of a tail in the daily step distribution: The histogram of the frequency of daily counts confirms that the distribution has a longer left tail: Find the skewness for the heights of children in a class: Skewness close to 0 indicates distribution symmetric around the mean: ## Properties & Relations(2) Skewness for data can be computed from CentralMoment: Skewness for a distribution can be computed from CentralMoment: ## Neat Examples(1) The distribution of Skewness estimates for 50, 100, and 300 samples: Wolfram Research (2007), Skewness, Wolfram Language function, https://reference.wolfram.com/language/ref/Skewness.html. #### Text Wolfram Research (2007), Skewness, Wolfram Language function, https://reference.wolfram.com/language/ref/Skewness.html. #### CMS Wolfram Language. 2007. "Skewness." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/Skewness.html. #### APA Wolfram Language. (2007). Skewness. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/Skewness.html #### BibTeX @misc{reference.wolfram_2022_skewness, author="Wolfram Research", title="{Skewness}", year="2007", howpublished="\url{https://reference.wolfram.com/language/ref/Skewness.html}", note=[Accessed: 28-March-2023 ]} #### BibLaTeX @online{reference.wolfram_2022_skewness, organization={Wolfram Research}, title={Skewness}, year={2007}, url={https://reference.wolfram.com/language/ref/Skewness.html}, note=[Accessed: 28-March-2023 ]}
941
3,957
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.3125
3
CC-MAIN-2023-14
latest
en
0.689968
https://physics.stackexchange.com/questions/tagged/non-equilibrium?tab=newest&page=2
1,566,362,574,000,000,000
text/html
crawl-data/CC-MAIN-2019-35/segments/1566027315809.69/warc/CC-MAIN-20190821043107-20190821065107-00280.warc.gz
601,027,684
40,390
Questions tagged [non-equilibrium] The tag has no usage guidance. 183 questions Filter by Sorted by Tagged with 34 views 50 views Two types of path intergrals in statistical physics - difference? In non-equilibrium statistical physics as far as I can tell there are two types of path integrals to find conditional probabilities: Path integrals over the noise in the Langevin equation, $\vec u(t)$... 188 views 303 views Active Matter Systems Active matter is composed of large numbers of active "agents", each of which consumes energy in order to move or to exert mechanical forces. Due to the energy consumption, these systems are ... 882 views How is temperature defined in non-equilibrium? I see that temperature is defined always in equilibrium. But systems which are not in equilibrium with their environment. How is temperature defined in these cases? Humans for example, have a body ... 33 views Many-particle systems where current and applied force are in opposite direction Are there instances known (both observed or theoretic) of large-$N$ systems where applying an external force causes a current of the "corresponding charge" (present in the system) in the direction ... 140 views Entropy production in non-equilibrium systems: physical interpretation? I have been learning about entropy production in non-equilibrium systems as developed by Prigogine and others, especially in the context of chemical reactions. I now understand that from the first law ... 540 views Keldysh formalism and Kubo formula I am working on out-of-equilibrium problems of strongly correlated materials, so I am interested in the Keldysh formalism. I just started reading about the subject, and I don't understand quite well ... 821 views 131 views Does the Lindblad equation satisfy a fluctuation dissipation relation? The fluctuation dissipation relation is usually stated in terms of an identity that relates the retarded, advanced and either the Keldysh or time-ordered correlators. This is easily enforced in ... 752 views Discrete Langevin Equation We have the Langevin equation, that describes the motion of a particle in a viscous medium, given by $$\label{Langevin} \frac{dv}{dt} = -\gamma v + \zeta(t)$$ With the ... 478 views Size of a Brownian particle Usually in Brownian dynamics, we consider the Brownian particle size to be much-much larger than the size of the particles of the fluid on which the Brownian particle is immersed in. In this scenario ... 41 views Hydrogen bonding and dichotomous Markov process The hydrogen bonding function between water and monomer (from a micelle/bilayer) can be defined as $h(t) = 1$ if the hydrogen bond exists between the two and $h(t) = 0$, if there is no hydrogen ... 79 views How are boundary consitions implemented correctly in time dependent hydrodynamics? I posted this question more than one year ago and got an answer recently. This answer looks good to me, but indicates that something is wrong in my original approach to the problem. Can someone tell ... 72 views How is zonal flow defined and computed? The transition to turbulence in pipe flow was recently observed to be in the same universality class as directed percolation. This was done by reinterpreting the turbulence and laminar flow in terms ... 936 views Modern textbook on statistical field theory with an emphasis on applications to non-equilibrium phenomena? What is a good textbook on statistical field theory, with an emphasis on applications to non-equilibrium phenomena? I am a final-year undergraduate, have already taken introductory classes in ... Say that we have a system evolving over discrete timesteps. The quantity we are interested is X and is given by a distribution $P_X$. This distribution is evolving temporally, and we have a ...
803
3,798
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2019-35
latest
en
0.927178
http://gmatclub.com/forum/m26-147551.html
1,485,147,794,000,000,000
text/html
crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00324-ip-10-171-10-70.ec2.internal.warc.gz
123,682,981
47,688
M26-20 : Retired Discussions [Locked] Check GMAT Club Decision Tracker for the Latest School Decision Releases http://gmatclub.com/AppTrack It is currently 22 Jan 2017, 21:03 GMAT Club Daily Prep Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History Events & Promotions Events & Promotions in June Open Detailed Calendar M26-20 new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Author Message Director Status: Gonna rock this time!!! Joined: 22 Jul 2012 Posts: 547 Location: India GMAT 1: 640 Q43 V34 GMAT 2: 630 Q47 V29 WE: Information Technology (Computer Software) Followers: 3 Kudos [?]: 61 [0], given: 562 Show Tags 19 Feb 2013, 22:52 Hi Sorry , I don't know how to format it correctly here, so pasting the image. My doubt is .. If we take 5 raised to 4 as common in the denominator, we will be left with [5 raised to 4 ( 5 raised to 3 - 1 ) ] raised to -2. Now this can be deduced to 5 raised to 2 * [5 raised to 3 - 1 ] raised to -2 .. So 25 remains in the denominator . However , if we do not take 5 raised to 4 as common in the denominator, the entire denominator can be taken above in the numerator by changing the sign of the exponent to positive 2. In this case no 25 remains in the denominator. I don't understand where I am going wrong . Kindly help. Attachments qs.JPG [ 18.05 KiB | Viewed 1543 times ] _________________ hope is a good thing, maybe the best of things. And no good thing ever dies. Who says you need a 700 ?Check this out : http://gmatclub.com/forum/who-says-you-need-a-149706.html#p1201595 My GMAT Journey : http://gmatclub.com/forum/end-of-my-gmat-journey-149328.html#p1197992 Manager Status: Helping People Ace the GMAT Joined: 16 Jan 2013 Posts: 184 Location: United States Concentration: Finance, Entrepreneurship GMAT 1: 770 Q50 V46 GPA: 3.1 WE: Consulting (Consulting) Followers: 10 Kudos [?]: 49 [0], given: 4 Show Tags 21 Feb 2013, 10:06 There is no denominator to worry about, move the denominator to the numerator and change the sign to positive. (3^5 - 3^2)^2 * (5^7 - 5^4)^2 Then factor (3^2(3^3 - 1))^2 * (5^4(5^3 - 1))^2 Simplify (9*26)^2 * (625*124)^2 E is the answer because there are only 2 factors of 13 in the above expression and E has 4. _________________ Want to Ace the GMAT with 1 button? Start Here: GMAT Answers is an adaptive learning platform that will help you understand exactly what you need to do to get the score that you want. Math Expert Joined: 02 Sep 2009 Posts: 36601 Followers: 7097 Kudos [?]: 93484 [0], given: 10563 Show Tags 22 Feb 2013, 00:54 OE is below: If $$y=\frac{(3^5-3^2)^2}{(5^7-5^4)^{-2}}$$, then y is NOT divisible by which of the following? A. 6^4 B. 62^2 C. 65^2 D. 15^4 E. 52^4 $$y=\frac{(3^5-3^2)^2}{(5^7-5^4)^{-2}}=(3^5-3^2)^2*(5^7-5^4)^2=3^4*(3^3-1)^2*5^8*(5^3-1)^2=3^4*26^2*5^8*124^2=2^6*3^4*5^8*13^2*31^2$$. Now, if you analyze each option you'll see that only $$52^4=2^8*13^4$$ is not a factor of $$y$$, since the power of 13 in it is higher than the power of 13 in $$y$$. _________________ Intern Joined: 20 Feb 2013 Posts: 20 Followers: 1 Kudos [?]: 9 [0], given: 2 Show Tags 22 Feb 2013, 01:16 If we observe the expression, it can be deduced to: y= 〖3^2 (26)(5^4 (124))〗^2 Now let us eliminate options: A: 6^4 can be eliminated as we have four 3s and four 2s B: 62^2 can be eliminated as we have 124^2 C: 65^2 can be eliminated as we have 26^2 and 5^4 D: 15^4 can easily be eliminated E: 52^4 cannot be eliminated as we do not have enough factors of 2 Hence answer is E _________________ Pushpinder Gill Intern Joined: 12 Dec 2012 Posts: 3 Followers: 0 Kudos [?]: 0 [0], given: 3 Show Tags 30 Mar 2013, 22:08 Bunuel wrote: OE is below: If $$y=\frac{(3^5-3^2)^2}{(5^7-5^4)^{-2}}$$, then y is NOT divisible by which of the following? A. 6^4 B. 62^2 C. 65^2 D. 15^4 E. 52^4 $$y=\frac{(3^5-3^2)^2}{(5^7-5^4)^{-2}}=(3^5-3^2)^2*(5^7-5^4)^2=3^4*(3^3-1)^2*5^8*(5^3-1)^2=3^4*26^2*5^8*124^2=2^6*3^4*5^8*13^2*31^2$$. Now, if you analyze each option you'll see that only $$52^4=2^8*13^4$$ is not a factor of $$y$$, since the power of 13 in it is higher than the power of 13 in $$y$$. Hi Brunel, How do you get 3^4(3^3-1)^2 from (3^5-3^2)^2? Math Expert Joined: 02 Sep 2009 Posts: 36601 Followers: 7097 Kudos [?]: 93484 [0], given: 10563 Show Tags 31 Mar 2013, 07:25 mp2469 wrote: Bunuel wrote: OE is below: If $$y=\frac{(3^5-3^2)^2}{(5^7-5^4)^{-2}}$$, then y is NOT divisible by which of the following? A. 6^4 B. 62^2 C. 65^2 D. 15^4 E. 52^4 $$y=\frac{(3^5-3^2)^2}{(5^7-5^4)^{-2}}=(3^5-3^2)^2*(5^7-5^4)^2=3^4*(3^3-1)^2*5^8*(5^3-1)^2=3^4*26^2*5^8*124^2=2^6*3^4*5^8*13^2*31^2$$. Now, if you analyze each option you'll see that only $$52^4=2^8*13^4$$ is not a factor of $$y$$, since the power of 13 in it is higher than the power of 13 in $$y$$. Hi Brunel, How do you get 3^4(3^3-1)^2 from (3^5-3^2)^2? Factor out 3^2 from (3^5-3^2)^2: (3^2(3^3-1))^2=3^4(3^3-1)^2. Hope it's clear. _________________ Re: M26-20   [#permalink] 31 Mar 2013, 07:25 Display posts from previous: Sort by M26-20 new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Moderator: Bunuel Powered by phpBB © phpBB Group and phpBB SEO Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
2,136
5,788
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.5625
4
CC-MAIN-2017-04
latest
en
0.880057
http://javaexplorer03.blogspot.com/2016/03/selection-sort-java-code.html
1,511,020,643,000,000,000
text/html
crawl-data/CC-MAIN-2017-47/segments/1510934804976.22/warc/CC-MAIN-20171118151819-20171118171819-00144.warc.gz
152,824,617
18,200
## Thursday, 3 March 2016 ### Selection Sort: Java Code Selection sort is noted for its simplicity, and it has advantages when auxiliary memory is limited. The algorithm divides the input list into two parts: 1. The sublist of items already sorted, which is built up from left to right at the front (left) of the list, and 2. The sublist of items remaining to be sorted that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted sublist is the entire input list. The algorithm proceeds by finding the smallest (or largest, depending on sorting order) element in the unsorted sublist, exchanging (swapping) it with the leftmost unsorted element (putting it in sorted order), and moving the sublist boundaries one element to the right. Example: 54 35 22 32 21 // this is the initial, starting state of the array 21 35 22 32 54 // sorted sublist = {21} 21 22 35 32 54 // sorted sublist = {21, 22} 21 22 32 35 54 // sorted sublist = {21, 22, 32} 21 22 32 35 54 // sorted sublist = {21, 22, 32, 35} 21 22 32 35 54 // sorted sublist = {21, 22, 32, 35, 54} Complexity: Best Case           :    O(n^2) Average Case  :    O(n^2) Worst Case      :    O(n^2) import java.util.Scanner; public class SelectionSort { public static int[] sort(int[] arr) { // outer loop to maintain the index of sorted array. for (int i = 0; i < arr.length - 1; i++) { int index = i; // inner loop is searching in unsorted list. for (int j = i + 1; j < arr.length; j++) { // check for smallest number’s index. if (arr[j] < arr[index]) { index = j; } } // swap with left most unsorted index. int smallerNumber = arr[index]; arr[index] = arr[i]; arr[i] = smallerNumber; } return arr; } public static void main(String...args) { Scanner scan = new Scanner(System.in); System.out.println("Enter the no. of elements:"); int size = scan.nextInt(); int[] array = new int[size]; for(int i = 0;i<size;i++) { array[i] = scan.nextInt(); } int[] sortedArray = sort(array); for(int e : sortedArray){ System.out.print(e+" "); } } } Output: Enter the no. of elements: 5 54 35 22 32 21 21 22 32 35 54 References:
597
2,116
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.890625
3
CC-MAIN-2017-47
longest
en
0.630969
https://courses.p2pu.org/es/groups/music-theory-2/content/intervals-tonal-degree-names-part-3-minor-intervals/
1,621,122,032,000,000,000
text/html
crawl-data/CC-MAIN-2021-21/segments/1620243991488.53/warc/CC-MAIN-20210515223209-20210516013209-00002.warc.gz
189,129,241
10,421
This course will become read-only in the near future. Tell us at community.p2pu.org if that is a problem. # Intervals & Tonal Degree Names - Part 3: Minor Intervals Next we will be talking about minor intervals, like I said before a minor interval is when you lower a major interval by a half step. We will be talking about minor 2nds, 3rds, 6ths, and 7ths. First the minor second. So since we know what a major 2nd is we can figure out quite easily what a minor 2nd is. For example a major 2nd using C would be D. So a minor 2nd from C would be Db. Hopefully that makes sense. Here are all Minor 2nds: ·      A-Bb ·      A#-B ·      Ab-Bbb ·      B-C ·      B#-C# ·      Bb-Cb ·      C-Db ·      C#-D ·      Cb-Dbb ·      D-Eb ·      D#-E ·      Db-Ebb ·      E-F ·      E#-F# ·      Eb-Fb ·      F-Gb ·      F#-G ·      Fb-Gbb ·      G-Ab ·      G#-A ·      Gb-Abb These are all a half step up from the first note. The next minor interval is the minor 3rd. The minor third is seen a lot because of the minor chord, also because it is the 3rd scale degree in the minor scale which we will be going over later. You can also get a minor third from lowering the major 3rd interval by a half step. Here are all Minor 3rds: ·      A-C ·      A#-C# ·      Ab-Cb ·      B-D ·      B#-D# ·      Bb-Db ·      C-Eb ·      C#-E ·      Cb-Ebb ·      D-F ·      D#-F# ·      Db-Fb ·      E-G ·      E#-G# ·      Eb-Gb ·      F-Ab ·      F#-A ·      Fb-Abb ·      G-Bb ·      G#-B ·      Gb-Bb There are all the 1st - 3rd scale degree in a minor scale. They are also the 2nd to 4th scale degree in all major scales. Lastly they are a whole step and a half step away. The next is the minor 6th. Similarly to the minor 2nd and 3rd the minor 6th can be made by lowering the major 6th interval by a half step. Here are all minor 6ths: ·      A-F ·      A#-F# ·      Ab-Fb ·      B-G ·      B#-G# ·      Bb-Gb ·      C-Ab ·      C#-A ·      Cb-Abb ·      D-Bb ·      D#-B ·      Db-Bbb ·      E-C ·      E#-C# ·      Eb-Cb ·      F-Db ·      F#-D ·      Fb-Dbb ·      G-Eb ·      G#-E ·      Gb-Ebb The minor 6th is 4 whole steps away. It is also a major 3rd if you flip it. Example: Instead of C-Ab, Ab-C. The last is the minor 7th. Similar to all the rest it is as if you took a major 7th and lowered it by a half step. Here are all minor 7ths: ·      A-G ·      A#-G# ·      Ab-Gb ·      B-A ·      B#-A# ·      Bb-Ab ·      C-Bb ·      C#-B ·      Cb-Bbb ·      D-C ·      D#-C# ·      Db-Cb ·      E-D ·      E#-D# ·      Eb-Db ·      F-Eb ·      F#-E ·      Fb-Ebb ·      G-F ·      G#-F# ·      Gb-Fb The minor 7th is 5 whole steps away. It is also a whole step down if you flip it. It is also the to 5th to 4th scale degree in a major scale. Descending obviously. Example: F major scale descending: F, E, D, C, Bb, A, G, F. Remember that descending it 8, 7,6,5,4,3,2,1. This sums up the minor 7ths. To recap once again to keep your mind fresh with these intervals! A major interval is mainly seen in major scales, a perfect interval is when you can flip them and they usually stay the same or mirrored. Major and minor intervals share the same number intervals and just change by a half step in tone. Perfect intervals are unisons, 4ths, 5ths and octaves. An augmented interval is a perfect or major interval that has been raised by a half step. A diminished interval is a minor or perfect interval that has been lowered by a half step. Here are 4 pictures showing the minor 2nd, minor 3rd, minor 6th, and minor 7th. The 1st picture showing all minor 2nds on treble and bass clef played one after another in each measure using half notes. The 2nd picture showing all minor 3rds on treble and bass clef played one after another in each measure using half notes. The 3rd picture showing all minor 6ths on treble and bass clef played one after another in each measure using half notes. The 4th picture showing all minor 7ths on treble and bass clef played one after another in each measure using half notes. HOMEWORK: Give me 2 examples of each type of minor interval! Either type it out or use finale to write them out!
1,345
4,212
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.1875
4
CC-MAIN-2021-21
latest
en
0.866427
http://stackoverflow.com/questions/3129676/whats-a-good-way-to-generate-random-clusters-and-paths/3131010
1,441,033,288,000,000,000
text/html
crawl-data/CC-MAIN-2015-35/segments/1440644066017.21/warc/CC-MAIN-20150827025426-00351-ip-10-171-96-226.ec2.internal.warc.gz
221,420,519
24,046
# What's a good way to generate random clusters and paths? I'm toying around with writing a random map generator, and am not quite sure how to randomly generate realistic landscapes. I'm working with these sorts of local-scale maps, which presents some interesting problems. One of the simplest cases is the forest: `````` Sparse Medium Dense Typical trees 50% 70% 80% Massive trees — 10% 20% Light undergrowth 50% 70% 50% Heavy undergrowth — 20% 50% `````` Trees and undergrowth can exist in the same space, so an average sparse forest has 25% typical trees and light undergrowth, 25% typical trees, 25% light undergrowth, and 25% open space. Medium and dense forests will take a bit more thinking, but it's not where my problem lies either, as it's all evenly dispersed. My problem lies in generating clusters and paths, while keeping the percentage constraints. Marshes are a good example of this: `````` Moor Swamp Shallow bog 20% 40% Deep bog 5% 20% Light undergrowth 30% 20% Heavy undergrowth 10% 20% `````` Deep bog squares are usually clustered together and surrounded by an irregular ring of shallow bog squares. An additional map element, a hedgerow, may also be present, as well as a path of open ground, snaking through the bog. Both of these types of map elements (clusters and paths) present problems, as the total composition of the map should contain X% of the element, but it's not evenly distributed. Other elements, such as streams, ponds, and quicksand need either a cluster or path-type generation as well. What technique can I use to generate realistic maps given these constraints? I'm using C#, FYI (but this isn't a C#-specific question.) - Are these grid based maps? Its not clear to me from your description or the link. –  academicRobot Jun 28 '10 at 4:06 Yes, grid based, probably no smaller than 20x20 (though more likely larger, around 50x50ish?) –  dlras2 Jun 28 '10 at 4:58 Realistic "random" distribution is often done using Perlin Noise, which can be used to give a distribution with "clumps" like you mention. It works by summing/combining multiple layers of linearly interpolated values from random data points. Each layer (or "octave") has twice as many data points as the last, and confined to a narrower range of values. The result is "realistic" looking random texture. Here is a beautiful demonstration of the theory behind Perlin Noise by Hugo Elias. Here is the first thing I found on Perlin Noise in C#. What you can do is generate a Perlin Noise image and set a "threshold", where anything above a value is "on" and everything below it is "off". What you will end up with is clumps where things are above the threshold, which look irregular and awesome. Simply assign the ones above the threshold to where you want your terrain feature to be. Here is a demonstration if a program generating a Perlin Noise bitmap and then adjusting the cut-off threshold over time. A clear "clumping" is visible. It could be just what you wanted. Notice that, with a high threshold, very few points are above it, and it's sparse. But as the threshold lowers, those points "grow" into clumps (by the nature of perlin noise), and some of these clumps will join eachother, and basically create something very natural and terrain-like. Note that you could also set the "clump factor", or the tendency of features to clump, by setting the "turbulence" of your Perlin Noise function, which basically causes peaks and valleys of your PN function to be accentuated and closer together. Now, where to set the threshold? The higher the threshold, the lower the percentage of the feature on the final map. The lower the threshold, the higher the percentage. You can mess around with them. You could probably get exact percentages by fiddling around with a little math (it seems that the distribution of values follows a Normal Distribution; I could be wrong). Tweak it until it's just right :) EDIT As pointed out in the comments, you can find the exact percentage by creating a cumulative histogram (index of what % of the map is under a threshold) and pick the threshold that gives you the percent you need. The coolest thing here is that you can create features that clump around certain other features (like your marsh features) trivially here -- just use the same Perlin Noise map twice -- the second time, lowering the threshold. The first one will be clumpy, and the second one will be clumpy around the same areas, but with the clumps enlarged (refer to the flash animation posted earlier). As for other features like hedgerows, you could try modeling simple random walk lines that have a higher tendency to go straight than turn, and place them anywhere randomly on your perlin-based map. # samples Here is a sample 50x50 tile Sparse Forest Map. The undergrowth is colored brown and the trees are colored blue (sorry) to make it clear which is which. For this map I didn't make exact thresholds to match 50%; I only set the threshold at 50% of the maximum. Statistically, this will average out to exactly 50% every time. But it might not be exact enough for your purposes; see the earlier note for how to do this. Here is a demo of your Marsh features (not including undergrowth, for clarity), with shallow marsh in grey and deep marsh in back: This is just 50x50, so there are some artifacts from that, but you can see how easily you can make the shallow marsh "grow" from the deep marsh -- simply by adjusting the threshold on the same Perlin map. For this one, I eyeballed the threshold level to give the most eye-pleasing results, but for your own purposes, you could do what was mentioned before. Here is a marsh map generated from the same Perlin Noise map, but on stretched out over 250x250 tiled map instead: - The threshold can be exactly calculated for a given perlin noise bitmap. Make a histogram and from the histogram you can calculate the cut off point (range to keep) that will result in any percentage (regardless of the distribution); you can even do this in several ways. –  Unreason Jun 28 '10 at 8:59 Unreason - thanks; it looks like I completely misread the question to think that the asker wanted percentage of total features instead of percentage of map covered. I'll update my answer =) –  Justin L. Jun 28 '10 at 9:32 I love the samples! If I understand correctly - you generated two Perlin Noise maps for the forest (one for the brush, and one for the trees,) and overlayed them? Then for the marshes, you generated one map, and used slightly different thresholds? Did you use the code from gutgames? If so, do you what know the purpose of the octaves is? None of the other Perlin Noise code I've found has loops like that in it, and I can't figure it out. –  dlras2 Jun 28 '10 at 20:49 Also, an average percentage is perfectly fine - I'll probably use a threshold of 50% as well (or something similar.) –  dlras2 Jun 28 '10 at 20:50 Sorry - I missed the new link you had added. Thanks! –  dlras2 Jun 28 '10 at 21:29 I've never done this sort of thing, but here are some thoughts. You can obtain clusters by biasing random selection to locations on the grid that are close to existing elements of that type. Assign a default value of 1 to all squares. For squares with existing clustered elements, add clustering value to to adjacent squares (the higher the clustering value, the stronger the clustering will be). Then do random selection for the next element of that type on the probability distribution function of all the squares. For paths, you could have a similar procedure, except that paths would be extended step-wise (probability of path is finite at squares next to the end of the path and zero everywhere else). Directional paths could be done by increasing the probability of selection in the direction of the path. Meandering paths could have a direction that changes over the course of random extension (new_direction = mf * old_direction + (1-mf) * rand_direction, where mf is a momentum factor between 0 and 1). - To expand on academicRobot's comments, you could start with a default marsh or forest seed in some of the grid cells and let them grow from the source using a correlated random number. For instance a bog might have eight adjacent grid cells each of which has a 90% probability of also being a bog, but a 10% probability of being something else. You can let the ecosytem form from the seed and adjust the correlation until you get something that looks right. Probably pretty easy to implement even in a spreadsheet. - You could start reading links here. I remember looking at much better document. Will post it if I find it (it was also based on L-systems). But that's on the general side; on the particular problem you face I guess you should model it in terms of • percentages • other rules (clusters and paths) The point is that even though you don't know how to construct the map with given properties, if you are able to evaluate the properties (clustering ratio; path niceness) and score on them you can then brute force or do some other problem space transversal. If you still want to do generative approach then you will have to examine generative rules a bit closer; here's an idea that I would pursue • create patterns of different terrains and terrain covers that have required properties of 'clusterness', 'pathness' or uniformity • create the patterns in such a way that the values for deep bog are not discreet, but assign probability value; after the pattern had been created you can normalize this probability in such a way that it will produce required percentage of cover • mix different patterns together - You might have some success for certain types of area with a Voronoi pattern. I've never seen it used to create maps but I have seen it used in a number of similar fields. -
2,244
9,951
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.765625
3
CC-MAIN-2015-35
longest
en
0.927226
https://www.bunniestudios.com/blog/?p=6140
1,701,225,518,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100047.66/warc/CC-MAIN-20231129010302-20231129040302-00831.warc.gz
778,393,950
21,973
## Building a Curve25519 Hardware Accelerator Note: this post uses a $\LaTeX$ plugin to render math symbols. It may require a desktop browser for the full experience… The “double ratchet” algorithm is integral to modern end-to-end-encrypted chat apps, such as Signal, WhatsApp, and Matrix. It gives encrypted conversations the properties of resilience, forward secrecy, and break-in recovery; basically, even if an adversary can manipulate or observe portions of an exchange, including certain secret materials, the damage is limited with each turn of the double ratchet. The double-ratchet algorithm is a soup of cryptographic components, but one of the most computationally expensive portions is the “Diffie-Hellman (DH) key exchange”, using Elliptic Curve Diffie-Hellman (ECDH) with Curve25519. How expensive? This post from 2020 claims a speed record of 3.2 million cycles on a Cortex-M0 for just one of the core mathematical operations: fairly hefty. A benchmark of the x25519-dalek Rust crate on a 100 MHz RV32-IMAC implementation clocks in at 100ms per DH key exchange, of which several are involved in a double-ratchet. Thus, any chat client implementation on a small embedded CPU would suffer from significant UI lag. There are a few strategies to rectify this, ranging from adding a second CPU core to off-load the crypto, to making a full-custom hardware accelerator. Adding a second RISC-V CPU core is expedient, but it wouldn’t do much to force me to understand what I was doing as far as the crypto goes; and there’s already a strong contingent of folks working on multi-core RISC-V on FPGA implementations. The last time I implemented a crypto algorithm was for RSA on a low-end STM32 back in the mid 2000’s. I really enjoyed getting into the guts of the algorithm and stretching my understanding of the underlying mathematical primitives. So, I decided to indulge my urge to tinker, and make a custom hardware accelerator for Curve25519 using Litex/Migen and Rust bindings. ### What the Heck Is $\mathbf{{F}}_{{{{2^{{255}}}}-19}}$? I wanted the accelerator primitive to plug directly into the Rust Dalek Cryptography crates, so that I would be as light-fingered as possible with respect to changes that could lead to fatal flaws in the broader cryptosystem. I built some simple benchmarks and profiled where the most time was being spent doing a double-ratchet using the Dalek Crypto crates, and the vast majority of the time for a DH key exchange was burned in the Montgomery Multiply operation. I won’t pretend to understand all the fancy math-terms, but people smarter than me call it a “scalar multiply”, and it actually consists of thousands of “regular” multiplies on 255-bit numbers, with modular reductions in the prime field $\mathbf{{F}}_{{{{2^{{255}}}}-19}}$, among other things. Wait, what? So many articles and journals I read on the topic just talk about prime fields, modular reduction and blah blah blah like it’s asking someone to buy milk and eggs at the grocery store. It was a real struggle to try and make sense of all the tricks used to do a modular multiply, much less to say even elliptic curve point transformations. Well, that’s the point of trying to implement it myself — by building an engine, maybe you won’t understand all the hydrodynamic nuances around fuel injection, but you’ll at least gain an appreciation for what a fuel injector is. Likewise, I can’t say I deeply understand the number theory, but from what I can tell multiplication in $\mathbf{{F}}_{{{{2^{{255}}}}-19}}$ is basically the same as a “regular 255-bit multiply” except you do a modulo (you know, the % operator) against the number $2^{{255}}-19$ when you’re done. There’s a dense paper by Daniel J. Bernstein, the inventor of Curve25519, which I tried to read several times. One fact I gleaned is part of the brilliance of Curve25519 is that arithmetic modulo $2^{{255}}-19$ is quite close to regular binary arithmetic except you just need to special-case 19 outcomes out of $2^{{255}}$. Also, I learned that the prime modulus, $2^{{255}}-19$, is where the “25519” comes from in the name of the algorithm (I know, I know, I’m slow!). After a whole lot more research I stumbled on a paper by Furkan Turan and Ingrid Verbauwhede that broke down the algorithm into terms I could better understand. Attempts to reach out to them to get a copy of the source code was, of course, fruitless. It’s a weird thing about academics — they like to write papers and “share ideas”, but it’s very hard to get source code from them. I guess that’s yet another reason why I never made it in academia — I hate writing papers, but I like sharing source code. Anyways, the key insight from their paper is that you can break a 255-bit multiply down into operations on smaller, 17-bit units called “limbs” — get it? digits are your fingers, limbs (like your arms) hold multiple digits. These 17-bit limbs map neatly into 15 Xilinx 7-Series DSP48E block (which has a fast 27×18-bit multiplier): 17 * 15 = 255. Using this trick we can decompose multiplication in $\mathbf{{F}}_{{{{2^{{255}}}}-19}}$ into the following steps: 1. Schoolbook multiplication with 17-bit “limbs” 2. Collapse partial sums 3. Propagate carries 4. Is the sum $\geq$ $2^{{255}}-19$? 5. If yes, add 19; else add 0 6. Propagate carries again, in case the addition by 19 causes overflows The multiplier would run about 30% faster if step (6) were skipped. This step happens in a fairly small minority of cases, maybe a fraction of 1%, and the worst-case carry propagate through every 17-bit limb is diminishingly rare. The test for whether or not to propagate carries is fairly straightforward. However, short-circuiting the carry propagate step based upon the properties of the data creates a timing side-channel. Therefore, we prefer a slower but safer implementation, even if we are spending a bunch of cycles propagating zeros most of the time. Buckle up, because I’m about to go through each step in the algorithm in a lot of detail. One motivation of the exercise was to try to understand the math a bit better, after all! #### TL;DR: Magic Numbers If you’re skimming this article, you’ll see the numbers “19” and “17” come up over and over again. Here’s a quick TL;DR: • 19 is the tiny amount subtracted from $2^{{255}}$ to arrive at a prime number which is the largest number allowed in Curve25519. Overflowing this number causes things to wrap back to 0. Powers of 2 map efficiently onto binary hardware, and thus this representation is quite nice for hardware people like me. • 17 is a number of bits that conveniently divides 255, and fits into a single DSP hardware primitive (DSP48E) that is available on the target device (Xilinx 7-series FPGA) that we’re using in this post. Thus the name of the game is to split up this 255-bit number into 17-bit chunks (called “limbs”). #### Schoolbook Multiplication The first step in the algorithm is called “schoolbook multiplication”. It’s like the kind of multiplication you learned in elementary or primary school, where you write out the digits in two lines, multiply each digit in the top line by successive digits in the bottom line to create successively shifted partial sums that are then added to create the final result. Of course, there is a twist. Below is what actual schoolbook multiplication would be like, if you had a pair of numbers that were split into three “limbs”, denoted as A[2:0] and B[2:0]. Recall that a limb is not necessarily a single bit; in our case a limb is 17 bits. | A2 A1 A0 x | B2 B1 B0 ------------------------------------------ | A2*B0 A1*B0 A0*B0 A2*B1 | A1*B1 A0*B1 A2*B2 A1*B2 | A0*B2 (overflow) (not overflowing) The result of schoolbook multiplication is a result that potentially has 2x the number of limbs than the either multiplicand. Mapping the overflow back into the prime field (e.g. wrapping the overflow around) is a process called reduction. It turns out that for a prime field like $\mathbf{{F}}_{{{{2^{{255}}}}-19}}$, reduction works out to taking the limbs that extend beyond the base number of limbs in the field, shifting them right by the number of limbs, multiplying it by 19, and adding it back in; and if the result isn’t a member of the field, add 19 one last time, and take the result as just the bottom 255 bits (ignore any carry overflow). If this seems magical to you, you’re not alone. I had to draw it out before I could understand it. This trick works because the form of the field is $2^{{n}}-p$: it is a power of 2, reduced by some small amount $p$. By starting from a power of 2, most of the binary numbers representable in an n-bit word are valid members of the field. The only ones that are not valid field members are the numbers that are equal to $2^{{n}}-p$ but less than $2^{{n}}-1$ (the biggest number that fits in n bits). To turn these invalid binary numbers into members of the field, you just need to add $p$, and the reduction is complete. A diagram illustrating modular reduction The diagram above draws out the number lines for both a simple binary number line, and for some field $\mathbf{{F}}_{{{{2^{{n}}}}-p}}$. Both lines start at 0 on the left, and increment until they roll over. The point at which $\mathbf{{F}}_{{{{2^{{n}}}}-p}}$ rolls over is a distance $p$ from the end of the binary number line: thus, we can observe that $2^{{n}}-1$ reduces to $p-1$. Adding 1 results in $2^{{n}}$, which reduces to $p$: that is, the top bit, wrapped around, and multiplied by $p$. As we continue toward the right, the numbers continue to go up and wrap around, and for each wrap the distance between the “plain old binary” wrap point and the $\mathbf{{F}}_{{{{2^{{n}}}}-p}}$ wrap point increases by a factor of $p$, such that $2^{{n+1}}$ reduces to $2*p$. Thus modular reduction of natural binary numbers that are larger than our field $2^{{n}}-p$ consists of taking the bits that overflow an $n$-bit representation, shifting them to the right by $n$, and multiplying by $p$. In order to convince myself this is true, I tried out a more computationally tractable example than $\mathbf{{F}}_{{{{2^{{255}}}}-19}}$: the prime field $\mathbf{{F}}_{{{{2^{{6}}}}-5}} = 59$. The members of the field are from 0-58, and reduction is done by taking any number modulo 59. Thus, the number 59 reduces to 0; 60 reduces to 1; 61 reduces to 2, and so forth, until we get to 64, which reduces to 5 — the value of the overflowed bits (1) times $p$. Let’s look at some more examples. First, recall that the biggest member of the field, 58, in binary is 0b00_11_1010. Let’s consider a simple case where we are presented a partial sum that overflows the field by one bit, say, the number 0b01_11_0000, which is decimal 112. In this case, we take the overflowed bit, shift it to the right, multiply by 5: 0b01_11_0000 ^ move this bit to the right multiply by 0b101 (5) 0b00_11_0000 + 0b101 = 0b00_11_0101 = 53 And we can confirm using a calculator that 112 % 59 = 53. Now let’s overflow by yet another bit, say, the number 0b11_11_0000. Let’s try the math again: 0b11_11_0000 ^ move to the right and multiply by 0b101: 0b101 * 0b11 = 0b1111 0b00_11_0000 + 0b1111 = 0b00_11_1111 This result is still not a member of the field, as the maximum value is 0b0011_1010. In this case, we need to add the number 5 once again to resolve this “special-case” overflow where we have a binary number that fits in $n$ bits but is in that sliver between $2^{{n}}-p$ and $2^{{n}}-1$: 0b00_11_1111 + 0b101 = 0b01_00_0100 At this step, we can discard the MSB overflow, and the result is 0b0100 = 4; and we can check with a calculator that 240 % 59 = 4. Therefore, when doing schoolbook multiplication, the partial products that start to overflow to the left can be brought back around to the right hand side, after multiplying by $p$, in this case, the number 19. This magical property is one of the reasons why $\mathbf{{F}}_{{{{2^{{255}}}}-19}}$ is quite amenable to math on binary machines. Let’s use this finding to rewrite the straight schoolbook multiplication form from above, but now with the modular reduction applied to the partial sums, so it all wraps around into this compact form: | A2 A1 A0 x | B2 B1 B0 ------------------------------------------ | A2*B0 A1*B0 A0*B0 | A1*B1 A0*B1 19*A2*B1 + | A0*B2 19*A2*B2 19*A1*B2 ---------------------------- S2 S1 S0 As discussed above, each overflowed limb is wrapped around and multiplied by 19, creating a number of partial sums S[2:0] that now has as many terms as there are limbs, but with each partial sum still potentially overflowing the native width of the limb. Thus, the inputs to a limb are 17 bits wide, but we retain precision up to 48 bits during the partial sum stage, and then do a subsequent condensation of partial sums to reduce things back down to 17 bits again. The condensation is done in the next three steps, “collapse partial sums”, “propagate carries”, and finally “normalize”. However, before moving on to those sections, there is an additional trick we need to apply for an efficient implementation of this multiplication step in hardware. In order to minimize the amount of data movement, we observe that for each row, the “B” values are shared between all the multipliers, and the “A” values are constant along the diagonals. Thus we can avoid re-loading the “A” values every cycle by shifting the partial sums diagonally through the computation, allowing the “A” values to be loaded as “A” and “A*19” into holding register once before the computations starts, and selecting between the two options based on the step number during the computation. Mapping schoolbook multiply onto the hardware array to minimize data movement The diagram above illustrates how the schoolbook multiply is mapped onto the hardware array. The top diagram is an exact redrawing of the previous text box, where the partial sums that would extend to the left have been multiplied by 19 and wrapped around. Each colored block corresponds to a given DSP48E1 block, which you may recall is a fast 27×18 multiplier hardware primitive built into our Xilinx FPGAs. The red arrow illustrates the path of a partial sum in both the schoolbook form and the unwrapped form for hardware implementation. In the bottom diagram, one can clearly see that the Ax coefficients are constant for each column, and that for each row, the Bx values are identical across all blocks in each step. Thus each column corresponds to a single DSP48E1 block. We take advantage of the ability of the DSP48E1 block to hold two selectable A values to pre-load Ax and Ax*19 before the computation starts, and we bus together the Bx values and change them in sequence with each round. The partial sums are then routed to the “down and right” to complete the mapping. The final result is one cycle shifted from the canonical mapping. We have a one-cycle structural pipeline delay going from this step to the next one, so we use this pipeline delay to do a shift with no add by setting the opmode from C+M to C+0 (in other words, instead of adding to the current multiplication output for the last step, we squash that input and set it to 0). The fact that we pipeline the data also gives us an opportunity to pick up the upper limb of the partial sum collapse “for free” by copying it into the “D” register of the DSP48E1 during the shift step. In C, the equivalent code basically looks like this: // initialize the a_bar set of data for( int i = 0; i < DSP17_ARRAY_LEN; i++ ) { a_bar_dsp[i] = a_dsp[i] * 19; } operand p; for( int i = 0; i < DSP17_ARRAY_LEN; i++ ) { p[i] = 0; } // core multiply for( int col = 0; col < 15; col++ ) { for( int row = 0; row < 15; row++ ) { if( row >= col ) { p[row] += a_dsp[row-col] * b_dsp[col]; } else { p[row] += a_bar_dsp[15+row-col] * b_dsp[col]; } } } By leveraging the special features of the DSP48E1 blocks, in hardware this loop completes in just 15 clock cycles. #### Collapse Partial Sums At this point, the potential width of the partial sum is up to 43 bits wide. This next step divides the partial sums up into 17-bit words, and then shifts the higher to the next limbs over, allowing them to collapse into a smaller sum that overflows less. ... P2[16:0] P1[16:0] P0[16:0] ... P1[33:17] P0[33:17] P14[33:17]*19 ... P0[50:34] P14[50:34]*19 P13[50:34]*19 Again, the magic number 19 shows up to allow sums which “wrapped around” to add back in. Note that in the timing diagram you will find below, we refer to the mid- and upper- words of the shifted partial sums as “Q” and “R” respectively, because the timing diagram lacks the width within a data bubble to write out the full notation: so Q0,1 is P14[33:17] and R0,2 is P13[50:34] for P0[16:0]. Here’s the C code equivalent for this operation: // the lowest limb has to handle two upper limbs wrapping around (Q/R) prop[0] = (p[0] & 0x1ffff) + (((p[14] * 1) >> 17) & 0x1ffff) * 19 + (((p[13] * 1) >> 34) & 0x1ffff) * 19; // the second lowest limb has to handle just one limb wrapping around (Q) prop[1] = (p[1] & 0x1ffff) + ((p[0] >> 17) & 0x1ffff) + (((p[14] * 1) >> 34) & 0x1ffff) * 19; // the rest are just shift-and-add without the modular wrap-around for(int bitslice = 2; bitslice < 15; bitslice += 1) { prop[bitslice] = (p[bitslice] & 0x1ffff) + ((p[bitslice - 1] >> 17) & 0x1ffff) + ((p[bitslice - 2] >> 34)); } This completes in 2 cycles after a one-cycle pipeline stall delay penalty to retrieve the partial sum result from the previous step. #### Propagate Carries The partial sums will generate carries, which need to be propagated down the chain. The C-code equivalent of this looks as follows: for(int i = 0; i < 15; i++) { if ( i+1 < 15 ) { prop[i+1] = (prop[i] >> 17) + prop[i+1]; prop[i] = prop[i] & 0x1ffff; } } The carry-propagate completes in 14 cycles. Carry-propagates are expensive! #### Normalize We’re almost there! Except that $0 \leq result \leq 2^{{256}}-1$, which is slightly larger than the range of $\mathbf{{F}}_{{{{2^{{255}}}}-19}}$. Thus we need to check if number is somewhere in between 0x7ff….ffed and 0x7ff….ffff, or if the 256th bit will be set. In these cases, we need to add 19 to the result, so that the result is a member of the field $2^{{255}}-19$ (the 256th bit is dropped automatically when concatenating the fifteen 17-bit limbs together into the final 255-bit result). We use another special feature of the DSP48E1 block to help accelerate the test for this case, so that it can complete in a single cycle without slowing down the machine. We use the “pattern detect” (PD) feature of the DSP48E1 to check for all “1’s” in bit positions 255-5, and a single LUT to compare the final 5 bits to check for numbers between {prime_string} and $2^{{255}}-1$. We then OR this result with the 256th bit. With the help of the special DSP48E1 features, this operation completes in just a single cycle. After adding the number 19, we have to once again propagate carries. Even if we add the number 0, we also have to “propagate carries” for constant-time operation, to avoid leaking information in the form of a timing side-channel. This is done by running the carry propagate operation described above a second time. Once the second carry propagate is finished, we have the final result. #### Potential corner case There is a potential corner case where if the carry-propagated result going into “normalize” is between 0xFFFF_FFFF_FFFF_FFFF_FFFF_FFFF_FFFF_FFDA and 0xFFFF_FFFF_FFFF_FFFF_FFFF_FFFF_FFFF_FFEC In this case, the top bit would be wrapped around, multiplied by 19, and added to the LSB, but the result would not be a member of $2^{{255}}-19$ (it would be one of the 19 numbers just short of $2^{{255}}-1$), and the multiplier would pass it on as if it were a valid result. In some cases, this isn’t even a problem, because if the subsequent result goes through any operation that also includes a reduce operation, the result will still reduce correctly. However, I do not think this corner case is possible, because the overflow path to set the high bit is from the top limb going from 0x1_FFFF -> 0x2_0000 (that is, 0x7FFFC -> 0x80000 when written MSB-aligned) due to a carry coming in from the lower limb, and it would require the carry to be very large, not just +1 as shown in the simple rollover case, but a value from 0x1_FFED-0x1_FFDB. I don’t have a formal mathematical proof of this, but I strongly suspect that carry values going into the top limb cannot approach these large numbers, and therefore it is not possible to hit this corner case. Consider that the biggest value of a partial sum is 0x53_FFAC_0015 (0x1_FFFF * 0x1_FFFF * 15). This means the biggest value of the third overflowed 17-bit limb is 0x14. Therefore the biggest value resulting from the “collapse partial sums” stage is 0x1_FFFF + 0x1_FFFF + 0x14 = 0x4_0012. Thus the largest carry term that has to propagate is 0x4_0012 >> 17 = 2. 2 is much smaller than the amount required to trigger this condition, that is, a value in the range of 0x1_FFED-0x1_FFDB. Thus, perhaps this condition simply can’t happen? It’d be great to have a real mathematician comment if this is a real corner case… #### Real Hardware You can jump to the actual code that implements the above algorithm, but I prefer to think about implementations visually. Thus, I created this timing diagram that fully encapsulates all of the above steps, and the data movements between each part (click on the image for an editable, larger version; works best on desktop): Block diagrams of the multiplier and even more detailed descriptions of its function can be found in our datasheet documentation. There’s actually a lot to talk about there, but the discussion rapidly veers into trade-offs on timing closure and coding technique, and farther away from the core topic of the Curve25519 algorithm itself. ### Didn’t You Say We Needed Thousands of These…? So, that was the modular multiply. We’re done right? Nope! This is just one core op in a sequence of thousands of these to do a scalar multiply. One potentially valid strategy could be to try to hang the modular multiplier off of a Wishbone bus peripheral and shove numbers at it, and come back and retrieve results some time later. However, the cost of pushing 256-bit numbers around is pretty high, and any gains from accelerating the multiply will quickly be lost in the overhead of marshaling data. After all, a recurring theme in modern computer architecture is that data movement is more expensive than the computation itself. Damn you, speed of light! Thus, in order to achieve the performance I was hoping to get, I decided to wrap this inside a microcoded “CPU” of sorts. Really more of an “engine” than a car — if a RISC-V CPU is your every-day four-door sedan optimized for versatility and efficiency, the microcoded Curve25519 engine I created is more of a drag racer: a turbocharged engine block on wheels that’s designed to drive long flat stretches of road as fast as possible. While you could use this to drive your kids to school, you’ll have a hard time turning corners, and you’ll need to restart the engine after every red light. Above is a block diagram of the engine’s microcoded architecture. It’s a simple “three-stage” pipeline (FETCH/EXEC/RETIRE) that runs at 50MHz with no bypassing (that would be extremely expensive with 256-bit wide datapaths). I was originally hoping we could close timing at 100MHz, but our power-optimized -1L FPGA just wouldn’t have it; so the code sequencer runs at 50MHz; the core multiplier at 100MHz; and the register file uses four phases at 200MHz to access a simple RAM block to create a space-efficient virtual register file that runs at 50MHz. The engine has just 13 opcodes: There’s no compiler for it; instead, we adapted the most complicated Rust macro I’ve ever seen from johnas-schievink’s rustasm6502 crate to create the abomination that is engine25519-as. Here’s a snippet of what the assembly code looks like, in-lined as a Rust macro: let mcode = assemble_engine25519!( start: // from FieldElement.invert() // let (t19, t3) = self.pow22501(); // t19: 249..0 ; t3: 3,1,0 // let t0 = self.square(); // 1 e_0 = 2^1 mul %0, %30, %30 // self is W, e.g. %30 // let t1 = t0.square().square(); // 3 e_1 = 2^3 mul %1, %0, %0 mul %1, %1, %1 // let t2 = self * &t1; // 3,0 e_2 = 2^3 + 2^0 mul %2, %30, %1 // let t3 = &t0 * &t2; // 3,1,0 mul %3, %0, %2 // let t4 = t3.square(); // 4,2,1 mul %4, %3, %3 // let t5 = &t2 * &t4; // 4,3,2,1,0 mul %5, %2, %4 // let t6 = t5.pow2k(5); // 9,8,7,6,5 psa %28, #5 // coincidentally, constant #5 is the number 5 mul %6, %5, %5 pow2k_5: sub %28, %28, #1 // %28 = %28 - 1 brz pow2k_5_exit, %28 mul %6, %6, %6 brz pow2k_5, #0 pow2k_5_exit: // let t7 = &t6 * &t5; // 9,8,7,6,5,4,3,2,1,0 mul %7, %6, %5 ); The mcode variable is a [i32] fixed-length array, which is quite friendly to our no_std Rust environment that is Xous. Fortunately, the coders of the curve25519-dalek crate did an amazing job, and the comments that surround their Rust code map directly onto our macro language, register numbers and all. So translating the entire scalar multiply inside the Montgomery structure was a fairly straightforward process, including the final affine transform. ### How Well Does It Run? The fully accelerated Montgomery multiply operation was integrated into a fork of the curve25519-dalek crate, and wrapped into some benchmarking primitives inside Xous, a small embedded operating system written by Xobs. A software-only implementation of curve25519 would take about 100ms per DH operation on a 100MHz RV32-IMAC CPU, while our hardware-accelerated version completes in about 6.7ms — about a 15x speedup. Significantly, the software-only operation does not incur the context-switch to the sandboxed hardware driver, whereas our benchmark includes the overhead of the syscall to set up and run the code; thus the actual engine itself runs a bit faster per-op than the benchmark might hint at. However, what I’m most interested in is in-application performance, and therefore I always include the overhead of swapping to the hardware driver context to give an apples-to-apples comparison of end-user application performance. More importantly, the CPU is free to do other things while the engine does it’s thing, such as servicing the network stack or updating the UX. I think the curve25519 accelerator engine hit its goals — it strapped enough of a rocket on our little turtle of a CPU so that it’ll be able render a chat UX while doing double-ratchets as a background task. I also definitely learned more about the algorithm, although admittedly I still have a lot more to learn if I’m to say I truly understand elliptic curve cryptography. So far I’ve just shaken hands with the fuzzy monsters hiding inside the curve25519 closet; they seem like decent chaps — they’re not so scary, I just understand them poorly. A couple more interactions like this and we might even become friends. However, if I were to be honest, it probably wouldn’t be worth it to port the curve25519 accelerator engine from its current FPGA format to an ASIC form. Mask-defined silicon would run at least 5x faster, and if we needed the compute power, we’d probably find more overall system-level benefit from a second CPU core than a domain-specific accelerator (and hopefully by then the multi-core stuff in Litex will have sufficiently stabilized that it’d be a relatively low-risk proposition to throw a second CPU into a chip tape-out). That being said, I learned a lot, and I hope that by sharing my experience, someone else will find Curve25519 a little more approachable, too! ### 7 Responses to “Building a Curve25519 Hardware Accelerator” 1. Marco Merlin says: Good article! An open source implementation of such an algorithm (not targeting RISCV) is available here. https://trac.cryptech.is/browser/core/pkey/ed25519?order=name I have used the ecdsa256 and ecdsa384 cores of the cryptech project for a multimode IP implementing ECC point multiplication, and they are a good reference, even though NOT oriented towards a low gate count. However, it would have taken me forever to master the maths to get to a real IP if I started from scratch, so at least they accelerated my work! I am not sure I get why you needed the rust macro for the assembly… did you use actual 6502 microcontroller opcodes for the microcode ? When adding ECC 521p curve, I ended up writing a python script to generate microcode in the same format, but I would have liked to use a more clever approach. See the microcode here. https://trac.cryptech.is/browser/core/pkey/ecdsa256/rtl/ecdsa256_microcode_rom.v • bunnie says: No, they aren’t actual 6502 opcodes, we just used the format of the macro as a template to generate our machine code! It just so happened someone made a macro assembler for 6502 that was easily adapted to our purpose. 2. TellowKrinkle says: Since propagating carries takes so much time, what if you put that off until after all the multiplies were done? Instead of 15x 17-bit numbers, use 16x 16-bit numbers with the free (17th) bit of each number left to prevent carries from propagating more than once. All the multiplication would still be 17-bit (since the carry bits are no longer normalized), collapsing partial sums would have the shifts and masks modified for 16 bits, and the new sloppy carry propagation would look like this: for (int i = 0; i > 15) * 19 } else if (i == 15) { // 255 doesn’t divide evenly into 16 so this one is only 15 bits propout[15] = (propin[15] & 0x7fff) + (propin[14] >> 16) } else { propout[i] = (propin[i] & 0xffff) + (propin[i-1] >> 16) } } Then do all your multiplication with these sloppy non-normalized numbers and then do one proper carry propagation at the end. Would require more multiplies and larger (272-bit) registers, but would remove most of the carry propagation • TellowKrinkle says: Not sure what happened, the form submission seems to have eaten half the code. Here it is in base64: Zm9yIChpbnQgaSA9IDA7IGkgPCAxNjsgaSsrKSB7CiAgaWYgKGkgPT0gMCkgewogICAgcHJvcG91dFswXSA9IChwcm9waW5bMF0gJiAweGZmZmYpICsgKHByb3BpblsxNV0gPj4gMTUpICogMTkKICB9IGVsc2UgaWYgKGkgPT0gMTUpIHsKICAgIC8vIDI1NSBkb2Vzbid0IGRpdmlkZSBldmVubHkgaW50byAxNiBzbyB0aGlzIG9uZSBpcyBvbmx5IDE1IGJpdHMKICAgIHByb3BvdXRbMTVdID0gKHByb3BpblsxNV0gJiAweDdmZmYpICsgKHByb3BpblsxNF0gPj4gMTYpCiAgfSBlbHNlIHsKICAgIHByb3BvdXRbaV0gPSAocHJvcGluW2ldICYgMHhmZmZmKSArIChwcm9waW5baS0xXSA+PiAxNikKICB9Cn0= • Soatok says: Oh, the code’s there. This blog just allows arbitrary HTML. 3. Alex M says: I haven’t finished reading your post, but I think you would enjoy Martin Kleppman’s paper describing in clear details of a fast, constant time ed25519 implementation, which includes a trick to make dealing with carries less painful – you just use bigger limbs and do more computation before you propagate the carries. Not sure how well that would fit into the FPGA constraints, but definitely worth reading the very accessible paper! https://martin.kleppmann.com/2020/11/18/distributed-systems-and-elliptic-curves.html (see the second half of the post) 4. denmike says: Thanks for the write-up, which was an interesting read to me. And you say, that you don’t like to write papers :-)
8,336
31,632
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.515625
3
CC-MAIN-2023-50
latest
en
0.911802
https://www.meritnation.com/cbse-class-11-humanities/math/rd-sharma-xi-2018/transformation-formulae/textbook-solutions/69_1_3396_24538_8.6_137918
1,571,011,123,000,000,000
text/html
crawl-data/CC-MAIN-2019-43/segments/1570986648343.8/warc/CC-MAIN-20191013221144-20191014004144-00201.warc.gz
1,060,519,325
38,213
Rd Sharma Xi 2018 Solutions for Class 11 Humanities Math Chapter 8 Transformation Formulae are provided here with simple step-by-step explanations. These solutions for Transformation Formulae are extremely popular among Class 11 Humanities students for Math Transformation Formulae Solutions come handy for quickly completing your homework and preparing for exams. All questions and answers from the Rd Sharma Xi 2018 Book of Class 11 Humanities Math Chapter 8 are provided here for you for free. You will also love the ad-free experience on Meritnation’s Rd Sharma Xi 2018 Solutions. All Rd Sharma Xi 2018 Solutions for class Class 11 Humanities Math are prepared by experts and are 100% accurate. #### Question 1: Express each of the following as the product of sines and cosines: (i) sin 12x + sin 4x (ii) sin 5x − sin x (iii) cos 12x + cos 8x (iv) cos 12x − cos 4x (v) sin 2x + cos 4x (i) (ii) (iii) (iv) (v) #### Question 2: Prove that: (i) sin 38° + sin 22° = sin 82° (ii) cos 100° + cos 20° = cos 40° (iii) sin 50° + sin 10° = cos 20° (iv) sin 23° + sin 37° = cos 7° (v) sin 105° + cos 105° = cos 45° (vi) sin 40° + sin 20° = cos 10° (i) (ii) (iii) (iv) (v) (vi) #### Question 3: Prove that: (i) cos 55° + cos 65° + cos 175° = 0 (ii) sin 50° − sin 70° + sin 10° = 0 (iii) cos 80° + cos 40° − cos 20° = 0 (iv) cos 20° + cos 100° + cos 140° = 0 (v) (vi) $\mathrm{cos}\frac{\mathrm{\pi }}{12}-\mathrm{sin}\frac{\mathrm{\pi }}{12}=\frac{1}{\sqrt{2}}$ (vii) sin 80° − cos 70° = cos 50° (viii) sin 51° + cos 81° = cos 21° (i) (ii) (iii) (iv) (v) (vi) (vii) (viii) Prove that: (i) (ii) (i) (ii) #### Question 5: Prove that: (i) (ii) sin 47° + cos 77° = cos 17° (i) (ii) #### Question 6: Prove that: (i) cos 3A + cos 5A + cos 7A + cos 15A = 4 cos 4A cos 5A cos 6A (ii) cos A + cos 3A + cos 5A + cos 7A = 4 cos A cos 2A cos 4A (iii) sin A + sin 2A + sin 4A + sin 5A = 4 cos $\frac{A}{2}$ cos $\frac{3A}{2}$ sin 3A (iv) sin 3A + sin 2A − sin A = 4 sin A cos $\frac{A}{2}$ cos $\frac{3A}{2}$ (v) cos 20° cos 100° + cos 100° cos 140° − 140° cos 200° = −$\frac{3}{4}$ (vi) (vii) (i) (ii) (iii) (iv) (v) (vi) Hence, LHS = RHS (vii) Hence, LHS = RHS Disclaimer: The given question is incorrect. The correct question should be . Prove that: (i) (ii) (iii) (iv) (v) (i) (ii) (iii) (iv) (v) Prove that: (i) (ii) (iii) (iv) (v) (vi) (vii) (viii) (ix) (x) (xi) #### Question 9: Prove that: (i) (ii) cos (A + B + C) + cos (AB + C) + cos (A + BC) + cos (− A + B + C) = 4 cos A cos B cos C #### Question 10: Given: sin A + sin B = $\frac{1}{4}$         .....(i) cos A + cos B =$\frac{1}{2}$         .....(ii) Dividing (i) by (ii): #### Question 11: If cosec A + sec A = cosec B + sec B, prove that tan A tan B = $\mathrm{cot}\frac{A+B}{2}$. Given: #### Question 12: . Given: sin 2A = λ sin 2B $⇒\frac{\mathrm{sin}2A}{\mathrm{sin}2B}=\lambda$ $⇒\frac{\mathrm{tan}\left(A+B\right)}{\mathrm{tan}\left(A-B\right)}=\frac{\lambda +1}{\lambda -1}\phantom{\rule{0ex}{0ex}}$ Hence proved. #### Question 13: Prove that: (i) (ii) sin (B C) cos (AD) + sin (CA) cos (BD) + sin (AB) cos (CD) = 0 #### Question 15: If cos (α + β) sin (γ + δ) = cos (α − β) sin (γ − δ), prove that cot α cot β cot γ = cot δ #### Question 16: If y sin Ï• = x sin (2θ + Ï•), prove that (x + y) cot (θ + Ï•) = (yx) cot θ. Given: y sin Ï• = x sin (2θ + Ï•) #### Question 17: If cos (A + B) sin (CD) = cos (AB) sin (C + D), prove that tan A tan B tan C + tan D = 0. cos (A + B) sin (CD) = cos (AB) sin (C + D) $⇒$[cosA cosB − sinA sinB] [sinC cosD − cosC sinD] = [cosA cosB + sinA sinB] [sinC cosD +  cosC sinD] #### Question 18: If , prove that $xy+yz+zx=0$.                    [NCERT EXEMPLAR] #### Question 19: If , prove that .                           [NCERT EXEMPLAR] Given: $⇒\frac{m}{n}=\frac{\mathrm{sin}\left(\theta +2\alpha \right)}{\mathrm{sin}\theta }$ Applying componendo and dividendo, we get #### Question 1: If (cos α + cos β)2 + (sin α + sin β)2, write the value of λ. (cos α + cos β)2 + (sin α + sin β)2 Consider LHS: (cos α + cos β)2 + (sin α + sin β)2 #### Question 2: Write the value of sin $\frac{\mathrm{\pi }}{12}$ sin $\frac{5\mathrm{\pi }}{12}$. sin $\frac{\mathrm{\pi }}{12}$ sin $\frac{5\mathrm{\pi }}{12}$ #### Question 3: If sin A + sin B = α and cos A + cos B = β, then write the value of tan $\left(\frac{A+B}{2}\right)$. Given: sin A + sin B = α            .....(i) cos A + cos B = β           .....(ii) Dividing (i) by (ii): #### Question 4: If cos A = m cos B, then write the value of . #### Question 5: Write the value of the expression . #### Question 6: If A + B = $\frac{\mathrm{\pi }}{3}$ and cos A + cos B = 1, then find the value of cos $\frac{A-B}{2}$. Given: A + B = $\frac{\mathrm{\pi }}{3}$ and cos A + cos B = 1 #### Question 7: Write the value of $\mathrm{sin}\frac{\mathrm{\pi }}{15}\mathrm{sin}\frac{4\mathrm{\pi }}{15}\mathrm{sin}\frac{3\mathrm{\pi }}{10}$ #### Question 8: If sin 2A = λ sin 2B, then write the value of $\frac{\lambda +1}{\lambda -1}$. Given: sin 2A = λ sin 2B $⇒\frac{\mathrm{sin}2A}{\mathrm{sin}2B}=\lambda$ #### Question 9: Write the value of . #### Question 10: If cos (A + B) sin (CD) = cos (AB) sin (C + D), then write the value of tan A tan B tan C. cos (A + B) sin (CD) = cos (AB) sin (C + D) $⇒$[cosA cosB − sinA sinB] [sinC cosD − cosC sinD] =  [cosA cosB + sinA sinB] [sinC cosD + cosC sinD] #### Question 1: cos 40° + cos 80° + cos 160° + cos 240° = (a) 0 (b) 1 (c) $\frac{1}{2}$ (d) $-\frac{1}{2}$ (d) $-\frac{1}{2}$ #### Question 2: sin 163° cos 347° + sin 73° sin 167° = (a) 0 (b) $\frac{1}{2}$ (c) 1 (d) None of these (b) $\frac{1}{2}$ #### Question 3: If sin 2 θ + sin 2 Ï• = $\frac{1}{2}$ and cos 2 θ + cos 2 Ï• = $\frac{3}{2}$, then cos2 (θ − Ï•) = (a) $\frac{3}{8}$ (b) $\frac{5}{8}$ (c) $\frac{3}{4}$ (d) $\frac{5}{4}$ (b) $\frac{5}{8}$ Given: sin 2θ + sin 2Ï• = $\frac{1}{2}$                  .....(i) and cos 2θ + cos 2Ï• = $\frac{3}{2}$         .....(ii) Squaring and adding (i) and (ii), we get: (sin 2θ + sin 2Ï•)2 + (cos 2θ + cos 2Ï•)2 = $\frac{1}{4}+\frac{9}{4}$ #### Question 4: The value of cos 52° + cos 68° + cos 172° is (a) 0 (b) 1 (c) 2 (d) 3/2 (a) 0 #### Question 5: The value of sin 78° − sin 66° − sin 42° + sin 60° is (a) $\frac{1}{2}$ (b) $-\frac{1}{2}$ (c) −1 (d) None of these (d) None of these #### Question 6: If sin α + sin β = a and cos α − cos β = b, then tan $\frac{\mathrm{\alpha }-\mathrm{\beta }}{2}$= (a) $-\frac{a}{b}$ (b) $-\frac{b}{a}$ (c) $\sqrt{{a}^{2}+{b}^{2}}\phantom{\rule{0ex}{0ex}}$ (d) None of these (b) $-\frac{b}{a}$ Given: sin α + sin β = a                  .....(i) cos α − cos β = b                .....(ii) Dividing (i) by (ii): #### Question 7: cos 35° + cos 85° + cos 155° = (a) 0 (b) $\frac{1}{\sqrt{3}}$ (c) $\frac{1}{\sqrt{2}}$ (d) cos 275° (a) 0 #### Question 8: The value of sin 50° − sin 70° + sin 10° is equal to (a) 1 (b) 0 (c) 1/2 (d) 2 (b) 0 #### Question 9: sin 47° + sin 61° − sin 11° − sin 25° is equal to (a) sin 36° (b) cos 36° (c) sin 7° (d) cos 7° (d) cos 7° #### Question 10: If cos A = m cos B, then = (a) $\frac{m-1}{m+1}$ (b) $\frac{m+2}{m-2}$ (c)$\frac{m+1}{m-1}$ (d) None of these (c)$\frac{m+1}{m-1}$ #### Question 11: If A, B, C are in A.P., then = (a) tan B (b) cot B (c) tan 2 B (d) None of these (b) cot B Since A,B and C are in A.P, $=\frac{\mathrm{sin}\left(\frac{A-C}{2}\right)\mathrm{cos}\left(\frac{A+C}{2}\right)}{\mathrm{sin}\left(\frac{A+C}{2}\right)\mathrm{sin}\left(\frac{A-C}{2}\right)}\phantom{\rule{0ex}{0ex}}=\frac{\mathrm{cos}\left(\frac{A+C}{2}\right)}{\mathrm{sin}\left(\frac{A+C}{2}\right)}\phantom{\rule{0ex}{0ex}}=\frac{\mathrm{cos}B}{\mathrm{sin}B}\phantom{\rule{0ex}{0ex}}=\mathrm{cot}B$ #### Question 12: If sin (B + CA), sin (C + AB), sin (A + BC) are in A.P., then cot A, cot B and cot C are in (a) GP (b) HP (c) AP (d) None of these (b) HP Given: sin (B + CA), sin (C + AB) and sin (A + BC) are in A.P. Hence, cotA, cotB and cotC are in HP. #### Question 13: If sin x + sin y = $\sqrt{3}$ (cos y − cos x), then sin 3x + sin 3y = (a) 2 sin 3x (b) 0 (c) 1 (d) none of these We have, sin x + sin y = $\sqrt{3}$ (cos y − cos x) $\mathrm{Case}-\mathrm{I}$ ${\mathrm{sin}}3x+\mathrm{sin}3y=\mathrm{sin}\left(-3y\right)+\mathrm{sin}3y=-\mathrm{sin}3y+\mathrm{sin}3y=0$ #### Question 14: If $\mathrm{tan}\alpha =\frac{x}{x+1}$ and $\mathrm{tan}\beta =\frac{1}{2x+1}$, then $\alpha +\beta$ is equal to (a) $\frac{\mathrm{\pi }}{2}$                                  (a) $\frac{\mathrm{\pi }}{3}$                                  (a) $\frac{\mathrm{\pi }}{6}$                                  (a) $\frac{\mathrm{\pi }}{4}$ It is given that $\mathrm{tan}\alpha =\frac{x}{x+1}$ and $\mathrm{tan}\beta =\frac{1}{2x+1}$. Now, $=\frac{2{x}^{2}+2x+1}{2{x}^{2}+2x+1}\phantom{\rule{0ex}{0ex}}=1$ $\therefore \mathrm{tan}\left(\alpha +\beta \right)=1=\mathrm{tan}\frac{\mathrm{\pi }}{4}\phantom{\rule{0ex}{0ex}}⇒\alpha +\beta =\frac{\mathrm{\pi }}{4}$ Hence, the correct answer is option D. #### Question 1: Express each of the following as the sum or difference of sines and cosines: (i) 2 sin 3x cos x (ii) 2 cos 3x sin 2x (iii) 2 sin 4x sin 3x (iv) 2 cos 7x cos 3x (i) (ii) (iii) (iv) #### Question 2: Prove that: (i) $2\mathrm{sin}\frac{5\mathrm{\pi }}{12}\mathrm{sin}\frac{\mathrm{\pi }}{12}=\frac{1}{2}$ (ii) $2\mathrm{cos}\frac{5\mathrm{\pi }}{12}\mathrm{cos}\frac{\mathrm{\pi }}{12}=\frac{1}{2}$ (iii) $2\mathrm{sin}\frac{5\mathrm{\pi }}{12}\mathrm{cos}\frac{\mathrm{\pi }}{12}=\frac{\sqrt{3}+2}{2}$ (i) (ii) (iii) Show that : (i) (ii) (i) (ii) #### Question 5: Prove that: (i) cos 10° cos 30° cos 50° cos 70° = $\frac{3}{16}$ (ii) cos 40° cos 80° cos 160° = $-\frac{1}{8}$ (iii) sin 20° sin 40° sin 80° = $\frac{\sqrt{3}}{8}$ (iv) cos 20° cos 40° cos 80° = $\frac{1}{8}$ (v) tan 20° tan 40° tan 60° tan 80° = 3 (vi) tan 20° tan 30° tan 40° tan 80° = 1 (vii) sin 10° sin 50° sin 60° sin 70° = $\frac{\sqrt{3}}{16}$ (viii) sin 20° sin 40° sin 60° sin 80° = $\frac{3}{16}$ (i) (ii) (iii) (iv) (v) LHS = tan 20° tan 40° tan 60° tan 80° (vi) LHS = tan 20° tan 30° tan 60° tan 80° (vii) (viii) #### Question 6: Show that: (i) sin A sin (BC) + sin B sin (CA) + sin C sin (AB) = 0 (ii) sin (BC) cos (AD) + sin (CA) cos (BD) + sin (AB) cos (CD) = 0 (i) (ii) #### Question 7: Prove that $\frac{\pi }{3}=60°$ $=\frac{\mathrm{sin}x\left({\mathrm{sin}}^{2}60°-{\mathrm{sin}}^{2}x\right)}{\mathrm{cos}x\left({\mathrm{cos}}^{2}60°-{\mathrm{sin}}^{2}x\right)}\phantom{\rule{0ex}{0ex}}=\frac{\mathrm{sin}x\left(\frac{3}{4}-{\mathrm{sin}}^{2}x\right)}{\mathrm{cos}x\left(\frac{1}{4}-{\mathrm{sin}}^{2}x\right)}\phantom{\rule{0ex}{0ex}}=\frac{\mathrm{sin}x\left(3-4{\mathrm{sin}}^{2}x\right)}{\mathrm{cos}x\left(1-4{\mathrm{sin}}^{2}x\right)}$ $=\frac{3\mathrm{sin}x-4{\mathrm{sin}}^{3}x}{4{\mathrm{cos}}^{3}x-3\mathrm{cos}x}\phantom{\rule{0ex}{0ex}}=\frac{\mathrm{sin}3x}{\mathrm{cos}3x}\phantom{\rule{0ex}{0ex}}=\mathrm{tan}3x=\mathrm{RHS}$ #### Question 8: If α + β = $\frac{\pi }{2}$, show that the maximum value of cos α cos β is $\frac{1}{2}$. $\frac{\pi }{2}=90°$
4,598
11,242
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 85, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.40625
4
CC-MAIN-2019-43
longest
en
0.568206
https://www.atariarchives.org/deli/computer_chess.php
1,553,323,341,000,000,000
text/html
crawl-data/CC-MAIN-2019-13/segments/1552912202728.21/warc/CC-MAIN-20190323060839-20190323082839-00301.warc.gz
682,515,050
7,317
COMPUTER CHESS by Dan and Kathe Spracklen Dan and Kathe Spracklen are the authors of Sargon, the most popular chess playing program for home computers. Ever wonder how a computer can play a game of chess? Just think about it. Chess is a strategy game. You have to study the board and plan how to maneuver your pieces. Finally, after devising a complex strategy, you play the decisive move-checkmate! How can a computer study a chess board? How can it make plans? How can it checkmate? Taken all at once, the problem might seem unsolvable. But the job can be broken down into three main parts: 1) The mechanics phase: describing a chess board and pieces in a way the computer can understand and teaching it how to move the pieces according to the rules of chess. 2) The search phase: looking ahead at what can happen as the game progresses. The computer still can't make real plans the way a human does, but instead must look at thousands of possibilities-many times more than the human mind can handle. 3) The evaluation phase: sizing up the merits of a particular position on the chess board. It wouldn't do any good for the computer to look at all those positions if it didn't know which ones were better than the others. Sure, it might know when the other guy is checkmated. But if mate isn't in sight how does it know what to do? We'll take a look at how the Sargon chess program handles each of these phases, then discuss some of the computer chess player's limitations and how they might be conquered. In this famous game between two past world champions, White mounts an attack against Black's King. Nine moves later, faced with inevitable mate, Black resigns. How can a computer simulate this complex activity of the human mind? Setting Up the Board A human can look at a chess board and see black and white squares with pieces sitting in various places. A computer can only deal with numbers. To the computer, the chess board is an area of memory-64 locations set aside to represent the real board. A value of zero in a particular location indicates that the square is empty. A chess piece must also be a number for the computer. For instance, we might assign the following codes to the chess pieces: 1- White Pawn 2 - White Knight 3 - White Bishop 4 - White Rook 5 - White Queen 6 - White King 7 - Black Pawn 8 - Black Knight 9 - Black Bishop 10 - Black Rook 11- Black Queen 12 - Black King Thus the chess board and pieces would look like this: Moving the chess pieces, then, becomes an arithmetic process. If we number the board squares from 1 to 64, moving a White Pawn up two squares can be accomplished by adding 16 to its starting location. A Rook can be moved one square to the left by subtracting 1 from its current square number. Other pieces can be moved by simple addition and subtraction operations as well. Unfortunately, there is a problem associated with this simple scheme. Take, for example, a Black Rook standing on square 57. It can't move to the left because it's already at the left edge of the board. If we try to move it one square to the left by subtracting 1 from its current square number, we get square 56. That's a valid board square under this system, but it's not a legal Rook move so a border is added to the board. The extra locations are filled with a new code: -1 = Off Board Using this new code solves the problem. Moves are still addition and subtraction operations, but the amounts added and subtracted are a little different from the other numbering. You might notice that the border is two squares wide. That's because Knights can jump that far. With the border, the board now looks like this: In trying to decide on a strategy, a human chess player tries to combine general concepts such as putting pressure on a weak point with specific variations ("If I go here, then he'll go there"). The number of specific variations a human can handle is quite limited; the rest must be left to intuition and experience. The computer, with its vast calculating capability, can look at many thousands of variations while making a single move choice. It is this calculating power that first frightened human chess masters and still overwhelms the novice to intermediate player when faced with a computer opponent. But why can't computers outcalculate the masters and look ahead to a won position from the earliest moves of the game? The answer to this question is the fundamental problem of computer chess. Let's look again at the position from the Alekhine-Lasker game shown at the beginning of this article. The game continued as follows: 1. Q-Q6 2. R(KB1)-Ql 3. Q-N3 4. Q-N5! 5. N-Q6 6. P-K4 7. R-Q3 8. N-B5ch 9. QXP! N(K4)-Q2 R(Rl)-Ql P-N3 K-R1 K-N2 N-KN1 P-B3 K-R1 Resigns In this final position, if Black does not take the Queen, White will mate with Q-N7. If Black does take the Queen, White mates with R-R3. You might think a computer capable of examining thousands of positions per move would certainly be able to look nine moves ahead and find the winning line. But you'd be wrong! Here's why it can't. In the starting position, White has 9 legal Pawn moves, 6 legal Bishop moves, 7 legal Knight moves, 8 legal Rook moves, 13 legal Queen moves, and 1 legal King move for a total of 44 legal moves. Suppose we start the look-ahead search by trying P-QR3. In the resulting position, Black has 6 legal Pawn moves, 12 legal Knight moves, 9 legal Rook moves, 7 legal Queen moves, and 1 legal King move for a total of 35 legal moves. If we assume that each of White's possibilities will give Black the same number of legal responses, the computer would have to examine about 44 x 35 = 1,540 positions just to look one full move ahead. To look nine moves ahead would require examining approximately 44 x35 x44 x35 x44 x35 x44 x35 x44 x35 x44 x35 x44 x35 x44 x35 x44 x35 different positions. Though this is an unthinkable task for the computer, Alekhine surely saw the outcome of his plan. With the limited calculating power of the human mind, he could perform the needed look-ahead-a task beyond the vast calculating power of the electronic brain. Undoubtedly the reason Alekhine succeeded was that he didn't have to consider all 44 legal moves in the first position. For example, a mating attack seldom begins with 1.P-QR3. This process of elimination, called forward pruning, is the basis of one solution to his dilemma. Typically, 7 to 9 moves might be chosen from the 30 to 40 possibilities. The problems of forward pruning are twofold: on the one hand, 7x9x7x9x7x9x7x9x7x9x7x9 x7 x9 x7 x9 x7 x9 is still an unmanageable number; on the other hand, one of the moves not looked at might well be the best line of play. Another approach to the explosion problem is to take advantage of some of the mathematical properties of the search to rule out examining certain branches. Called alpha-beta pruning, this technique uses the assumption that if the computer can, for example, force the win of a Pawn along some line of play, it will not choose instead to lose a Bishop. These "loser" lines can thus be cut short. The principal advantage of alpha-beta pruning is that it produces the same move that the computer would play with no pruning of any kind, and it does this in about the square root of the time it would take the non-pruning search. With alpha-beta pruning, the Alekhine-Lasker position is still out of reach, but the level of play is strong enough to give tough competition to all but the expert or master-strength player. Positional Judgment Each time the computer evaluates a new position, it must assign a numerical score indicating how "good" the position is. Checkmating the opponent is infinitely good; being checkmated is infinitely bad. In between is the hard part. The Sargon program's evaluation function is heavily dependent on the material balance. Many times, chess masters will willingly give up a Pawn in exchange for an advantage in position. By contrast, Sargon will give up nearly every positional advantage it possesses in order to win a Pawn or avoid losing one. If material and position were weighted more evenly, the program would give up Pawns inappropriately and thus lose for lack of material. Quantifying chess concepts like mobility, attacking potential or piece placement is far more difficult than encoding pieces and moves. The rules of chess are fixed and inviolable, but the principles for good chess play are full of conditions and exceptions. Computer chess programs are boxed in by two interrelated difficulties. On one hand, the explosion in move possibilities limits how far ahead the program can calculate; on the other hand, evaluating the positions the computer can reach is a shaky approximation at best. The two problems are interrelated, since improving the evaluation might well take so much processor time that the depth of search is cut short. One easy solution is to wait patiently for computers to become faster. Although this will surely happen, we should realize that a computer running about thirty-six times as fast as current machines would only get about one move deeper in the look-ahead. A more tractable approach is to concentrate on improving the evaluation function (while keeping a close watch on the time it consumes). For instance, a chess player's "sense of danger" will certainly be aroused by an unprotected King. Programming a "sense of danger" could involve nothing more than reducing the evaluation score for the same situation. Gaining a better understanding of the worth of a Pawn, compared to certain positional criteria, could produce a marked improvement in the machine's play. IMPROVING A CHESS CLASSIC When we wrote Sargon III, the sequel to our Sargon II chess program for microcomputers, we included the following two substantive changes that greatly enhanced its overall chess-playing abilities. • Rank and file. An improved mathematical model represents the board, employing a grid system with square designations that can be broken down into two parts: the rank (row of the chess board) and the file (column of the board). Such a system greatly simplifies encoding chess strategy. Where previously the program had to calculate the new square number before it could look at the board array to see what was stored at that square, in Sargon III the square value itself tells if a piece has attempted to move off the board. • Capture search. Speed is the greatest single limiting factor in the look-ahead process. The more quickly a given position can be created and evaluated, the more positions can be examined and the deeper the search can explore. Sargon II stopped at every move to be evaluated and took a long, hard look at the possible piece exchanges and the expected outcome of each; called a "static exchange evaluator," this process could examine about twenty-three positions per second on the Apple II. Sargon III uses a faster, more accurate method, the "capture search," which determines the value of trades by actually making them on the board. Once it reaches a position that it would like to evaluate, this process generates a restricted set of moves: captures only. It keeps generating captures until 1) there are no more captures possible, or 2) the remaining captures available lose more than they gain. This, with other improvements in the search, means that Sargon III on the Apple II can evaluate some 250 positions per second.     Sargon III's evaluation function benefited from the revised method of storing the board in memory and from the improved method of examining exchanges. But improvements in positional analysis cannot be summed up in one or two sweeping changes that make everything simpler or faster. It is always difficult to translate abstract human knowledge into a mathematical formula. ("Keep up pressure in the center," the master says. But how do you describe "pressure" to a computer?) In the end, refinements are the sum of more and more little pieces of knowledge finally translated into algorithms.     Guided mostly by your comments and letters about Sargon II, we also addressed the questions of how the program presents itself to the user, what you can do with it and how easily you can do it. Since we made a great many changes in this respect, you might say this is the part of the program that you improved. D. AND K. SPRACKLEN
2,693
12,334
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.6875
3
CC-MAIN-2019-13
longest
en
0.952983
https://www.edplace.com/worksheet_info/maths/keystage4/year10/topic/1242/7214/which-chart-to-use
1,722,969,332,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722640497907.29/warc/CC-MAIN-20240806161854-20240806191854-00413.warc.gz
601,604,286
12,797
# Select an Appropriate Chart In this worksheet, students will decide which graph to use for a given situation. Key stage:  KS 4 Year:  GCSE GCSE Subjects:   Maths GCSE Boards:   AQA, Eduqas, Pearson Edexcel, OCR, Curriculum topic:   Statistics Curriculum subtopic:   Statistics Interpreting and Representing Data Difficulty level: #### Worksheet Overview There are a number of different types of graph that you can use in statistics. The question you will sometimes need to answer is: Which one should I use? There are a number of situations where you will be told exactly which graph to use - for example, if it is a histogram, scatter graph, or cumulative frequency diagram. However, we have to be able to decide between the four other types - bar charts, pie charts, pictograms and vertical bar charts. Each of the different types of graph have certain advantages and disadvantages, let's look at them: Advantage Disadvantage Bar chart Clear and easy to read Good if you have two sets of data (dual) Does not show proportions Pie chart Great for comparing proportions We don't know the exact values of each piece unless we are told it Shouldn't be used for grouped data Pictogram Good visual impact for simple data Can be difficult to read accurately due to not being able to split pictures up Vertical bar chart Clear and easy to read Good if you have two sets of data Quick to draw Does not show proportions Types of data There are two types of data: discrete and continuous data. Discrete data is things that can't be split up into smaller units (eg. colour, shoe size, rooms in a house). Continuous data is things that can be measured on a scale (eg. height, weight). Continuous data should be displayed using a bar chart or vertical bar chart. Discrete data can be displayed on any of the graphs. Let's have a go at some questions now. ### What is EdPlace? We're your National Curriculum aligned online education content provider helping each child succeed in English, maths and science from year 1 to GCSE. With an EdPlace account you’ll be able to track and measure progress, helping each child achieve their best. We build confidence and attainment by personalising each child’s learning at a level that suits them. Get started
489
2,265
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.6875
4
CC-MAIN-2024-33
latest
en
0.909066
https://rosettacode.org/wiki/Word_wheel
1,721,416,273,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514917.3/warc/CC-MAIN-20240719170235-20240719200235-00651.warc.gz
429,664,091
77,057
# Word wheel Word wheel You are encouraged to solve this task according to the task description, using any language you may know. A "word wheel" is a type of word game commonly found on the "puzzle" page of newspapers. You are presented with nine letters arranged in a circle or 3×3 grid. The objective is to find as many words as you can using only the letters contained in the wheel or grid. Each word must contain the letter in the centre of the wheel or grid. Usually there will be a minimum word length of 3 or 4 characters. Each letter may only be used as many times as it appears in the wheel or grid. An example N D E O K G E L W Write a program to solve the above "word wheel" puzzle. Specifically: • Find all words of 3 or more letters using only the letters in the string   ndeokgelw. • All words must contain the central letter   K. • Each letter may be used only as many times as it appears in the string. • For this task we'll use lowercase English letters exclusively. A "word" is defined to be any string contained in the file located at   http://wiki.puzzlers.org/pub/wordlists/unixdict.txt. If you prefer to use a different dictionary,   please state which one you have used. Optional extra Word wheel puzzles usually state that there is at least one nine-letter word to be found. Using the above dictionary, find the 3x3 grids with at least one nine-letter solution that generate the largest number of words of three or more letters. ## 11l Translation of: Python ```V GRID = ‘N D E O K G E L W’ F getwords() R words.filter(w -> w.len C 3..9) F solve(grid, dictionary) DefaultDict[Char, Int] gridcount L(g) grid gridcount[g]++ F check_word(word) DefaultDict[Char, Int] lcount L(l) word lcount[l]++ L(l, c) lcount I c > @gridcount[l] R 1B R 0B V mid = grid[4] R dictionary.filter(word -> @mid C word & !@check_word(word)) V chars = GRID.lowercase().split_py().join(‘’) V found = solve(chars, dictionary' getwords()) print(found.join("\n"))``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## 8080 Assembly This program runs under CP/M, and takes the dictionary file and wheel definition as arguments. The file is processed block by block, so it can be arbitrarily large (the given ~206kb `unixdict.txt` works fine). ```puts: equ 9 ; CP/M syscall to print string fopen: equ 15 ; CP/M syscall to open a file FCB1: equ 5Ch ; First FCB (input file) DTA: equ 80h ; Disk transfer address org 100h ;;; Make wheel (2nd argument) lowercase and store it lxi d,DTA+1 ; Start of command line arguments scan: inr e ; Scan until we find a space ldax d cpi ' ' ; Found it? jnz scan ; If not, try again inx d ; If so, wheel starts 1 byte onwards lxi h,wheel ; Space for wheel lxi b,920h ; B=9 (chars), C=20 (case bit) whlcpy: ldax d ; Get wheel character ora c ; Make lowercase mov m,a ; Store inx d ; Increment both pointers inx h dcr b ; Decrement counter jnz whlcpy ; While not zero, copy next character ;;; Open file in FCB1 mvi e,FCB1 ; D is already 0 mvi c,fopen call 5 ; Returns A=FF on error inr a ; If incrementing A gives zero, jz err ; then print error and stop lxi h,word ; Copy into word ;;; Read a 128-byte block from the file block: push h ; Keep word pointer lxi d,FCB1 ; Read from file call 5 pop h ; Restore word pointer dcr a ; A=1 = EOF rz ; If so, stop. inr a ; Otherwise, A<>0 = error jnz err lxi d,DTA ; Start reading at DTA char: ldax d ; Get character mov m,a ; Store in word cpi 26 ; EOF reached? rz ; Then stop cpi 10 ; End of line reached? jz ckword ; Then we have a full word inx h ; Increment word pointer nxchar: inr e ; Increment DTA pointer (low byte) jz block ; If rollover, get next block jmp char ; Otherwise, handle next character in block ;;; Check if current word is valid ckword: push d ; Keep block pointer lxi d,wheel ; Copy the wheel lxi h,wcpy mvi c,9 ; 9 characters cpyw: ldax d ; Get character mov m,a ; Store in copy inx h ; Increment pointers inx d dcr c ; Decrement counters jnz cpyw ; Done yet? lxi d,word ; Read from current word wrdch: ldax d ; Get character cpi 32 ; Check if <32 jc wdone ; If so, the word is done lxi h,wcpy ; Check against the wheel letters mvi b,9 wlch: cmp m ; Did we find it? jz findch inx h ; If not, try next character in wheel dcr b ; As long as there are characters jnz wlch ; If no match, this word is invalid wnext: pop d ; Restore block pointer lxi h,word ; Start reading new word jmp nxchar ; Continue with character following word findch: mvi m,0 ; Found a match - set char to 0 inx d ; And look at next character in word jmp wrdch wdone: lda wcpy+4 ; Word is done - check if middle char used ana a ; If not, the word is invalid jnz wnext lxi h,wcpy ; See how many characters used lxi b,9 ; C=9 (counter), B=0 (used) whtest: mov a,m ; Get wheel character ana a ; Is it zero? jnz \$+4 ; If not, skip next instr inr b ; If so, count it inx h ; Next wheel character dcr c ; Decrement counter jnz whtest mvi a,2 ; At least 3 characters must be used cmp b jnc wnext ; If not, the word is invalid xchg ; If so, the word _is_ valid, pointer in HL inx h mvi m,10 ; and LF inx h mvi m,'\$' ; and the CP/M string terminator lxi d,word ; Then print the word mvi c,puts call 5 jmp wnext err: lxi d,errs ; Print file error mvi c,puts jz 5 errs: db 'File error\$' ; Error message wheel: ds 9 ; Room for wheel wcpy: ds 9 ; Copy of wheel (to mark characters used) word: equ \$ ; Room for current word``` Output: ```A>wheel unixdict.txt ndeokgelw eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ```with Ada.Characters.Handling; use Ada.Characters.Handling; procedure Wordwheel is Compulsory_Ix : constant Positive := 5; Min_Length : constant Positive := 3; function Char_Posn (Str : Unbounded_String; C : Character) return Natural is begin for Ix in 1 .. Length (Str) loop if Element (Str, Ix) = C then return Ix; end if; end loop; return 0; end Char_Posn; procedure Search (Dict_Filename : String; Wheel : String) is Dict_File : File_Type; Allowed : constant String := To_Lower (Wheel); Required_Char : constant String (1 .. 1) := "" & Allowed (Compulsory_Ix); Available, Dict_Word : Unbounded_String; Dict_Word_Len : Positive; Matched : Boolean; Posn : Natural; begin Open (File => Dict_File, Mode => In_File, Name => Dict_Filename); while not End_Of_File (Dict_File) loop Dict_Word := Get_Line (Dict_File); Dict_Word_Len := Length (Dict_Word); if Dict_Word_Len >= Min_Length and then Dict_Word_Len <= Wheel'Length and then then Available := To_Unbounded_String (Allowed); Matched := True; for i in 1 .. Dict_Word_Len loop Posn := Char_Posn (Available, Element (Dict_Word, i)); if Posn > 0 then Delete (Source => Available, From => Posn, Through => Posn); else Matched := False; exit; end if; end loop; if Matched then Put_Line (Dict_Word); end if; end if; end loop; Close (Dict_File); end Search; begin Search ("unixdict.txt", "ndeokgelw"); end Wordwheel; ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## APL Works with: Dyalog APL ```wordwheel←{ words←((~∊)∘⎕TC⊆⊢) 80 ¯1⎕MAP ⍵ match←{ 0=≢⍵:1 ~(⊃⍵)∊⍺:0 ⍺[(⍳⍴⍺)~⍺⍳⊃⍵]∇1↓⍵ } middle←(⌈0.5×≢)⊃⊢ words←((middle ⍺)∊¨words)/words words←(⍺∘match¨words)/words (⍺⍺≤≢¨words)/words } ``` Output: ``` 'ndeokgelw' (3 wordwheel) 'unixdict.txt' eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## AppleScript ```use AppleScript version "2.4" use framework "Foundation" ------------------------ WORD WHEEL ---------------------- -- wordWheelMatches :: NSString -> [String] -> String -> String on wordWheelMatches(lexicon, wordWheelRows) set wheelGroups to group(sort(characters of ¬ concat(wordWheelRows))) script isWheelWord on |λ|(w) script available on |λ|(a, b) length of a ≤ length of b end |λ| end script script used on |λ|(grp) w contains item 1 of grp end |λ| end script all(my identity, ¬ zipWith(available, ¬ group(sort(characters of w)), ¬ filter(used, wheelGroups))) end |λ| end script set matches to filter(isWheelWord, ¬ filteredLines(wordWheelPreFilter(wordWheelRows), lexicon)) (length of matches as text) & " matches:" & ¬ linefeed & linefeed & unlines(matches) end wordWheelMatches -- wordWheelPreFilter :: [String] -> String on wordWheelPreFilter(wordWheelRows) set pivot to item 2 of item 2 of wordWheelRows set charSet to nub(concat(wordWheelRows)) "(2 < self.length) and (self contains '" & pivot & "') " & ¬ "and not (self matches '^.*[^" & charSet & "].*\$') " end wordWheelPreFilter --------------------------- TEST ------------------------- on run set fpWordList to scriptFolder() & "unixdict.txt" if doesFileExist(fpWordList) then {"nde", "okg", "elw"}) else linefeed & tab & fpWordList end if end run ----------- GENERIC :: FILTERED LINES FROM FILE ---------- -- doesFileExist :: FilePath -> IO Bool on doesFileExist(strPath) set ca to current application set oPath to (ca's NSString's stringWithString:strPath)'s ¬ stringByStandardizingPath set {bln, int} to (ca's NSFileManager's defaultManager's ¬ fileExistsAtPath:oPath isDirectory:(reference)) bln and (int ≠ 1) end doesFileExist -- filteredLines :: String -> NString -> [a] on filteredLines(predicateString, s) -- A list of lines filtered by an NSPredicate string set ca to current application set predicate to ca's NSPredicate's predicateWithFormat:predicateString set array to ca's NSArray's ¬ arrayWithArray:(s's componentsSeparatedByString:(linefeed)) (array's filteredArrayUsingPredicate:(predicate)) as list end filteredLines -- readFile :: FilePath -> IO NSString set ca to current application set e to reference set {s, e} to (ca's NSString's ¬ stringWithContentsOfFile:((ca's NSString's ¬ stringWithString:strPath)'s ¬ stringByStandardizingPath) ¬ encoding:(ca's NSUTF8StringEncoding) |error|:(e)) if missing value is e then s else (localizedDescription of e) as string end if -- scriptFolder :: () -> IO FilePath on scriptFolder() -- The path of the folder containing this script try tell application "Finder" to ¬ POSIX path of ((container of (path to me)) as alias) on error display dialog "Script file must be saved" end try end scriptFolder ------------------------- GENERIC ------------------------ -- Tuple (,) :: a -> b -> (a, b) on Tuple(a, b) -- Constructor for a pair of values, -- possibly of two different types. {type:"Tuple", |1|:a, |2|:b, length:2} end Tuple -- all :: (a -> Bool) -> [a] -> Bool on all(p, xs) -- True if p holds for every value in xs tell mReturn(p) set lng to length of xs repeat with i from 1 to lng if not |λ|(item i of xs, i, xs) then return false end repeat true end tell end all -- concat :: [[a]] -> [a] -- concat :: [String] -> String on concat(xs) set lng to length of xs if 0 < lng and string is class of (item 1 of xs) then set acc to "" else set acc to {} end if repeat with i from 1 to lng set acc to acc & item i of xs end repeat acc end concat -- eq (==) :: Eq a => a -> a -> Bool on eq(a, b) a = b end eq -- filter :: (a -> Bool) -> [a] -> [a] on filter(p, xs) tell mReturn(p) set lst to {} set lng to length of xs repeat with i from 1 to lng set v to item i of xs if |λ|(v, i, xs) then set end of lst to v end repeat if {text, string} contains class of xs then lst as text else lst end if end tell end filter -- foldl :: (a -> b -> a) -> a -> [b] -> a on foldl(f, startValue, xs) tell mReturn(f) set v to startValue set lng to length of xs repeat with i from 1 to lng set v to |λ|(v, item i of xs, i, xs) end repeat return v end tell end foldl -- group :: Eq a => [a] -> [[a]] on group(xs) script eq on |λ|(a, b) a = b end |λ| end script groupBy(eq, xs) end group -- groupBy :: (a -> a -> Bool) -> [a] -> [[a]] on groupBy(f, xs) -- Typical usage: groupBy(on(eq, f), xs) set mf to mReturn(f) script enGroup on |λ|(a, x) if length of (active of a) > 0 then set h to item 1 of active of a else set h to missing value end if if h is not missing value and mf's |λ|(h, x) then {active:(active of a) & {x}, sofar:sofar of a} else {active:{x}, sofar:(sofar of a) & {active of a}} end if end |λ| end script if length of xs > 0 then set dct to foldl(enGroup, {active:{item 1 of xs}, sofar:{}}, rest of xs) if length of (active of dct) > 0 then sofar of dct & {active of dct} else sofar of dct end if else {} end if end groupBy -- identity :: a -> a on identity(x) -- The argument unchanged. x end identity -- length :: [a] -> Int on |length|(xs) set c to class of xs if list is c or string is c then length of xs else (2 ^ 29 - 1) -- (maxInt - simple proxy for non-finite) end if end |length| -- min :: Ord a => a -> a -> a on min(x, y) if y < x then y else x end if end min -- mReturn :: First-class m => (a -> b) -> m (a -> b) on mReturn(f) -- 2nd class handler function -- lifted into 1st class script wrapper. if script is class of f then f else script property |λ| : f end script end if end mReturn -- map :: (a -> b) -> [a] -> [b] on map(f, xs) -- The list obtained by applying f -- to each element of xs. tell mReturn(f) set lng to length of xs set lst to {} repeat with i from 1 to lng set end of lst to |λ|(item i of xs, i, xs) end repeat return lst end tell end map -- nub :: [a] -> [a] on nub(xs) nubBy(eq, xs) end nub -- nubBy :: (a -> a -> Bool) -> [a] -> [a] on nubBy(f, xs) set g to mReturn(f)'s |λ| script notEq property fEq : g on |λ|(a) script on |λ|(b) not fEq(a, b) end |λ| end script end |λ| end script script go on |λ|(xs) if (length of xs) > 1 then set x to item 1 of xs {x} & go's |λ|(filter(notEq's |λ|(x), items 2 thru -1 of xs)) else xs end if end |λ| end script go's |λ|(xs) end nubBy -- sort :: Ord a => [a] -> [a] on sort(xs) ((current application's NSArray's arrayWithArray:xs)'s ¬ sortedArrayUsingSelector:"compare:") as list end sort -- take :: Int -> [a] -> [a] -- take :: Int -> String -> String on take(n, xs) if 0 < n then items 1 thru min(n, length of xs) of xs else {} end if end take -- unlines :: [String] -> String on unlines(xs) -- A single string formed by the intercalation -- of a list of strings with the newline character. set {dlm, my text item delimiters} to ¬ {my text item delimiters, linefeed} set s to xs as text set my text item delimiters to dlm s end unlines -- zipWith :: (a -> b -> c) -> [a] -> [b] -> [c] on zipWith(f, xs, ys) set lng to min(|length|(xs), |length|(ys)) if 1 > lng then return {} set xs_ to take(lng, xs) -- Allow for non-finite set ys_ to take(lng, ys) -- generators like cycle etc set lst to {} tell mReturn(f) repeat with i from 1 to lng set end of lst to |λ|(item i of xs_, item i of ys_) end repeat return lst end tell end zipWith ``` Output: ```17 matches: eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke``` ## AutoHotkey ```letters := ["N", "D", "E", "O", "K", "G", "E", "L", "W"] result := "" for word in Word_wheel(wList, letters, 3) result .= word "`n" MsgBox % result return Word_wheel(wList, letters, minL){ oRes := [] for i, w in StrSplit(wList, "`n", "`r") { if (StrLen(w) < minL) continue word := w for i, l in letters w := StrReplace(w, l,,, 1) if InStr(word, letters[5]) && !StrLen(w) oRes[word] := true } return oRes } ``` Output: ```eke elk k keel keen keg ken keno knee kneel knew know knowledge kong leek ok week wok woke``` ## AWK ```# syntax: GAWK -f WORD_WHEEL.AWK letters unixdict.txt # the required letter must be first # # example: GAWK -f WORD_WHEEL.AWK Kndeogelw unixdict.txt # BEGIN { letters = tolower(ARGV[1]) required = substr(letters,1,1) size = 3 ARGV[1] = "" } { word = tolower(\$0) leng_word = length(word) if (word ~ required && leng_word >= size) { hits = 0 for (i=1; i<=leng_word; i++) { if (letters ~ substr(word,i,1)) { hits++ } } if (leng_word == hits && hits >= size) { for (i=1; i<=leng_word; i++) { c = substr(word,i,1) if (gsub(c,"&",word) > gsub(c,"&",letters)) { next } } words++ printf("%s ",word) } } } END { printf("\nletters: %s, '%s' required, %d words >= %d characters\n",letters,required,words,size) exit(0) } ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke letters: kndeogelw, 'k' required, 17 words >= 3 characters ``` ## BASIC ```10 DEFINT A-Z 20 DATA "ndeokgelw","unixdict.txt" 40 OPEN "I",1,F\$ 50 IF EOF(1) THEN CLOSE 1: END 60 C\$ = WH\$ 70 LINE INPUT #1, W\$ 80 FOR I=1 TO LEN(W\$) 90 FOR J=1 TO LEN(C\$) 100 IF MID\$(W\$,I,1)=MID\$(C\$,J,1) THEN MID\$(C\$,J,1)="@": GOTO 120 110 NEXT J: GOTO 50 120 NEXT I 130 IF MID\$(C\$,(LEN(C\$)+1)/2,1)<>"@" GOTO 50 140 C=0: FOR I=1 TO LEN(C\$): C=C-(MID\$(C\$,I,1)="@"): NEXT 150 IF C>=3 THEN PRINT W\$, 160 GOTO 50 ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## BCPL ```get "libhdr" // Read word from selected input \$( let ch = ? v%0 := 0 \$( ch := rdch() if ch = endstreamch then resultis false if ch = '*N' then resultis true v%0 := v%0 + 1 v%(v%0) := ch \$) repeat \$) // Test word against wheel let match(wheel, word) = valof \$( let wcopy = vec 2+9/BYTESPERWORD for i = 0 to wheel%0 do wcopy%i := wheel%i for i = 1 to word%0 do \$( let idx = ? test valof \$( for j = 1 to wcopy%0 do if word%i = wcopy%j then \$( idx := j resultis true \$) resultis false \$) then wcopy%idx := 0 // we've used this letter else resultis false // word cannot be made \$) resultis wcopy%((wcopy%0+1)/2)=0 & // middle letter must be used 3 <= valof // at least 3 letters must be used \$( let count = 0 for i = 1 to wcopy%0 do if wcopy%i=0 then count := count + 1 resultis count \$) \$) // Test unixdict.txt against ndeokgelw let start() be \$( let word = vec 2+64/BYTESPERWORD let file = findinput("unixdict.txt") let wheel = "ndeokgelw" selectinput(file) if match(wheel, word) do writef("%S*N", word) \$)``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke``` ## C ```#include <stdbool.h> #include <stdio.h> #define MAX_WORD 80 #define LETTERS 26 bool is_letter(char c) { return c >= 'a' && c <= 'z'; } int index(char c) { return c - 'a'; } void word_wheel(const char* letters, char central, int min_length, FILE* dict) { int max_count[LETTERS] = { 0 }; for (const char* p = letters; *p; ++p) { char c = *p; if (is_letter(c)) ++max_count[index(c)]; } char word[MAX_WORD + 1] = { 0 }; while (fgets(word, MAX_WORD, dict)) { int count[LETTERS] = { 0 }; for (const char* p = word; *p; ++p) { char c = *p; if (c == '\n') { if (p >= word + min_length && count[index(central)] > 0) printf("%s", word); } else if (is_letter(c)) { int i = index(c); if (++count[i] > max_count[i]) { break; } } else { break; } } } } int main(int argc, char** argv) { const char* dict = argc == 2 ? argv[1] : "unixdict.txt"; FILE* in = fopen(dict, "r"); if (in == NULL) { perror(dict); return 1; } word_wheel("ndeokgelw", 'k', 3, in); fclose(in); return 0; } ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## C++ Library: Boost The puzzle parameters can be set with command line options. The default values are as per the task description. ```#include <array> #include <iostream> #include <fstream> #include <map> #include <string> #include <vector> #include <boost/program_options.hpp> // A multiset specialized for strings consisting of lowercase // letters ('a' to 'z'). class letterset { public: letterset() { count_.fill(0); } explicit letterset(const std::string& str) { count_.fill(0); for (char c : str) } bool contains(const letterset& set) const { for (size_t i = 0; i < count_.size(); ++i) { if (set.count_[i] > count_[i]) return false; } return true; } unsigned int count(char c) const { return count_[index(c)]; } bool is_valid() const { return count_[0] == 0; } ++count_[index(c)]; } private: static bool is_letter(char c) { return c >= 'a' && c <= 'z'; } static int index(char c) { return is_letter(c) ? c - 'a' + 1 : 0; } // elements 1..26 contain the number of times each lowercase // letter occurs in the word // element 0 is the number of other characters in the word std::array<unsigned int, 27> count_; }; template <typename iterator, typename separator> std::string join(iterator begin, iterator end, separator sep) { std::string result; if (begin != end) { result += *begin++; for (; begin != end; ++begin) { result += sep; result += *begin; } } return result; } using dictionary = std::vector<std::pair<std::string, letterset>>; dictionary load_dictionary(const std::string& filename, int min_length, int max_length) { std::ifstream in(filename); if (!in) throw std::runtime_error("Cannot open file " + filename); std::string word; dictionary result; while (getline(in, word)) { if (word.size() < min_length) continue; if (word.size() > max_length) continue; letterset set(word); if (set.is_valid()) result.emplace_back(word, set); } return result; } void word_wheel(const dictionary& dict, const std::string& letters, char central_letter) { letterset set(letters); if (central_letter == 0 && !letters.empty()) central_letter = letters.at(letters.size()/2); std::map<size_t, std::vector<std::string>> words; for (const auto& pair : dict) { const auto& word = pair.first; const auto& subset = pair.second; if (subset.count(central_letter) > 0 && set.contains(subset)) words[word.size()].push_back(word); } size_t total = 0; for (const auto& p : words) { const auto& v = p.second; auto n = v.size(); total += n; std::cout << "Found " << n << " " << (n == 1 ? "word" : "words") << " of length " << p.first << ": " << join(v.begin(), v.end(), ", ") << '\n'; } std::cout << "Number of words found: " << total << '\n'; } void find_max_word_count(const dictionary& dict, int word_length) { size_t max_count = 0; std::vector<std::pair<std::string, char>> max_words; for (const auto& pair : dict) { const auto& word = pair.first; if (word.size() != word_length) continue; const auto& set = pair.second; dictionary subsets; for (const auto& p : dict) { if (set.contains(p.second)) subsets.push_back(p); } letterset done; for (size_t index = 0; index < word_length; ++index) { char central_letter = word[index]; if (done.count(central_letter) > 0) continue; size_t count = 0; for (const auto& p : subsets) { const auto& subset = p.second; if (subset.count(central_letter) > 0) ++count; } if (count > max_count) { max_words.clear(); max_count = count; } if (count == max_count) max_words.emplace_back(word, central_letter); } } std::cout << "Maximum word count: " << max_count << '\n'; std::cout << "Words of " << word_length << " letters producing this count:\n"; for (const auto& pair : max_words) std::cout << pair.first << " with central letter " << pair.second << '\n'; } constexpr const char* option_filename = "filename"; constexpr const char* option_wheel = "wheel"; constexpr const char* option_central = "central"; constexpr const char* option_min_length = "min-length"; constexpr const char* option_part2 = "part2"; int main(int argc, char** argv) { const int word_length = 9; int min_length = 3; std::string letters = "ndeokgelw"; std::string filename = "unixdict.txt"; char central_letter = 0; bool do_part2 = false; namespace po = boost::program_options; po::options_description desc("Allowed options"); (option_filename, po::value<std::string>(), "name of dictionary file") (option_wheel, po::value<std::string>(), "word wheel letters") (option_central, po::value<char>(), "central letter (defaults to middle letter of word)") (option_min_length, po::value<int>(), "minimum word length") (option_part2, "include part 2"); try { po::variables_map vm; po::store(po::parse_command_line(argc, argv, desc), vm); po::notify(vm); if (vm.count(option_filename)) filename = vm[option_filename].as<std::string>(); if (vm.count(option_wheel)) letters = vm[option_wheel].as<std::string>(); if (vm.count(option_central)) central_letter = vm[option_central].as<char>(); if (vm.count(option_min_length)) min_length = vm[option_min_length].as<int>(); if (vm.count(option_part2)) do_part2 = true; auto dict = load_dictionary(filename, min_length, word_length); // part 1 word_wheel(dict, letters, central_letter); // part 2 if (do_part2) { std::cout << '\n'; find_max_word_count(dict, word_length); } } catch (const std::exception& ex) { std::cerr << ex.what() << '\n'; return EXIT_FAILURE; } return EXIT_SUCCESS; } ``` Output: Output including optional part 2: ```Found 5 words of length 3: eke, elk, keg, ken, wok Found 10 words of length 4: keel, keen, keno, knee, knew, know, kong, leek, week, woke Found 1 word of length 5: kneel Found 1 word of length 9: knowledge Number of words found: 17 Maximum word count: 215 Words of 9 letters producing this count: claremont with central letter a spearmint with central letter a ``` ### Without external libraries ```#include <algorithm> #include <cstdint> #include <fstream> #include <iostream> #include <string> #include <vector> int main() { const std::string word_wheel_letters = "ndeokgelw"; const std::string middle_letter = word_wheel_letters.substr(4, 1); std::vector<std::string> words; std::fstream file_stream; file_stream.open("../unixdict.txt"); std::string word; while ( file_stream >> word ) { words.emplace_back(word); } std::vector<std::string> correct_words; for ( const std::string& word : words ) { if ( 3 <= word.length() && word.length() <= 9 && word.find(middle_letter) != std::string::npos && word.find_first_not_of(word_wheel_letters) == std::string::npos ) { correct_words.emplace_back(word); } } for ( const std::string& correct_word : correct_words ) { std::cout << correct_word << std::endl; } int32_t max_words_found = 0; std::vector<std::string> best_words9; std::vector<char> best_central_letters; std::vector<std::string> words9; for ( const std::string& word : words ) { if ( word.length() == 9 ) { words9.emplace_back(word); } } for ( const std::string& word9 : words9 ) { std::vector<char> distinct_letters(word9.begin(), word9.end()); std::sort(distinct_letters.begin(), distinct_letters.end()); distinct_letters.erase(std::unique(distinct_letters.begin(), distinct_letters.end()), distinct_letters.end()); for ( const char& letter : distinct_letters ) { int32_t words_found = 0; for ( const std::string& word : words ) { if ( word.length() >= 3 && word.find(letter) != std::string::npos ) { std::vector<char> letters = distinct_letters; bool valid_word = true; for ( const char& ch : word ) { std::vector<char>::iterator iter = std::find(letters.begin(), letters.end(), ch); int32_t index = ( iter == letters.end() ) ? -1 : std::distance(letters.begin(), iter); if ( index == -1 ) { valid_word = false; break; } letters.erase(letters.begin() + index); } if ( valid_word ) { words_found++; } } } if ( words_found > max_words_found ) { max_words_found = words_found; best_words9.clear(); best_words9.emplace_back(word9); best_central_letters.clear(); best_central_letters.emplace_back(letter); } else if ( words_found == max_words_found ) { best_words9.emplace_back(word9); best_central_letters.emplace_back(letter); } } } std::cout << "\n" << "Most words found = " << max_words_found << std::endl; std::cout << "The nine letter words producing this total are:" << std::endl; for ( uint64_t i = 0; i < best_words9.size(); ++i ) { std::cout << best_words9[i] << " with central letter '" << best_central_letters[i] << "'" << std::endl; } } ``` ```eke elk keel keen keg kellogg ken kennel keno knee kneel knell knew knoll know knowledge known kong kowloon leek look nook onlook week weekend wok woke Most words found = 215 The nine letter words producing this total are: claremont with central letter 'a' spearmint with central letter 'a' ``` ## Delphi Translation of: Wren ```program Word_wheel; {\$APPTYPE CONSOLE} {\$R *.res} uses System.SysUtils, System.Classes; function IsInvalid(s: string): Boolean; var c: char; leters: set of char; firstE: Boolean; begin Result := (s.Length < 3) or (s.IndexOf('k') = -1) or (s.Length > 9); if not Result then begin leters := ['d', 'e', 'g', 'k', 'l', 'n', 'o', 'w']; firstE := true; for c in s do begin if c in leters then if (c = 'e') and (firstE) then firstE := false else Exclude(leters, AnsiChar(c)) else exit(true); end; end; end; var dict: TStringList; i: Integer; begin dict := TStringList.Create; for i := dict.count - 1 downto 0 do if IsInvalid(dict[i]) then dict.Delete(i); Writeln('The following ', dict.Count, ' words are the solutions to the puzzle:'); Writeln(dict.Text); dict.Free; end. ``` ## F# ```// Word Wheel: Nigel Galloway. May 25th., 2021 let fG k n g=g|>Seq.exists(fun(n,_)->n=k) && g|>Seq.forall(fun(k,g)->Map.containsKey k n && g<=n.[k]) let wW n g=let fG=fG(Seq.item 4 g)(g|>Seq.countBy id|>Map.ofSeq) in seq{use n=System.IO.File.OpenText(n) in while not n.EndOfStream do yield n.ReadLine()}|>Seq.filter(fun n->2<(Seq.length n)&&(Seq.countBy id>>fG)n) wW "unixdict.txt" "ndeokgelw"|>Seq.iter(printfn "%s") ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## Factor Works with: Factor version 0.99 2020-07-03 ```USING: assocs io.encodings.ascii io.files kernel math math.statistics prettyprint sequences sorting ; ! Only consider words longer than two letters and words that ! contain elt. : pare ( elt seq -- new-seq ) [ [ member? ] keep length 2 > and ] with filter ; : words ( input-str path -- seq ) [ [ midpoint@ ] keep nth ] [ ascii file-lines pare ] bi* ; : ?<= ( m n/f -- ? ) dup f = [ nip ] [ <= ] if ; ! Can we make sequence 1 with the elements in sequence 2? : can-make? ( seq1 seq2 -- ? ) [ histogram ] bi@ [ swapd at ?<= ] curry assoc-all? ; : solve ( input-str path -- seq ) [ words ] keepd [ can-make? ] curry filter ; "ndeokgelw" "unixdict.txt" solve [ length ] sort-with . ``` Output: ```{ "eke" "elk" "keg" "ken" "wok" "keel" "keen" "keno" "knee" "knew" "know" "kong" "leek" "week" "woke" "kneel" "knowledge" } ``` ## FreeBASIC ```#include "file.bi" Function String_Split(s_in As String,chars As String,result() As String) As Long Dim As Long ctr,ctr2,k,n,LC=Len(chars) Dim As boolean tally(Len(s_in)) #macro check_instring() n=0 While n<Lc If chars[n]=s_in[k] Then tally(k)=true If (ctr2-1) Then ctr+=1 ctr2=0 Exit While End If n+=1 Wend #endmacro #macro split() If tally(k) Then If (ctr2-1) Then ctr+=1:result(ctr)=Mid(s_in,k+2-ctr2,ctr2-1) ctr2=0 End If #endmacro '================== LOOP TWICE ======================= For k =0 To Len(s_in)-1 ctr2+=1:check_instring() Next k if ctr=0 then if len(s_in) andalso instr(chars,chr(s_in[0])) then ctr=1':beep end if If ctr Then Redim result(1 To ctr): ctr=0:ctr2=0 Else Return 0 For k =0 To Len(s_in)-1 ctr2+=1:split() Next k '===================== Last one ======================== If ctr2>0 Then Redim Preserve result(1 To ctr+1) result(ctr+1)=Mid(s_in,k+1-ctr2,ctr2) End If Return Ubound(result) End Function Function loadfile(file As String) As String Dim As Long f=Freefile Open file For Binary Access Read As #f Dim As String text If Lof(f) > 0 Then text = String(Lof(f), 0) Get #f, , text End If Close #f Return text End Function Function tally(SomeString As String,PartString As String) As Long Dim As Long LenP=Len(PartString),count Dim As Long position=Instr(SomeString,PartString) If position=0 Then Return 0 While position>0 count+=1 position=Instr(position+LenP,SomeString,PartString) Wend Return count End Function Sub show(g As String,file As String,byref matches as long,minsize as long,mustdo as string) Redim As String s() g=lcase(g) string_split(L,Chr(10),s()) For m As Long=minsize To len(g) For n As Long=Lbound(s) To Ubound(s) If Len(s(n))=m Then For k As Long=0 To m-1 If Instr(g,Chr(s(n)[k]))=0 Then Goto lbl Next k If Instr(s(n),mustdo) Then For j As Long=0 To Len(s(n))-1 If tally(s(n),Chr(s(n)[j]))>tally(g,Chr(s(n)[j])) Then Goto lbl Next j Print s(n) matches+=1 End If End If lbl: Next n Next m End Sub dim as long matches dim as double t=timer show("ndeokgelw","unixdict.txt",matches,3,"k") print print "Overall time taken ";timer-t;" seconds" print matches;" matches" Sleep``` Output: ```eke elk keg ken wok keel keen keno knee knew know kong leek week woke kneel knowledge Overall time taken 0.02187220007181168 seconds 17 matches ``` ## FutureBasic ```#plist NSAppTransportSecurity @{NSAllowsArbitraryLoads:YES} include "NSLog.incl" local fn CountCharacterInString( string as CFStringRef, character as CFStringRef ) as NSUInteger end fn = len(string) - len( fn StringByReplacingOccurrencesOfString( string, character, @"" ) ) local fn IsLegal( wordStr as CFStringRef ) as BOOL NSUInteger i, count = len( wordStr ) CFStringRef letters = @"ndeokgelw" if count < 3 || fn StringContainsString( wordStr, @"k" ) == NO then exit fn = NO for i = 0 to count - 1 if fn CountCharacterInString( letters, mid( wordStr, i, 1 ) ) < fn CountCharacterInString( wordStr, mid( wordStr, i, 1 ) ) exit fn = NO end if next end fn = YES local fn ArrayOfDictionaryWords as CFArrayRef CFURLRef url = fn URLWithString( @"http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" ) CFStringRef string = lcase( fn StringWithContentsOfURL( url, NSUTF8StringEncoding, NULL ) ) CFArrayRef wordArr = fn StringComponentsSeparatedByCharactersInSet( string, fn CharacterSetNewlineSet ) end fn = wordArr void local fn FindWheelWords CFArrayRef wordArr = fn ArrayOfDictionaryWords CFStringRef wordStr CFMutableStringRef mutStr = fn MutableStringNew for wordStr in wordArr if fn IsLegal( wordStr ) then MutableStringAppendFormat( mutStr, fn StringWithFormat( @"%@\n", wordStr ) ) next NSLog( @"%@", mutStr ) end fn fn FindWheelWords HandleEvents``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## Go Translation of: Wren ```package main import ( "bytes" "fmt" "io/ioutil" "log" "sort" "strings" ) func main() { if err != nil { } letters := "deegklnow" wordsAll := bytes.Split(b, []byte{'\n'}) // get rid of words under 3 letters or over 9 letters var words [][]byte for _, word := range wordsAll { word = bytes.TrimSpace(word) le := len(word) if le > 2 && le < 10 { words = append(words, word) } } var found []string for _, word := range words { le := len(word) if bytes.IndexByte(word, 'k') >= 0 { lets := letters ok := true for i := 0; i < le; i++ { c := word[i] ix := sort.Search(len(lets), func(i int) bool { return lets[i] >= c }) if ix < len(lets) && lets[ix] == c { lets = lets[0:ix] + lets[ix+1:] } else { ok = false break } } if ok { found = append(found, string(word)) } } } fmt.Println("The following", len(found), "words are the solutions to the puzzle:") fmt.Println(strings.Join(found, "\n")) // optional extra mostFound := 0 var mostWords9 []string var mostLetters []byte // extract 9 letter words var words9 [][]byte for _, word := range words { if len(word) == 9 { words9 = append(words9, word) } } // iterate through them for _, word9 := range words9 { letterBytes := make([]byte, len(word9)) copy(letterBytes, word9) sort.Slice(letterBytes, func(i, j int) bool { return letterBytes[i] < letterBytes[j] }) // get distinct bytes distinctBytes := []byte{letterBytes[0]} for _, b := range letterBytes[1:] { if b != distinctBytes[len(distinctBytes)-1] { distinctBytes = append(distinctBytes, b) } } distinctLetters := string(distinctBytes) for _, letter := range distinctLetters { found := 0 letterByte := byte(letter) for _, word := range words { le := len(word) if bytes.IndexByte(word, letterByte) >= 0 { lets := string(letterBytes) ok := true for i := 0; i < le; i++ { c := word[i] ix := sort.Search(len(lets), func(i int) bool { return lets[i] >= c }) if ix < len(lets) && lets[ix] == c { lets = lets[0:ix] + lets[ix+1:] } else { ok = false break } } if ok { found = found + 1 } } } if found > mostFound { mostFound = found mostWords9 = []string{string(word9)} mostLetters = []byte{letterByte} } else if found == mostFound { mostWords9 = append(mostWords9, string(word9)) mostLetters = append(mostLetters, letterByte) } } } fmt.Println("\nMost words found =", mostFound) fmt.Println("Nine letter words producing this total:") for i := 0; i < len(mostWords9); i++ { fmt.Println(mostWords9[i], "with central letter", string(mostLetters[i])) } } ``` Output: ```The following 17 words are the solutions to the puzzle: eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke Most words found = 215 Nine letter words producing this total: claremont with central letter a spearmint with central letter a ``` ```import Data.Char (toLower) import Data.List (sort) ------------------------ WORD WHEEL ---------------------- gridWords :: [String] -> [String] -> [String] gridWords grid = filter ( ((&&) . (2 <) . length) <*> (((&&) . elem mid) <*> wheelFit wheel) ) where cs = toLower <\$> concat grid wheel = sort cs mid = cs !! 4 wheelFit :: String -> String -> Bool wheelFit wheel = go wheel . sort where go _ [] = True go [] _ = False go (w : ws) ccs@(c : cs) | w == c = go ws cs | otherwise = go ws ccs --------------------------- TEST ------------------------- main :: IO () main = >>= ( mapM_ putStrLn . gridWords ["NDE", "OKG", "ELW"] . lines ) ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke``` ## J ```require'stats' wwhe=: {{ ref=. /:~each words=. cutLF tolower fread 'unixdict.txt' y=.,y assert. 9=#y ch0=. 4{y chn=. (<<<4){y r=. '' for_i.2}.i.9 do. target=. <"1 ~./:~"1 ch0,.(i comb 8){chn ;:inv r=. r,words #~ ref e. target end. }} ``` ``` wwhe'ndeokgelw' eke elk keg ken wok keel keen keno knee knew know kong leek week woke kneel knowledge ``` ## Java ```import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.net.URI; import java.util.ArrayList; import java.util.List; import java.util.function.Predicate; public final class WordWheelExtended { public static void main(String[] args) throws IOException { String wordWheel = "N D E" + "O K G" + "E L W"; String url = "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"; InputStream stream = URI.create(url).toURL().openStream(); String allLetters = wordWheel.toLowerCase().replace(" ", ""); String middleLetter = allLetters.substring(4, 5); Predicate<String> firstFilter = word -> word.contains(middleLetter) && 2 < word.length() && word.length() < 10; Predicate<String> secondFilter = word -> word.chars().allMatch( ch -> allLetters.indexOf(ch) >= 0 ); Predicate<String> correctWords = firstFilter.and(secondFilter); words.stream().filter(correctWords).forEach(System.out::println); int maxWordsFound = 0; List<String> bestWords9 = new ArrayList<String>(); List<Character> bestCentralLetters = new ArrayList<Character>(); List<String> words9 = words.stream().filter( word -> word.length() == 9 ).toList(); for ( String word9 : words9 ) { List<Character> distinctLetters = word9.chars().mapToObj( i -> (char) i ).distinct().toList(); for ( char letter : distinctLetters ) { int wordsFound = 0; for ( String word : words ) { if ( word.length() >= 3 && word.indexOf(letter) >= 0 ) { List<Character> letters = new ArrayList<Character>(distinctLetters); boolean validWord = true; for ( char ch : word.toCharArray() ) { final int index = letters.indexOf(ch); if ( index == -1 ) { validWord = false; break; } letters.remove(index); } if ( validWord ) { wordsFound += 1; } } } if ( wordsFound > maxWordsFound ) { maxWordsFound = wordsFound; bestWords9.clear(); bestCentralLetters.clear(); } else if ( wordsFound == maxWordsFound ) { } } } System.out.println(System.lineSeparator() + "Most words found = " + maxWordsFound); System.out.println("The nine letter words producing this total are:"); for ( int i = 0; i < bestWords9.size(); i++ ) { System.out.println(bestWords9.get(i) + " with central letter '" + bestCentralLetters.get(i) + "'"); } } } ``` ```eke elk keel keen keg kellogg ken kennel keno knee kneel knell knew knoll know knowledge known kong kowloon leek look nook onlook week weekend wok woke Most words found = 215 The nine letter words producing this total are: claremont with central letter 'a' spearmint with central letter 'a' ``` ## JavaScript A version using local access to the dictionary, through the macOS JavaScript for Automation API. Works with: JXA ```(() => { "use strict"; // ------------------- WORD WHEEL -------------------- // gridWords :: [String] -> [String] -> [String] const gridWords = grid => lexemes => { const wheel = sort(toLower(grid.join(""))), wSet = new Set(wheel), mid = wheel[4]; return lexemes.filter(w => { const cs = [...w]; return 2 < cs.length && cs.every( c => wSet.has(c) ) && cs.some(x => mid === x) && ( wheelFit(wheel, cs) ); }); }; // wheelFit :: [Char] -> [Char] -> Bool const wheelFit = (wheel, word) => { const go = (ws, cs) => 0 === cs.length ? ( true ) : 0 === ws.length ? ( false ) : ws[0] === cs[0] ? ( go(ws.slice(1), cs.slice(1)) ) : go(ws.slice(1), cs); return go(wheel, sort(word)); }; // ---------------------- TEST ----------------------- // main :: IO () const main = () => gridWords(["NDE", "OKG", "ELW"])( ) .join("\n"); // ---------------- GENERIC FUNCTIONS ---------------- // lines :: String -> [String] const lines = s => // A list of strings derived from a single string // which is delimited by \n or by \r\n or \r. Boolean(s.length) ? ( s.split(/\r\n|\n|\r/u) ) : []; // readFile :: FilePath -> IO String const readFile = fp => { // The contents of a text file at the // given file path. const e = \$(), ns = \$.NSString .stringWithContentsOfFileEncodingError( \$(fp).stringByStandardizingPath, \$.NSUTF8StringEncoding, e ); return ObjC.unwrap( ns.isNil() ? ( e.localizedDescription ) : ns ); }; // sort :: Ord a => [a] -> [a] const sort = xs => Array.from(xs).sort(); // toLower :: String -> String const toLower = s => // Lower-case version of string. s.toLocaleLowerCase(); // MAIN --- return main(); })(); ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke``` ## jq Works with: jq Also works with gojq, the Go implementation of jq, and with fq provided `keys_unsorted` is replaced `by keys` ```# remove words with fewer than 3 or more than 9 letters def words: inputs | select(length | . > 2 and . < 10); # The central letter in `puzzle` should be the central letter of the word wheel def solve(puzzle): def chars: explode[] | [.] | implode; def profile(s): reduce s as \$c (null; .[\$c] += 1); profile(puzzle[]) as \$profile | def ok(\$prof): all(\$prof|keys_unsorted[]; . as \$k | \$prof[\$k] <= \$profile[\$k]); (puzzle | .[ (length - 1) / 2]) as \$central | words | select(index(\$central) and ok( profile(chars) )) ; "The solutions to the puzzle are as follows:", solve( ["d", "e", "e", "g", "k", "l", "n", "o", "w"] )``` Invocation: < unixdict.txt jq -Rnr -f word-wheel.jq Output: ```The solutions to the puzzle are as follows: eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## Julia ```using Combinatorics const wordlist = Dict(w => 1 for w in split(read(tfile, String), r"\s+")) function wordwheel(wheel, central) returnlist = String[] for combo in combinations([string(i) for i in wheel]) if central in combo && length(combo) > 2 for perm in permutations(combo) word = join(perm) if haskey(wordlist, word) && !(word in returnlist) push!(returnlist, word) end end end end return returnlist end println(wordwheel("ndeokgelw", "k")) ``` Output: ```["ken", "keg", "eke", "elk", "wok", "keno", "knee", "keen", "knew", "kong", "know", "woke", "keel", "leek", "week", "kneel", "knowledge"] ``` ### Faster but less general version ```const tfile = download("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt") const wordarraylist = [[string(c) for c in w] for w in split(read(tfile, String), r"\s+")] function wordwheel2(wheel, central) warr, maxlen = [string(c) for c in wheel], length(wheel) returnarraylist = filter(a -> 2 < length(a) <= maxlen && central in a && all(c -> sum(x -> x == c, a) <= sum(x -> x == c, warr), a), wordarraylist) return join.(returnarraylist) end println(wordwheel2("ndeokgelw", "k")) ``` Output: ```["eke", "elk", "keel", "keen", "keg", "ken", "keno", "knee", "kneel", "knew", "know", "knowledge", "kong", "leek", "week", "wok", "woke"] ``` ## Lua ```LetterCounter = { new = function(self, word) local t = { word=word, letters={} } for ch in word:gmatch(".") do t.letters[ch] = (t.letters[ch] or 0) + 1 end return setmetatable(t, self) end, contains = function(self, other) for k,v in pairs(other.letters) do if (self.letters[k] or 0) < v then return false end end return true end } LetterCounter.__index = LetterCounter grid = "ndeokgelw" midl = grid:sub(5,5) ltrs = LetterCounter:new(grid) file = io.open("unixdict.txt", "r") for word in file:lines() do if #word >= 3 and word:find(midl) and ltrs:contains(LetterCounter:new(word)) then print(word) end end ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke``` ### No metatables, simple ```-- Algorithm is from Ruby implementation. local wheel = arg[1] or 'ndeoKgelw' -- wheel is 1st argument wheel = wheel:lower() local middle = wheel:sub(5, 5) assert(#middle == 1) for line in io.lines() do -- get dictionary from standard input local word = line:lower() if word:find(middle) and #word >= 3 then for wheel_char in wheel:gmatch('.') do word = word:gsub(wheel_char, '', 1) end -- for if #word == 0 then io.write(line:lower() .. ' ') end end -- if end -- for print '' ``` Shell command `\$ < unixdict.txt lua ./word-wheel.lua` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## Mathematica / Wolfram Language ```ClearAll[possible] possible[letters_List][word_String] := Module[{c1, c2, m}, c1 = Counts[Characters@word]; c2 = Counts[letters]; m = Merge[{c1, c2}, Identity]; Length[Select[Select[m, Length /* GreaterThan[1]], Apply[Greater]]] == 0 ] chars = Characters@"ndeokgelw"; words = Import["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt", "String"]; words = StringSplit[ToLowerCase[words], "\n"]; words //= Select[StringLength /* GreaterEqualThan[3]]; words //= Select[StringContainsQ["k"]]; words //= Select[StringMatchQ[Repeated[Alternatives @@ chars]]]; words //= Select[possible[chars]]; words ``` Output: `{eke,elk,keel,keen,keg,ken,keno,knee,kneel,knew,know,knowledge,kong,leek,week,wok,woke}` ## Nim ```import strutils, sugar, tables const Grid = """N D E O K G E L W""" let letters = Grid.toLowerAscii.splitWhitespace.join() let words = collect(newSeq): for word in "unixdict.txt".lines: if word.len in 3..9: word let midLetter = letters[4] let gridCount = letters.toCountTable for word in words: block checkWord: if midLetter in word: for ch, count in word.toCountTable.pairs: if count > gridCount[ch]: break checkWord echo word ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke``` ## Pascal Works with: Free Pascal ```program WordWheel; {\$mode objfpc}{\$H+} uses SysUtils; const WheelSize = 9; MinLength = 3; WordListFN = 'unixdict.txt'; procedure search(Wheel : string); var Allowed, Required, Available, w : string; Len, i, p : integer; WordFile : TextFile; Match : boolean; begin AssignFile(WordFile, WordListFN); try Reset(WordFile); except writeln('Could not open dictionary file: ' + WordListFN); exit; end; Allowed := LowerCase(Wheel); Required := copy(Allowed, 5, 1); { central letter is required } while not eof(WordFile) do begin Len := length(w); if (Len < MinLength) or (Len > WheelSize) then continue; if pos(Required, w) = 0 then continue; Available := Allowed; Match := True; for i := 1 to Len do begin p := pos(w[i], Available); if p > 0 then { prevent re-use of letter } delete(Available, p, 1) else begin Match := False; break; end; end; if Match then writeln(w); end; CloseFile(WordFile); end; { exercise the procedure } begin search('NDE' + 'OKG' + 'ELW'); end. ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## Perl UPDATED: this version builds a single regex that will select all valid words straight from the file string. ```#!/usr/bin/perl use strict; # https://rosettacode.org/wiki/Word_wheel use warnings; \$_ = <<END; N D E O K G E L W END my \$file = do { local(@ARGV, \$/) = 'unixdict.txt'; <> }; my \$length = my @letters = lc =~ /\w/g; my \$center = \$letters[@letters / 2]; my \$toomany = (join '', sort @letters) =~ s/(.)\1*/ my \$count = length "\$1\$&"; "(?!(?:.*\$1){\$count})" /ger; my \$valid = qr/^(?=.*\$center)\$toomany([@letters]{3,\$length}\$)\$/m; my @words = \$file =~ /\$valid/g; print @words . " words for\n\$_\n@words\n" =~ s/.{60}\K /\n/gr; ``` Output: ```17 words for N D E O K G E L W eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## Phix ```with javascript_semantics requires("1.0.1") -- (fixed another glitch in unique()) constant wheel = "ndeokgelw", musthave = wheel[5] sequence words = unix_dict(), word9 = {} -- (for the optional extra part) integer found = 0 for i=1 to length(words) do string word = lower(words[i]) integer lw = length(word) if lw>=3 then if lw<=9 then word9 = append(word9,word) end if if find(musthave,word) then string remaining = wheel while lw do integer k = find(word[lw],remaining) if k=0 then exit end if remaining[k] = '\0' -- (prevent re-use) lw -= 1 end while if lw=0 then found += 1 words[found] = word end if end if end if end for string jbw = join_by(words[1..found],1,9," ","\n ") printf(1, "The following %d words were found:\n %s\n",{found,jbw}) -- optional extra if platform()!=JS then -- (works but no progress/blank screen for 2min 20s) -- (the "working" won't show, even w/o the JS check) integer mostFound = 0 sequence mostWheels = {}, mustHaves = {} for i=1 to length(word9) do string try_wheel = word9[i] if length(try_wheel)=9 then string musthaves = unique(try_wheel) for j=1 to length(musthaves) do found = 0 for k=1 to length(word9) do string word = word9[k] if find(musthaves[j],word) then string rest = try_wheel bool ok = true for c=1 to length(word) do integer ix = find(word[c],rest) if ix=0 then ok = false exit end if rest[ix] = '\0' end for found += ok end if end for if platform()!=JS then -- (wouldn't show up anyway) printf(1,"working (%s)\r",{try_wheel}) end if if found>mostFound then mostFound = found mostWheels = {try_wheel} mustHaves = {musthaves[j]} elsif found==mostFound then mostWheels = append(mostWheels,try_wheel) mustHaves = append(mustHaves,musthaves[j]) end if end for end if end for printf(1,"Most words found = %d\n",mostFound) printf(1,"Nine letter words producing this total:\n") for i=1 to length(mostWheels) do printf(1,"%s with central letter '%c'\n",{mostWheels[i],mustHaves[i]}) end for end if ``` Output: (Only the first three lines are shown under pwa/p2js) ```The following 17 words were found: eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke Most words found = 215 Nine letter words producing this total: claremont with central letter 'a' spearmint with central letter 'a' ``` ## Picat ```main => MinLen = 3, MaxLen = 9, Chars = "ndeokgelw", MustContain = 'k', WordList = "unixdict.txt", Res = word_wheel(Chars,Words,MustContain,MinLen, MaxLen), println(Res), println(len=Res.len), nl. word_wheel(Chars,Words,MustContain,MinLen,MaxLen) = Res.reverse => Chars := to_lowercase(Chars), D = make_hash(Chars), Res = [], foreach(W in Words, W.len >= MinLen, W.len <= MaxLen, membchk(MustContain,W)) WD = make_hash(W), Check = true, foreach(C in keys(WD), break(Check == false)) if not D.has_key(C) ; WD.get(C,0) > D.get(C,0) then Check := false end end, if Check == true then Res := [W|Res] end end. % Returns a map of the elements and their occurrences % in the list L. make_hash(L) = D => D = new_map(), foreach(E in L) D.put(E,D.get(E,0)+1) end.``` Output: ```[eke,elk,keel,keen,keg,ken,keno,knee,kneel,knew,know,knowledge,kong,leek,week,wok,woke] len = 17``` Optimal word(s): ```main => WordList = "unixdict.txt", MinLen = 3, MaxLen = 9, Words = [Word : Word in read_file_lines(WordList), Word.len >= MinLen, Word.len <= MaxLen], TargetWords = [Word : Word in Words, Word.len == MaxLen], MaxResWord = [], MaxResLen = 0, foreach(Word in TargetWords) foreach(MustContain in Word.remove_dups) Res = word_wheel(Word,Words,MustContain,MinLen, MaxLen), Len = Res.len, if Len >= MaxResLen then if Len == MaxResLen then MaxResWord := MaxResWord ++ [[word=Word,char=MustContain]] else MaxResWord := [[word=Word,char=MustContain]], MaxResLen := Len end end end end, println(maxLResen=MaxResLen), println(maxWord=MaxResWord).``` Output: ```maxReLen = 215 maxWord = [[word = claremont,char = a],[word = spearmint,char = a]] ``` ## PureBasic ```Procedure.b check_word(word\$) Shared letters\$ If Len(word\$)<3 Or FindString(word\$,"k")<1 ProcedureReturn #False EndIf For i=1 To Len(word\$) If CountString(letters\$,Mid(word\$,i,1))<CountString(word\$,Mid(word\$,i,1)) ProcedureReturn #False EndIf Next ProcedureReturn #True EndProcedure CloseFile(0) EndIf If OpenConsole() letters\$="ndeokgelw" wordcount=1 Repeat buf\$=StringField(txt\$,wordcount,~"\n") wordcount+1 If check_word(buf\$)=#False Continue EndIf PrintN(buf\$) : r+1 Until buf\$="" PrintN("- Finished: "+Str(r)+" words found -") Input() EndIf End``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke - Finished: 17 words found - ``` ## Python ```import urllib.request from collections import Counter GRID = """ N D E O K G E L W """ def getwords(url='http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'): "Return lowercased words of 3 to 9 characters" return (w for w in words if 2 < len(w) < 10) def solve(grid, dictionary): gridcount = Counter(grid) mid = grid[4] return [word for word in dictionary if mid in word and not (Counter(word) - gridcount)] if __name__ == '__main__': chars = ''.join(GRID.strip().lower().split()) found = solve(chars, dictionary=getwords()) print('\n'.join(found)) ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke``` Or, using a local copy of the dictionary, and a recursive test of wheel fit: ```'''Word wheel''' from os.path import expanduser # gridWords :: [String] -> [String] -> [String] def gridWords(grid): '''The subset of words in ws which contain the central letter of the grid, and can be completed by single uses of some or all of the remaining letters in the grid. ''' def go(ws): cs = ''.join(grid).lower() wheel = sorted(cs) wset = set(wheel) mid = cs[4] return [ w for w in ws if 2 < len(w) and (mid in w) and ( all(c in wset for c in w) ) and wheelFit(wheel, w) ] return go # wheelFit :: String -> String -> Bool def wheelFit(wheel, word): '''True if a given word can be constructed from (single uses of) some subset of the letters in the wheel. ''' def go(ws, cs): return True if not cs else ( False if not ws else ( go(ws[1:], cs[1:]) if ws[0] == cs[0] else ( go(ws[1:], cs) ) ) ) return go(wheel, sorted(word)) # -------------------------- TEST -------------------------- # main :: IO () def main(): '''Word wheel matches for a given grid in a copy of http://wiki.puzzlers.org/pub/wordlists/unixdict.txt ''' print('\n'.join( gridWords(['NDE', 'OKG', 'ELW'])( ) )) # ------------------------ GENERIC ------------------------- # readFile :: FilePath -> IO String '''The contents of any file at the path derived by expanding any ~ in fp. ''' with open(expanduser(fp), 'r', encoding='utf-8') as f: # MAIN --- if __name__ == '__main__': main() ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke``` ## Quackery ``` [ over find swap found ] is has ( \$ c --> b ) [ over find split 1 split swap drop join ] is remove ( \$ c --> \$ ) \$ "rosetta/unixdict.txt" sharefile drop nest\$ [] swap witheach [ dup size 3 < iff drop done dup size 9 > iff drop done dup char k has not iff drop done dup \$ "ndeokgelw" witheach remove \$ "" != iff drop done nested join ] 30 wrap\$``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## q ```ce:count each lc:ce group@ / letter count dict:"\n"vs .Q.hg "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" // dictionary of 3-9 letter words d39:{x where(ce x)within 3 9}{x where all each x in .Q.a}dict solve:{[grid;dict] i:where(grid 4)in'dict; dict i where all each 0<=(lc grid)-/:lc each dict i }[;d39] ``` ```q)`\$solve "ndeokglew" `eke`elk`keel`keen`keg`ken`keno`knee`kneel`knew`know`knowledge`kong`leek`week`wok`woke ``` A naive solution to the second question is simple ```bust:{[dict] grids:distinct raze(til 9)rotate\:/:dict where(ce dict)=9; wc:(count solve@)each grids; grids where wc=max wc } ``` but inefficient. Better: ```best:{[dict] dlc:lc each dict; / letter counts of dictionary words ig:where(ce dict)=9; / find grids (9-letter words) igw:where each(all'')0<=(dlc ig)-/:\:dlc; / find words composable from each grid (length ig) grids:raze(til 9)rotate\:/:dict ig; / 9 permutations of each grid iaz:(.Q.a)!where each .Q.a in'\:dict; / find words containing a, b, c etc ml:4 rotate'dict ig; / mid letters for each grid wc:ce raze igw inter/:'iaz ml; / word counts for grids distinct grids where wc=max wc } / grids with most words ``` ```q)show w:best d39 "ntclaremo" "tspearmin" q)ce solve each w 215 215 ``` Full discussion at code.kx.com ## Raku Works with: Rakudo version 2020.05 Everything is adjustable through command line parameters. Defaults to task specified wheel, unixdict.txt, minimum 3 letters. Using Terminal::Boxer from the Raku ecosystem. ```use Terminal::Boxer; my %*SUB-MAIN-OPTS = :named-anywhere; unit sub MAIN (\$wheel = 'ndeokgelw', :\$dict = './unixdict.txt', :\$min = 3); my \$must-have = \$wheel.comb[4].lc; my \$has = \$wheel.comb».lc.Bag; my %words; \$dict.IO.slurp.words».lc.map: { next if not .contains(\$must-have) or .chars < \$min; %words{.chars}.push: \$_ if .comb.Bag ⊆ \$has; }; say "Using \$dict, minimum \$min letters."; print rs-box :3col, :3cw, :indent("\t"), \$wheel.comb».uc; say "{sum %words.values».elems} words found"; printf "%d letters: %s\n", .key, .value.sort.join(', ') for %words.sort; ``` Output: Using defaults ```raku word-wheel.raku ``` ```Using ./unixdict.txt, minimum 3 letters. ╭───┬───┬───╮ │ N │ D │ E │ ├───┼───┼───┤ │ O │ K │ G │ ├───┼───┼───┤ │ E │ L │ W │ ╰───┴───┴───╯ 17 words found 3 letters: eke, elk, keg, ken, wok 4 letters: keel, keen, keno, knee, knew, know, kong, leek, week, woke 5 letters: kneel 9 letters: knowledge``` Larger dictionary Using the much larger dictionary words.txt file from https://github.com/dwyl/english-words ```raku word-wheel.raku --dict=./words.txt ``` ```Using ./words.txt, minimum 3 letters. ╭───┬───┬───╮ │ N │ D │ E │ ├───┼───┼───┤ │ O │ K │ G │ ├───┼───┼───┤ │ E │ L │ W │ ╰───┴───┴───╯ 86 words found 3 letters: dkg, dkl, eek, egk, eke, ekg, elk, gok, ked, kee, keg, kel, ken, keo, kew, kln, koe, kol, kon, lek, lgk, nek, ngk, oke, owk, wok 4 letters: deek, deke, doek, doke, donk, eked, elke, elko, geek, genk, gonk, gowk, keel, keen, keld, kele, kend, keno, keon, klee, knee, knew, know, koel, koln, kone, kong, kwon, leek, leke, loke, lonk, okee, oken, week, welk, woke, wolk, wonk 5 letters: dekle, dekow, gleek, kedge, kendo, kleon, klong, kneed, kneel, knowe, konde, oklee, olnek, woken 6 letters: gowked, keldon, kelwen, knowle, koleen 8 letters: weeklong 9 letters: knowledge``` Top 5 maximum word wheels with at least one 9 letter word Using unixdict.txt: ```Wheel words eimnaprst: 215 celmanort: 215 ceimanrst: 210 elmnaoprt: 208 ahlneorst: 201``` Using words.txt: ```Wheel words meilanrst: 1329 deilanrst: 1313 ceilanrst: 1301 peilanrst: 1285 geilanrst: 1284``` ## REXX Quite a bit of boilerplate was included in this REXX example. No assumption was made as the "case" of the words (upper/lower/mixed case).   Duplicate words were detected and eliminated   (god and God),   as well as words that didn't contain all Roman (Latin) letters. The number of minimum letters can be specified,   as well as the dictionary fileID and the letters in the word wheel (grid). Additional information is also provided concerning how many words have been skipped due to the various filters. ```/*REXX pgm finds (dictionary) words which can be found in a specified word wheel (grid).*/ parse arg grid minL iFID . /*obtain optional arguments from the CL*/ if grid==''|grid=="," then grid= 'ndeokgelw' /*Not specified? Then use the default.*/ if minL==''|minL=="," then minL= 3 /* " " " " " " */ if iFID==''|iFID=="," then iFID= 'UNIXDICT.TXT' /* " " " " " " */ oMinL= minL; minL= abs(minL) /*if negative, then don't show a list. */ gridU= grid; upper gridU /*get an uppercase version of the grid.*/ Lg= length(grid); Hg= Lg % 2 + 1 /*get length of grid & the middle char.*/ ctr= substr(grid, Hg, 1); upper ctr /*get uppercase center letter in grid. */ wrds= 0 /*# words that are in the dictionary. */ wees= 0 /*" " " " too short. */ bigs= 0 /*" " " " too long. */ dups= 0 /*" " " " duplicates. */ ills= 0 /*" " " contain "not" letters.*/ good= 0 /*" " " contain center letter. */ nine= 0 /*" wheel─words that contain 9 letters.*/ say ' Reading the file: ' iFID /*align the text. */ @.= . /*uppercase non─duplicated dict. words.*/ \$= /*the list of dictionary words in grid.*/ do recs=0 while lines(iFID)\==0 /*process all words in the dictionary. */ u= space( linein(iFID), 0); upper u /*elide blanks; uppercase the word. */ L= length(u) /*obtain the length of the word. */ if @.u\==. then do; dups= dups+1; iterate; end /*is this a duplicate? */ if L<minL then do; wees= wees+1; iterate; end /*is the word too short? */ if L>Lg then do; bigs= bigs+1; iterate; end /*is the word too long? */ if \datatype(u,'M') then do; ills= ills+1; iterate; end /*has word non─letters? */ @.u= /*signify that U is a dictionary word*/ wrds= wrds + 1 /*bump the number of "good" dist. words*/ if pos(ctr, u)==0 then iterate /*word doesn't have center grid letter.*/ good= good + 1 /*bump # center─letter words in dict. */ if verify(u, gridU)\==0 then iterate /*word contains a letter not in grid. */ if pruned(u, gridU) then iterate /*have all the letters not been found? */ if L==9 then nine= nine + 1 /*bump # words that have nine letters. */ \$= \$ u /*add this word to the "found" list. */ end /*recs*/ say say ' number of records (words) in the dictionary: ' right( commas(recs), 9) say ' number of ill─formed words in the dictionary: ' right( commas(ills), 9) say ' number of duplicate words in the dictionary: ' right( commas(dups), 9) say ' number of too─small words in the dictionary: ' right( commas(wees), 9) say ' number of too─long words in the dictionary: ' right( commas(bigs), 9) say ' number of acceptable words in the dictionary: ' right( commas(wrds), 9) say ' number center─letter words in the dictionary: ' right( commas(good), 9) say ' the minimum length of words that can be used: ' right( commas(minL), 9) say ' the word wheel (grid) being used: ' grid say ' center of the word wheel (grid) being used: ' right('↑', Hg) say; #= words(\$); \$= strip(\$) say ' number of word wheel words in the dictionary: ' right( commas(# ), 9) say ' number of nine-letter wheel words found: ' right( commas(nine), 9) if #==0 | oMinL<0 then exit # say say ' The list of word wheel words found:'; say copies('─', length(\$)); say lower(\$) exit # /*stick a fork in it, we're all done. */ /*──────────────────────────────────────────────────────────────────────────────────────*/ lower: arg aa; @='abcdefghijklmnopqrstuvwxyz'; @u=@; upper @u; return translate(aa,@,@U) commas: parse arg _; do jc=length(_)-3 to 1 by -3; _=insert(',', _, jc); end; return _ /*──────────────────────────────────────────────────────────────────────────────────────*/ pruned: procedure; parse arg aa,gg /*obtain word to be tested, & the grid.*/ do n=1 for length(aa); p= pos( substr(aa,n,1), gg); if p==0 then return 1 gg= overlay(., gg, p) /*"rub out" the found character in grid*/ end /*n*/; return 0 /*signify that the AA passed the test*/ ``` output   when using the default inputs: ``` Reading the file: UNIXDICT.TXT number of records (lines) in the dictionary: 25,105 number of ill─formed words in the dictionary: 123 number of duplicate words in the dictionary: 0 number of too─small words in the dictionary: 159 number of too─long words in the dictionary: 4,158 number of acceptable words in the dictionary: 20,664 number center─letter words in the dictionary: 1,630 the minimum length of words that can be used: 3 the word wheel (grid) being used: ndeokgelw center of the word wheel (grid) being used: ↑ number of word wheel words in the dictionary: 17 number of nine-letter wheel words found: 1 The list of word wheel words found: ───────────────────────────────────────────────────────────────────────────────────── eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` Note:   my "personal" dictionary that I built   (over   915,000   947,359   words),   there are   178   words that are in the (above) word wheel. output   when using the inputs:     satRELinp   -3 (I am trying for a maximum word wheel count for the   UNIXDICT   dictionary; the negative minimum word length indicates to   not   list the words found.) Thanks to userid   Paddy3118,   a better grid was found. ``` Reading the file: UNIXDICT.TXT number of records (lines) in the dictionary: 25,105 number of ill─formed words in the dictionary: 123 number of duplicate words in the dictionary: 0 number of too─small words in the dictionary: 159 number of too─long words in the dictionary: 4,158 number of acceptable words in the dictionary: 20,664 number center─letter words in the dictionary: 11,623 the minimum length of words that can be used: 3 the word wheel (grid) being used: satRELinp center of the word wheel (grid) being used: ↑ number of word wheel words in the dictionary: 234 number of nine-letter wheel words found: 0 ``` output   when using the inputs:     setRALinp   -3 Thanks to userid   Simonjsaunders,   a better grid was found. ``` Reading the file: UNIXDICT.TXT number of records (words) in the dictionary: 25,104 number of ill─formed words in the dictionary: 123 number of duplicate words in the dictionary: 0 number of too─small words in the dictionary: 159 number of too─long words in the dictionary: 4,158 number of acceptable words in the dictionary: 20,664 number center─letter words in the dictionary: 10,369 the minimum length of words that can be used: 3 the word wheel (grid) being used: setRALinp center of the word wheel (grid) being used: ↑ number of word wheel words in the dictionary: 248 number of nine-letter wheel words found: 0 ``` ## Ruby ```wheel = "ndeokgelw" middle, wheel_size = wheel[4], wheel.size res = File.open("unixdict.txt").each_line.select do |word| w = word.chomp next unless w.size.between?(3, wheel_size) next unless w.match?(middle) wheel.each_char{|c| w.sub!(c, "") } #sub! substitutes only the first occurrence (gsub would substitute all) w.empty? end puts res ``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## Transd ```#lang transd MainModule: { maxwLen: 9, minwLen: 3, dict: Vector<String>(), subWords: Vector<String>(), procGrid: (λ grid String() cent String() subs Bool() (with cnt 0 (sort grid) (for w in dict where (and (neq (index-of w cent) -1) (match w "^[[:alpha:]]+\$")) do (if (is-subset grid (sort (cp w))) (+= cnt 1) (if subs (append subWords w)) ) ) (ret cnt) )), _start: (λ locals: res 0 maxRes 0 (with fs FileStream() (open-r fs "/mnt/proj/res/unixdict.txt") where (within (size w) minwLen maxwLen) do (append dict w)) ) (procGrid "ndeokgelw" "k" true) (lout "Number of words: " (size subWords) ";\nword list: " subWords) (for w in dict where (eq (size w) maxwLen) do (for centl in (split (unique (sort (cp w))) "") do (if (>= (= res (procGrid (cp w) centl false)) maxRes) (= maxRes res) (lout "New max. number: " maxRes ", word: " w ", central letter: " centl) ) ) ) ) } ``` Output: ```Main part of task: Number of words: 17; word list: ["eke", "elk", "keel", "keen", "keg", "ken", "keno", "knee", "kneel", "knew", "know", "knowledge", "kong", "leek", "week", "wok", "woke"] New max. number: 100, word: abdominal, central letter: a New max. number: 117, word: abernathy, central letter: a New max. number: 119, word: abhorrent, central letter: r New max. number: 121, word: absorbent, central letter: e New max. number: 123, word: adsorbate, central letter: a New max. number: 125, word: adventure, central letter: e New max. number: 155, word: advertise, central letter: e New max. number: 161, word: alongside, central letter: a New max. number: 170, word: alongside, central letter: l New max. number: 182, word: ancestral, central letter: a New max. number: 182, word: arclength, central letter: a New max. number: 185, word: beplaster, central letter: e New max. number: 215, word: claremont, central letter: a New max. number: 215, word: spearmint, central letter: a ``` ## VBScript ```Const wheel="ndeokgelw" Sub print(s): On Error Resume Next WScript.stdout.WriteLine (s) If err= &h80070006& Then WScript.Echo " Please run this script with CScript": WScript.quit End Sub Dim oDic Set oDic = WScript.CreateObject("scripting.dictionary") Dim cnt(127) Dim fso Set fso = WScript.CreateObject("Scripting.Filesystemobject") Set ff=fso.OpenTextFile("unixdict.txt") i=0 print "reading words of 3 or more letters" While Not ff.AtEndOfStream If Len(x)>=3 Then If Not odic.exists(x) Then oDic.Add x,0 End If Wend print "remaining words: "& oDic.Count & vbcrlf ff.Close Set ff=Nothing Set fso=Nothing Set re=New RegExp print "removing words with chars not in the wheel" re.pattern="[^"& wheel &"]" For Each w In oDic.Keys If re.test(w) Then oDic.remove(w) Next print "remaining words: "& oDic.Count & vbcrlf print "ensuring the mandatory letter "& Mid(wheel,5,1) & " is present" re.Pattern=Mid(wheel,5,1) For Each w In oDic.Keys If Not re.test(w) Then oDic.remove(w) Next print "remaining words: "& oDic.Count & vbcrlf print "checking number of chars" Dim nDic Set nDic = WScript.CreateObject("scripting.dictionary") For i=1 To Len(wheel) x=Mid(wheel,i,1) If nDic.Exists(x) Then a=nDic(x) nDic(x)=Array(a(0)+1,0) Else End If Next For Each w In oDic.Keys For Each c In nDic.Keys ndic(c)=Array(nDic(c)(0),0) Next For ii = 1 To len(w) c=Mid(w,ii,1) a=nDic(c) If (a(0)=a(1)) Then oDic.Remove(w):Exit For End If nDic(c)=Array(a(0),a(1)+1) Next Next print "Remaining words "& oDic.count For Each w In oDic.Keys print w Next``` Output: ```reading words of 3 or more letters remaining words: 24945 removing words with chars not in the wheel remaining words: 163 ensuring the mandatory letter k is present remaining words: 27 checking number of chars Remaining words 17 eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ``` ## Wren Library: Wren-sort Library: Wren-seq ```import "io" for File import "./sort" for Sort, Find import "./seq" for Lst var letters = ["d", "e", "e", "g", "k", "l", "n", "o","w"] // get rid of words under 3 letters or over 9 letters words = words.where { |w| w.count > 2 && w.count < 10 }.toList var found = [] for (word in words) { if (word.indexOf("k") >= 0) { var lets = letters.toList var ok = true for (c in word) { var ix = Find.first(lets, c) if (ix == - 1) { ok = false break } lets.removeAt(ix) } } } System.print("The following %(found.count) words are the solutions to the puzzle:") System.print(found.join("\n")) // optional extra var mostFound = 0 var mostWords9 = [] var mostLetters = [] // iterate through all 9 letter words in the dictionary for (word9 in words.where { |w| w.count == 9 }) { letters = word9.toList Sort.insertion(letters) // get distinct letters var distinctLetters = Lst.distinct(letters) // place each distinct letter in the middle and see what we can do with the rest for (letter in distinctLetters) { found = 0 for (word in words) { if (word.indexOf(letter) >= 0) { var lets = letters.toList var ok = true for (c in word) { var ix = Find.first(lets, c) if (ix == - 1) { ok = false break } lets.removeAt(ix) } if (ok) found = found + 1 } } if (found > mostFound) { mostFound = found mostWords9 = [word9] mostLetters = [letter] } else if (found == mostFound) { } } } System.print("\nMost words found = %(mostFound)") System.print("Nine letter words producing this total:") for (i in 0...mostWords9.count) { System.print("%(mostWords9[i]) with central letter '%(mostLetters[i])'") } ``` Output: ```The following 17 words are the solutions to the puzzle: eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke Most words found = 215 Nine letter words producing this total: claremont with central letter 'a' spearmint with central letter 'a' ``` ## XPL0 ```string 0; \use zero-terminated strings int I, Set, HasK, HasOther, HasDup, ECnt, Ch; char Word(25); def LF=\$0A, CR=\$0D, EOF=\$1A; [FSet(FOpen("unixdict.txt", 0), ^I); OpenI(3); repeat I:= 0; HasK:= false; HasOther:= false; ECnt:= 0; Set:= 0; HasDup:= false; loop [repeat Ch:= ChIn(3) until Ch # CR; \remove possible CR if Ch=LF or Ch=EOF then quit; Word(I):= Ch; I:= I+1; if Ch = ^k then HasK:= true; case Ch of ^k,^n,^d,^e,^o,^g,^l,^w: [] \assume all lowercase other HasOther:= true; if Ch = ^e then ECnt:= ECnt+1 else [if Set & 1<<(Ch-^a) then HasDup:= true; Set:= Set ! 1<<(Ch-^a); ]; ]; Word(I):= 0; \terminate string if I>=3 & HasK & ~HasOther & ~HasDup & ECnt<=2 then [Text(0, Word); CrLf(0); ]; until Ch = EOF; ]``` Output: ```eke elk keel keen keg ken keno knee kneel knew know knowledge kong leek week wok woke ```
23,993
75,918
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.71875
3
CC-MAIN-2024-30
latest
en
0.86466
https://books-world.net/machines-and-mechanisms-theory/
1,660,555,381,000,000,000
text/html
crawl-data/CC-MAIN-2022-33/segments/1659882572163.61/warc/CC-MAIN-20220815085006-20220815115006-00557.warc.gz
162,446,754
36,704
اسم المؤلف John J. Uicker, Gordon R. Pennock, Joseph E. Shigley التاريخ 1 أبريل 2016 المشاهدات 98 التقييم Theory of Machines and Mechanisms Fifth Edition John J. Uicker, Jr. Professor Emeritus of Mechanical Engineering Gordon R. Pennock Associate Professor of Mechanical Engineering Purdue University Joseph E. Shigley Late Professor Emeritus of Mechanical Engineering The University of Michigan Contents PREFACE xvii Part 1 KINEMATICS AND MECHANISMS 1 1 The World of Mechanisms 3 1.1 Introduction 3 1.2 Analysis and Synthesis 4 1.3 Science of Mechanics 4 1.4 Terminology, Definitions, and Assumptions 6 1.5 Planar, Spheric, and Spatial Mechanisms 10 1.6 Mobility 12 1.7 Characteristics of Mechanisms 17 1.8 Kinematic Inversion 32 1.9 Grashof’s Law 33 1.11 References 39 Problems 40 2 Position, Posture, and Displacement 48 2.1 Locus of a Moving Point 48 2.2 Position of a Point 51 2.3 Position Difference Between Two Points 53 2.4 Apparent Position of a Point 54 2.5 Absolute Position of a Point 55 2.6 Posture of a Rigid Body 56 2.7 Loop-Closure Equations 57 2.8 Graphic Posture Analysis 62 2.9 Algebraic Posture Analysis 69 2.10 Complex-Algebraic Solutions of Planar Vector Equations 73 2.11 Complex Polar Algebra 74 2.12 Posture Analysis Techniques 78 2.13 Coupler-Curve Generation 86 viiviii CONTENTS 2.14 Displacement of a Moving Point 89 2.15 Displacement Difference Between Two Points 89 2.16 Translation and Rotation 91 2.17 Apparent Displacement 92 2.18 Absolute Displacement 94 2.19 Apparent Angular Displacement 94 2.20 References 98 Problems 99 3 Velocity 105 3.1 Definition of Velocity 105 3.2 Rotation of a Rigid Body 106 3.3 Velocity Difference Between Points of a Rigid Body 109 3.4 Velocity Polygons; Velocity Images 111 3.5 Apparent Velocity of a Point in a Moving Coordinate System 119 3.6 Apparent Angular Velocity 126 3.7 Direct Contact and Rolling Contact 126 3.8 Systematic Strategy for Velocity Analysis 128 3.9 Algebraic Velocity Analysis 129 3.10 Complex-Algebraic Velocity Analysis 131 3.11 Method of Kinematic Coefficients 135 3.12 Instantaneous Centers of Velocity 145 3.13 Aronhold-Kennedy Theorem of Three Centers 147 3.14 Locating Instantaneous Centers of Velocity 149 3.15 Velocity Analysis Using Instant Centers 153 3.16 Angular-Velocity-Ratio Theorem 156 3.17 Relationships Between First-Order Kinematic Coefficients and Instant Centers 157 3.18 Freudenstein’s Theorem 160 3.19 Indices of Merit; Mechanical Advantage 162 3.20 Centrodes 164 3.21 References 166 Problems 167 4 Acceleration 180 4.1 Definition of Acceleration 180 4.2 Angular Acceleration 183 4.3 Acceleration Difference Between Points of a Rigid Body 183 4.4 Acceleration Polygons; Acceleration Images 192 4.5 Apparent Acceleration of a Point in a Moving Coordinate System 196CONTENTS ix 4.6 Apparent Angular Acceleration 205 4.7 Direct Contact and Rolling Contact 206 4.8 Systematic Strategy for Acceleration Analysis 212 4.9 Algebraic Acceleration Analysis 213 4.10 Complex-Algebraic Acceleration Analysis 214 4.11 Method of Kinematic Coefficients 216 4.12 Euler-Savary Equation 225 4.13 Bobillier Constructions 230 4.14 Instantaneous Center of Acceleration 234 4.15 Bresse Circle (or de La Hire Circle) 235 4.16 Radius of Curvature of a Point Trajectory Using Kinematic Coefficients 239 4.17 Cubic of Stationary Curvature 242 4.18 References 249 Problems 250 5 Multi-Degree-of-Freedom Mechanisms 258 5.1 Introduction 258 5.2 Posture Analysis; Algebraic Solution 262 5.3 Velocity Analysis; Velocity Polygons 263 5.4 Instantaneous Centers of Velocity 265 5.5 First-Order Kinematic Coefficients 268 5.6 Method of Superposition 273 5.7 Acceleration Analysis; Acceleration Polygons 276 5.8 Second-Order Kinematic Coefficients 278 5.9 Path Curvature of a Coupler Point Trajectory 285 5.10 Finite Difference Method 289 5.11 Reference 292 Problems 292 Part 2 DESIGN OF MECHANISMS 295 6 Cam Design 297 6.1 Introduction 297 6.2 Classification of Cams and Followers 298 6.3 Displacement Diagrams 300 6.4 Graphic Layout of Cam Profiles 303 6.5 Kinematic Coefficients of Follower 307 6.6 High-Speed Cams 312 6.7 Standard Cam Motions 313x CONTENTS 6.8 Matching Derivatives of Displacement Diagrams 323 6.9 Plate Cam with Reciprocating Flat-Face Follower 327 6.10 Plate Cam with Reciprocating Roller Follower 332 6.11 Rigid and Elastic Cam Systems 350 6.12 Dynamics of an Eccentric Cam 351 6.13 Effect of Sliding Friction 355 6.14 Dynamics of Disk Cam with Reciprocating Roller Follower 356 6.15 Dynamics of Elastic Cam Systems 359 6.16 Unbalance, Spring Surge, and Windup 362 6.17 References 363 Problems 363 7 Spur Gears 369 7.1 Terminology and Definitions 369 7.2 Fundamental Law of Toothed Gearing 372 7.3 Involute Properties 373 7.4 Interchangeable Gears; AGMA Standards 375 7.5 Fundamentals of Gear-Tooth Action 376 7.6 Manufacture of Gear Teeth 381 7.7 Interference and Undercutting 384 7.8 Contact Ratio 386 7.9 Varying Center Distance 388 7.10 Involutometry 389 7.11 Nonstandard Gear Teeth 393 7.12 Parallel-Axis Gear Trains 401 7.13 Determining Tooth Numbers 404 7.14 Epicyclic Gear Trains 405 7.15 Analysis of Epicyclic Gear Trains by Formula 407 7.16 Tabular Analysis of Epicyclic Gear Trains 417 7.17 References 421 Problems 421 8 Helical Gears, Bevel Gears, Worms, and Worm Gears 427 8.1 Parallel-Axis Helical Gears 427 8.2 Helical Gear Tooth Relations 428 8.3 Helical Gear Tooth Proportions 430 8.4 Contact of Helical Gear Teeth 431 8.5 Replacing Spur Gears with Helical Gears 432 8.6 Herringbone Gears 433 8.7 Crossed-Axis Helical Gears 434CONTENTS xi 8.8 Straight-Tooth Bevel Gears 436 8.9 Tooth Proportions for Bevel Gears 440 8.10 Bevel Gear Epicyclic Trains 440 8.11 Crown and Face Gears 443 8.12 Spiral Bevel Gears 443 8.13 Hypoid Gears 445 8.14 Worms and Worm Gears 445 8.15 Summers and Differentials 449 8.16 All-Wheel Drive Train 453 8.17 Note 455 Problems 455 9.1 Type, Number, and Dimensional Synthesis 458 9.2 Function Generation, Path Generation, and Body Guidance 459 9.3 Two Finitely Separated Postures of a Rigid Body (N = 2) 460 9.4 Three Finitely Separated Postures of a Rigid Body (N = 3) 465 9.5 Four Finitely Separated Postures of a Rigid Body (N = 4) 474 9.6 Five Finitely Separated Postures of a Rigid Body (N = 5) 481 9.7 Precision Postures; Structural Error; Chebyshev Spacing 481 9.8 Overlay Method 483 9.9 Coupler-Curve Synthesis 485 9.10 Cognate Linkages; Roberts-Chebyshev Theorem 489 9.11 Freudenstein’s Equation 491 9.12 Analytic Synthesis Using Complex Algebra 495 9.13 Synthesis of Dwell Linkages 499 9.14 Intermittent Rotary Motion 500 9.15 References 504 Problems 504 10 Spatial Mechanisms and Robotics 507 10.1 Introduction 507 10.2 Exceptions to the Mobility Criterion 509 10.3 Spatial Posture-Analysis Problem 513 10.4 Spatial Velocity and Acceleration Analyses 518 10.5 Euler Angles 524 10.6 Denavit-Hartenberg Parameters 528 10.7 Transformation-Matrix Posture Analysis 530 10.8 Matrix Velocity and Acceleration Analyses 533 10.9 Generalized Mechanism Analysis Computer Programs 538xii CONTENTS 10.10 Introduction to Robotics 541 10.11 Topological Arrangements of Robotic Arms 542 10.12 Forward Kinematics Problem 543 10.13 Inverse Kinematics Problem 550 10.14 Inverse Velocity and Acceleration Analyses 553 10.15 Robot Actuator Force Analysis 558 10.16 References 561 Problems 562 Part 3 DYNAMICS OF MACHINES 567 11 Static Force Analysis 569 11.1 Introduction 569 11.2 Newton’s Laws 571 11.3 Systems of Units 571 11.4 Applied and Constraint Forces 573 11.5 Free-Body Diagrams 576 11.6 Conditions for Equilibrium 578 11.7 Two- and Three-Force Members 579 11.8 Four- and More-Force Members 589 11.9 Friction-Force Models 591 11.10 Force Analysis with Friction 594 11.11 Spur- and Helical-Gear Force Analysis 597 11.12 Straight-Tooth Bevel-Gear Force Analysis 604 11.13 Method of Virtual Work 608 11.14 Introduction to Buckling 611 11.15 Euler Column Formula 612 11.17 Critical Unit Load and Slenderness Ratio 618 11.18 Johnson’s Parabolic Equation 619 11.19 References 645 Problems 646 12 Dynamic Force Analysis 658 12.1 Introduction 658 12.2 Centroid and Center of Mass 658 12.3 Mass Moments and Products of Inertia 663 12.4 Inertia Forces and d’Alembert’s Principle 666 12.5 Principle of Superposition 674 12.6 Planar Rotation about a Fixed Center 680CONTENTS xiii 12.7 Shaking Forces and Moments 682 12.8 Complex-Algebraic Approach 683 12.9 Equation of Motion from Power Equation 692 12.10 Measuring Mass Moment of Inertia 702 12.11 Transformation of Inertia Axes 705 12.12 Euler’s Equations of Motion 710 12.13 Impulse and Momentum 714 12.14 Angular Impulse and Angular Momentum 714 12.15 References 724 Problems 725 13 Vibration Analysis 743 13.1 Differential Equations of Motion 743 13.2 A Vertical Model 747 13.3 Solution of the Differential Equation 748 13.4 Step Input Forcing 752 13.5 Phase-Plane Representation 755 13.6 Phase-Plane Analysis 757 13.7 Transient Disturbances 760 13.8 Free Vibration with Viscous Damping 764 13.9 Damping Obtained by Experiment 766 13.10 Phase-Plane Representation of Damped Vibration 768 13.11 Response to Periodic Forcing 772 13.12 Harmonic Forcing 776 13.13 Forcing Caused by Unbalance 780 13.14 Relative Motion 781 13.15 Isolation 782 13.16 Rayleigh’s Method 785 13.17 First and Second Critical Speeds of a Shaft 787 13.18 Torsional Systems 793 13.19 References 795 Problems 796 14 Dynamics of Reciprocating Engines 804 14.1 Engine Types 804 14.2 Indicator Diagrams 811 14.3 Dynamic Analysis—General 814 14.4 Gas Forces 814 14.5 Equivalent Masses 816 14.6 Inertia Forces 818xiv CONTENTS 14.7 Bearing Loads in a Single-Cylinder Engine 821 14.8 Shaking Forces of Engines 824 14.9 Computation Hints 825 Problems 828 15 Balancing 830 15.1 Static Unbalance 830 15.2 Equations of Motion 831 15.3 Static Balancing Machines 834 15.4 Dynamic Unbalance 835 15.5 Analysis of Unbalance 837 15.6 Dynamic Balancing 846 15.7 Dynamic Balancing Machines 848 15.8 Field Balancing with a Programmable Calculator 851 15.9 Balancing a Single-Cylinder Engine 854 15.10 Balancing Multi-Cylinder Engines 858 15.11 Analytic Technique for Balancing Multi-Cylinder Engines 862 15.13 Balancing of Machines 874 15.14 References 875 Problems 875 16 Flywheels, Governors, and Gyroscopes 885 16.1 Dynamic Theory of Flywheels 885 16.2 Integration Technique 887 16.3 Multi-Cylinder Engine Torque Summation 890 16.4 Classification of Governors 890 16.5 Centrifugal Governors 892 16.6 Inertia Governors 893 16.7 Mechanical Control Systems 894 16.8 Standard Input Functions 895 16.9 Solution of Linear Differential Equations 897 16.10 Analysis of Proportional-Error Feedback Systems 901 16.11 Introduction to Gyroscopes 905 16.12 Motion of a Gyroscope 906 16.13 Steady or Regular Precession 908 16.14 Forced Precession 911 16.15 References 917 Problems 917CONTENTS xv APPENDIXES APPENDIX A: Tables 919 Table 1 Standard SI Prefixes 919 Table 2 Conversion from US Customary Units to SI Units 920 Table 3 Conversion from SI Units to US Customary Units 920 Table 4 Properties of Areas 921 Table 5 Mass Moments of Inertia 922 Table 6 Involute Function 923 APPENDIX B: Answers to Selected Problems 925 INDEX 935 INDEX Absolute: Acceleration, 181 Displacement, 94 Motion, 32 Position, 55 System of units, 572 Velocity, 105 Acceleration: Absolute, 181 Angular, 183 Apparent, 198 Angular, 183 Average, 180 Components of: Centripetal component (see Normal, component of acceleration) Coriolis component, 199 Normal component, 182, 199 Rolling-contact component, 206 Tangential component, 183, 199 Definition, 180 Difference, 183 Image, 192 Instant center of, 234 Normal component of, 182 Pole, 234 Polygon, 192, 276 Rolling-contact component of, 206 Tangential component of, 183 Action, line of, 377 Actuator, linear, 18 Mechanical Systems), 539 Circle, 370 AGMA (American Gear Manufacturers Association), 375, 375n Air-standard cycle, 811 Alford, H. H., 724n Algebraic analysis, 213 Acceleration, 213 Dynamic force, 683 Loop-closure cases, 73 Posture, 78, 262 Static force, 569 Velocity, 128, 131 All wheel drive train, 453 Alwerdt, J. J., 421n American Institute of Steel Construction (AISC), 617 Ampère, A. M., 5, 5n Analysis: Dynamic force, 658 Elastic body, 744 Rigid body, 570 Static force, 569 Angular: Acceleration, 183 Apparent, 205 Bevel gears, 436 Displacement, 94 Impulse, 714 Momentum, 714 Velocity, 106 Apparent, 126 Ratio theorem, 156 Annular gear, 380 ANSI (American National Standards Institute), 76n ANSYS, 539 Apparent: Acceleration, 196, 198 Angular, 205 Displacement, 92, 106 Angular, 94 Position, 54 Velocity, 119 Angular, 126 Applied force, 573 Approach, 384 Angle, 384 Arc of, 384 Arc: of Approach, 384 of Recess, 385 Area, properties of, 921 Area moment of inertia, 613, 615, 663 935936 INDEX Arm of couple, 574 Aronhold, S. H., 148n Aronhold-Kennedy theorem, 147 Aronhold theorem, 148n Articulated arm, 542 Articulated connecting rod, 806 ATAN2(y,x), 76n Automotive: All wheel drive train, 453 Cruise-control, 905 Differential, 451 Limited slip, 452 Transmission, 404 Average: Acceleration, 180 Velocity, 105 Axes: Body-fixed, 524 Collineation, 161, 230 Instantaneous screw, 145n Principal, 664 Spin, 907 Transformation of inertia, 705 Axial pitch, 447 Axodes, 165n Back cone, 439 Backlash, 372 Baker, J. E., 561n Balancing: Definition, 830 Direct method, 843 Dynamic, 846 Field, 851 Machines: Mechanical compensation, 850 Nodal-Point, 848 of Machines, 874 of Multi-cylinder engines, 858 Numeric analysis, 845 Scalar equations, 845 of Single-cylinder engines, 854 Static, 834 Ball, R. S., 145n, 167n Ball-and-socket joint, 9 Ball’s point, 244 Barrel cam, 298 Base: Circle: of Cam, 304 of Gear, 377 Cylinder, 377 Pitch, 379 Basic units, 572 Beer, F. P., 645n, 724n Bennett, G. T., 561n Berkhof, R. S., 868, 875n Bernoulli, J., 608 Bevel gear, 427 Angular, 436 Epicyclic trains, 440 Forces on, 604 Spiral, 443 Straight-tooth, 436 Tooth proportions, 440 Zerol, 444 Beyer, R., 39n, 167n, 249n, 504n Bhat, R. B., 795n Binormal unit vector, 124 Bistable mechanism, 18 Bobillier constructions, 230 Bobillier theorem, 230 Body-fixed axes, 524 Body guidance, 459 Bohenberger, J. G. F., 917n Bollinger, J. G., 917n Bore-to-stroke ratio, 812 Bottema, O., 167n Branch defect, 483 Bresse circle, 235 Bridgman, P. W., 795n Buckling, 611 Burmester points, 481 Calahan, D. A., 561n Cam: Barrel, 298 Base circle, 304 Circle-arc, 312 Conjugate, 300 Cylindric, 298 Definition of, 298 Disk, 298 Displacement diagram, 300 Dual, 300 Eccentric, 351 Elastic body, 350 End, 298 Face, 298INDEX 937 Follower: Curved-shoe, 298 Flat-face, 298 Knife-edge, 298 Offset, 300 Oscillating, 300 Reciprocating, 298 Roller, 298 Spheric-face, 298 Trace-point, 304 High-speed, 312 Inverse, 298 Motion: Cycloidal, 302, 314 Dwell, 301 Eighth-order polynomial, 315 Half-return, 320 Half-rise, 319 Kinematic coefficients of, 307 Parabolic, 302, 311 Polydyne, 319 Return, 301 Rise, 300 Simple-harmonic, 302, 314 Uniform, 301 Nomenclature, 298 Plate, 298 Pressure angle, 332 Maximum, 333 Prime circle, 304 Profile, 303 Coordinates, 331 Graphic layout, 303 Rigid body, 350 Roller, size of, 336 Standard motions, 313 Tangent, 312 Types: Barrel, 298 Circle-arc, 312 Conjugate, 300 Cylindric, 298 Disk, 298 Dual, 300 Eccentric, 351 End, 298 Face, 298 Inverse, 298 Plate, 298 Tangent, 312 Wedge, 298 Undercut, 327, 335 Wedge, 298 Capek, K., 541 Cardan joint, 529 Cardan suspension, 905 Card factor, 812 Cartesian coordinates, 49 Cayley, A., 490n Cayley diagram, 490 Center-distance modification, 394 Center of curvature, 183n, 225 Center of mass, 658 Center of percussion, 682 Center point, 475 Center point curve, 475 Centrifugal governors, 892 Centripetal component of acceleration (see Normal, component of acceleration) Centrode, 164 Fixed, 165 Moving, 165 Normal, 227 Tangent, 227 Centroid, 658 Definition, 660 Chace, M. A., 513, 539, 561n Chace approach: Loop-closure cases, 513 Posture analysis, 513 Chain, kinematic, 7 Chebychev, P. L., 489 Chebychev spacing, 481 Chen, F. Y., 363n Chuang, J. C., 561n Circle-arc cam, 312 Circle point, 474 Circle point curve, 475 Circling-point curve, 242 Circular: Pitch, 369, 372 Normal, 428 Transverse, 428 Circumscribing circle, 471 Clamping mechanisms, 18 Classification of mechanism, 17 Clausen, W. E., 724n Clearance, 370, 394 Closed kinematic chain, 7 Closed-loop control system, 894 Coefficient: of Friction, 592938 INDEX Coefficient: (continued) Kinematic: First-order, 135, 268 Second-order, 216, 278 of Speed fluctuation, 888 of Viscous damping, 745 Collineation axis, 161, 230 Complex algebraic analysis: Acceleration, 213 Dynamic force, 683 Loop-closure cases, 73 Posture, 78 Velocity, 131 Complex polar algebra, 74 Components of acceleration: Centripetal component (see Normal, component of acceleration) Coriolis component, 199 Normal component, 182 Rolling-contact component, 206 Tangential component, 183 Compound-closed chain, 7 Compound gear train, 404 Compression, 811 Ratio, 813 Computer programs, 538 Concurrency, point of, 582 Concurrent forces, 580 Conjugate: Cams, 300 Points, 225 Profiles, 372 Connecting rod, 807 Articulated, 806 Force, 822 Master, 806 Connectors, 27 Conservation of: Angular momentum, 717 Momentum, 714 Constraint, 59 Force, 512, 573 General, 511 Redundant, 512 Contact: Direct, 126 Gear teeth, 384 Helical gear teeth, 431 Path of, 384 Ratio, 386 Formula, 387 Helical gears Axial, 431 Face, 431 Normal, 431 Total, 432 Transverse, 431 Rolling, 126 Acceleration, 206 Displacement, 95 Velocity, 126 Control systems, mechanical, 894 Conversion of units: SI to U.S. customary, 920 U.S. customary to SI, 920 Coordinates, complex, 73 Coordinate systems, 52 Coplanar motion, 10 Coriolis component of acceleration, 199 Correction planes, 838 Costanzo, F., 645n, 724n Coulomb friction, 592 Counterweight, 871 Couple, 574 Arm of, 574 Characteristics of, 574 Coupler curve, 29 Generation, 86 Synthesis, 485 Coupler point, 285 Couplings, 27 Crane, C., 561n Crankpin force, 823 Limit position of, 34 Spatial, 514 Spheric, 509 Synthesis, 460 Crankshaft, 808 Force, 823 Torque, 824 Critical damping, 832 Coefficient of, 832 Critical speed, 787 Crossed-axis helical gears, 433 Pitch diameters of, 433 Crossed posture, 68 Crown gear, 443 Crown rack, 444 Cubic of stationary curvature, 242 Degenerate forms, 244 Curvature, 182 Center of, 125 Curved-shoe follower, 298 Curve generator, 29 Curvilinear translation, 91n Cycloid, definition, 302 Cycloidal cam motion, 302 Cylinder wall force, 823 Cylindric: Cam, 298 Coordinates, 49 Pair, 10 System), 539 D’Alembert, J., 668 D’Alembert principle, 666 Damping: Coefficient, 745 Viscous, 745 Critical, 832 Factor, 743 Measurement of, 766 Ratio, 832 Viscous, 832 Dedendum, 370 Circle, 370 Deformable body, 570 Analysis, 744 Degrees of freedom, 12 Lower Pairs, 9 Multiple, 258 de Jonge, A. E. R., 249n de La Hire circle, 235 Denavit, J., 9, 39n, 167n, 249n, 489, 504n, 528, 561n Denavit-Hartenberg parameters, 528 Derived unit, 572 Design, definition of, 297 Diagram: Displacement, 300 Free-body, 576 Schematic, 7 Diametral pitch, 369 Normal, 429 Transverse, 429 Diesel-cycle engines, 804 Difference: Displacement, 107 Position, 53 Velocity, 110 Differential, 449 Automotive, 451 All wheel drive train, 453 Limited slip, 452 TORSEN, 453 Worm gear, 453 Chinese, 451 Limited-slip, 453 Mechanism, 450 Screw, 18 Spur gear, 450 TORSEN, 453 Worm gear, 453 Dimensional synthesis, 458 Direct contact, 126 Direction cosines, 49 Disk cam, 298 Displacement, 89 Absolute, 94 Angular, 94 Apparent, 92 Apparent angular, 94 Definition, 89 Diagram, 300 Difference, 89 Virtual, 608 Volume, 813 Disturbance, 760 Transient, 760 Division by complex number, 76 Dobbs, H. H., 453 Double-helical gear, 433 Driver, 6 Dual number, 513 Duffie, N. A., 917n Duffy, J., 561n Dunkerley’s method, 789 Synthesis of, 499 Dwell motion, 301 Dynamic balancing, 846 Dynamic balancing machines, 848 Dynamic equilibrium, 658 Dynamic force analysis, 658 Dynamics: Cam systems, 351 Definition, 4 Reciprocating engines, 804 Eccentric cam, 351 Eccentricity in cam system, 333 Edge mill, 913 Eighth-order polynomial cam motion, 315 Eisenberg, E. R., 645n Elastic-body analysis, 744940 INDEX Elliptical gears, 166 End effecter, 542 Engine, 804 Crank arrangement, 805 Cycle, 804 Diesel-cycle, 804 Firing order, 805 Five-cylinder, 806 Four-cylinder, 859 Indicator, 811 In line, 805 Opposed piston, 806 Otto-cycle, 804 Shaking force, 824 Single cylinder, 854 Six cylinder, 808 Three-cylinder, 805 Two-cylinder, 858 Types, 804 V-type, 805 Epicyclic gear, 405 Epicyclic gear train types, 407 Formula analysis, 407 Tabular analysis, 417 Equation of motion, 692 Euler’s, 710 Equilibrium, 578 Conditions, 578 Dynamic, 578, 597 Static, 578 Equivalent: Gear, 430 Mass, 816 Erdman, A. G., 504n, 561n Error: Graphic, 482 Mechanical, 482 Structural, 482 Escapements, 18 Graham’s, 19 Euler, L., 4, 4n, 39n, 917n Euler angles, 524 Euler column formula, 612 Euler equation, 75 Euler-Savary equation, 225, 230 Euler’s equations of motion, 710 Exhaust, 805 Expansion, 811 Extreme positions of crank-rocker linkage, 34 Extreme values of velocity, 161 Face cam, 298 Face gear, 443 Face width: of Cam follower, 330 of Helical gears, 431 of Worm gear, 448 Fagerstrom, W. B., 851n Feedback control system, 894 Ferguson, J., 419n Fillet, 370 Finite difference method, 287 Finitely separated postures of a rigid body, 460 Center point, 468 Circle point, 474 Five posture synthesis, 481 Four posture synthesis, 474 Center point curve, 475 Circle point curve, 475 Pole, 462 Pole Triangle, 465 Two posture synthesis, 460 Three posture synthesis, 465 Firing order, 805 First-order kinematic coefficients, 135 Relationship to instant centers of velocity, 157 Fisher, F. E., 724n Geared, 259 Five-cylinder engine, 806 Fixed centrode, 164–165 Flat-face follower, 298, 306 Flat pair, 10 Flip-flop mechanism, 18 Float in cam systems, 353 Flywheels, 885 Follower, 6 Force, 570 Applied, 573 Characteristics of, 573 Constraint, 573 External, 576 Friction, 591–592 Indeterminate, 512 Inertia, 666 Internal, 576 Polygon, 582 Transmitted, 598 Unit of, 569 Vector, 570INDEX 941 Force analysis: Analytic, 583 of Bevel gears, 604 with Friction, 594 Graphic, 580 of Helical gears, 597 of Robot actuators, 558 of Spur gears, 597 Forced precession, 911 Form cutter, 381 Forward kinematics, 543 Foster, D. E., 167n Foucault, L., 905 Algebraic posture analysis, 72 Analysis of, 67 Angular velocity relations, 156 Inversions of, 33 Spatial, 510 Spheric, 510 Four-circle method, 235 Four-cylinder engine, 859 Four-force member, 589 Four-stroke engine cycle, 805 Frame, 7 Free-body diagram, 576 Freedom: Degrees of, 12 Idle, 511 Free vector, 575 Free vibration, 764 with viscous damping, 764 Frequency, 743 Freudenstein, F., 167n, 504n, 561n Freudenstein’s equation, 491 Freudenstein’s theorem, 160 Friction, 591–592 Angle, 593 Coefficient of, 592 Coulomb, 592 Force, 592 Force models, 591–592 Sliding, 355, 592 Static, 592 Viscous, 593 Full depth, 376 Full-return cam motion, 316 Full-rise cam motion, 314 Function generation, 459 Fundamental law of toothed gearing, 372 Ganter, M. A., 363n Gantry robot, 542 Gas force, 814 Gas law, 811 Gear, 369 Differentials, 449 Graphical layout, 377 Manufacture, 381 Tooth action, 376 Tooth sizes, 375 Tooth terminology, 370 TORSEN, 413 Train: Compound, 404 Epicyclic, 405 Analysis by formula, 407 Analysis by table, 417 Bevel gear, 440 Planetary, 406 Tabular analysis, 417 Reverted, 404 Series connected, 403 Type of: Annular, 380 Bevel, 427 Angular, 436 Spiral, 443 Straight-tooth, 436 Tooth proportions, 440 Crossed-axis helical, 433 Crown, 443 Double-helical, 433 Elliptical, 166 Epicyclic, 405 Face, 443 Helical, 427 Herringbone, 433 Hypoid, 445 Internal, 380 Miter, 436 Planet, 406 Ring, 451 Spiral, 433 Spur, 369 Sun, 406 Worm, 427 Zerol bevel, 444 Worm: Differential, 453 Generalized mechanism analysis programs, 538 Generating cutter, 381 Generating line, 372 Generators: Curve, 29 Function, 459 Straight-line, 31 Geneva mechanism, 20942 INDEX Geneva wheel, 20 Gleasman, V., 413 Globular pair, 10 Goldberg, M., 561n Goodman, T. P., 504n Gough, V. E., 561n Governors, 890 Centrifugal, 892 Electronic, 905 Inertia, 893 Graham’s escapement, 19 Graphic error, 482 Grashof’s law, 33 Gravitational system of units, 572 Gravity, standard, 572 Gray, G. L., 645n, 724n Grodzinsky, P., 504n Grübler, M. F, 39n Grübler’s criterion, 14 Gustavson, R. E., 504n, 561n Gyroscope: Definition of, 905 Motion of, 906 Gyroscopic moment, 911 Hain, K., 167n, 249n, 485n, 504n Half-cycloidal cam motion, 321 Half-harmonic cam motion, 319 Hall, A. S., Jr., 161n, 167n, 249n, 504n Hand and thrust relations of helical gears, 433 Harmonic forcing, 776 Harmonic motion, 314 Harrisberger, L., 509, 561n Hartenberg, R. S., 9, 39n, 249n, 489, 504n, 528, 561n Hartmann construction, 227 Haug, E. J., 539, 561n Helical gears: Contact ratio: Axial, 431 Face, 431 Normal, 431 Total, 432 Transverse, 431 Crossed-axis: Hand and thrust relations, 433 Tooth proportions, 435 Double, 433 Face width, 431 Forces on, 599 Helix angle, 428 Overlap, 432 Parallel-axis, 427 Tooth proportions, 430 Pitch: Axial, 428 Normal circular, 428 Normal diametral, 429 Transverse circular, 428 Transverse diametral, 429 Pressure angle: Normal, 429 Transverse, 429 Replacing spur gears with, 432 Helical motion, 50 Helical pair, 9 Helix angle, 428 Herringbone gears, 433 Hertz, H. R., 751n Hesitation mechanisms, 29 Hesitation motion, 506 Higher pair, 8 Hindley worm, 446 Hinkle, R. T., 489, 504n Hirschhorn, J., 249n, 504n Hob, 342 Hobbing, 342 Hodges, H., 455 Holowenko, A. R., 39n Holzer tabulation method, 795 Hooke universal joint, 28, 529 Hrones, J. A., 30, 39n Hrones and Nelson atlas, 30 Humpage’s reduction gear, 440 Hunt, K. H., 504n, 561n Hypoid gears, 445 Idle freedom, 511 Idler, 403 Image: Acceleration, 192 Point, 468 Pole, 468 Velocity, 114 Imaginary-mass method of balancing, 856 IMP (Integrated Mechanisms Program), 539 Impulse, 714 Angular, 714 Indeterminate force, 512 Indexing mechanisms, 20 Indicator: Diagram, 811 Engine, 811 Indices of merit, 162 Inertia: Axes, principal, 664 Axes, transformation of, 705INDEX 943 Definition, 570 Force, 666 in Engines, 818 Governors, 893 Primary, 820 Secondary, 820 Mass moment of, 663, 922 Mass product of, 663 Measurement of, 702 Tensor, 664 Torque, 820 Inflection circle, 228 Inflection pole, 228 Influence coefficients, 787 In-line engine, 805 Instantaneous: Acceleration, 180 Center: of Acceleration, 234 Four-circle method of locating, 235 of Velocity, 145 Locating, 149 in Multi-degree-of-freedom planar Number of, 147 Relationship to first-order kinematic coefficients, 157 Use for velocity analysis, 153 Screw Axis, 145n Velocity, 105 Integration by Simpson’s rule, 887 Interference, 384 Internal gear, 380 International Standards Organization (ISO), 76n International System (SI) of units, 572 Inverse: Acceleration analysis, 553 Cam, 298 Kinematics, 550 Velocity analysis, 553 Inversion: Kinematic, 32 Involute: Curve, 374 Function, 390, 923 Generation of, 374 Helicoid, 427 Properties, 372 Involutometry, 390 Isolation, 782 Jacobian, 164 Jamming, 37 Jerk, 311 Johnson, J. B., 618n Johnson parabolic equation, 618 Johnston, E. R., 724n Johnston, E. R., Jr., 645n Joint, types of: Balanced, 584 Cardan, 529 Hooke, 529 Turning, 9 Universal, 529 Wrapping, 10 Jump, in cam systems, 353 Jump speed, 353 KAM (Kinematic Analysis Method), 539 Kaufman, R. E., 541, 561n Kennedy, A. B. W., 6n, 39n, 148n Kennedy circle, 148 Kennedy’s theorem, 148 Kinematic chain, kind, 7 Kinematic coefficients: First-order, 135 Relationship to instant centers, 157 Velocity analysis, 135 Rolling contact condition, 143 Second-Order, 216 Acceleration analysis, 216 Relationship to radius and center of curvature, 239 Kinematic inversion, 32 Kinematic pair, 6 Kinematics: Definition, 5 Direct, 543 Forward, 543 Inverse, 550 Kinematic synthesis, 458 Kinetic energy, 693 Kinetics, definition, 5 KINSYN (KINematic SYNthesis), 540 Kloomak, M., 363n Knife-edge follower, 298 Kota, S., 504n Kraige, L. G., 724n Krause, R., 160, 167n Kuenzel, H., 504n Kutzbach, K., 39n Kutzbach mobility criterion, 12 Law of gearing, 372 Lévai, Z. L., 421n Lévai epicyclic gear train types, 407 Lever, 18 Lichty, L. C., 875n Lift, 301 Limited slip differential, 452 Limit posture, 37 LINCAGES, 540 Line: of Action, 377 of Centers, 372 Coordinates, 555 Linear actuators, 18 Linearity, 135 Linear system, 134 Binary, 7 Definition of, 6 Function of, 6 Ternary, 7 Balancing of, 868 Definition, 10 Planar, 10 Quick-return, 21 Types of: Bennett, 510 Bricard, 511 Chebychev, 31 Cognate, 489 Crank-rocker, 21 Crank-shaper, 22 Differential screw, 18 Double-crank, 35 Double-rocker, 35 Dwell, 499 Five-bar, 258 Four-bar, 21 Geared five-bar, 259 Geneva, 20 Goldberg, 511 Maltese cross, 20 Pantograph, 32 Peaucillier inversor, 31 Quick return, 21 Reuleaux coupling, 28 Roberts’, 31 Scotch-yoke, 22, 151 Scott-Russell, 32 Six-bar, 22 Slider-crank, 22 Isosceles, 460 Offset, 22 Sliding-block, 78 Spheric, 11 Wanzer needle-bar, 23 Watt’s, 31 Whitworth, 22 Wobble plate, 510 Locational devices, 18 Location of a point, 48 Locus, 48 Logarithmic decrement, 767 Loop-closure equation, 57 Cases of, 66, 513 Lowen, G. G., 868, 875n Lower pair, 8 Machine, definition of, 6 Maleev, V. L., 875n Maltese cross, 20 Manipulator, 258 Mass: Center of, 660 Definition, 570 Equivalent, 816 Moment of inertia, 663, 922 Product of inertia, 663 Unit of, 572 Master connecting rod, 806 Matter, definition, 570 Matthew, G. K., 363n Maxwell’s reciprocity theorem, 788 Mean effective pressure, 812 Mechanical: Compensation balancing method, 850 Control systems, 894 Efficiency, 812 Error, 482 Mechanics: Definition of, 4 Divisions of, 4 Mechanism: Analysis, computer, 538 Definition of, 6 Trains, 401 Bistable, 18 Cam, 21 Cam-and-follower, 97 Clamping, 18 Escapement, 18 Flip-flop, 18 Indexing, 20 Linear actuator, 18 Locational, 18 Oscillator, 20 Planar, 10 Quick-return, 21 Rack and pinion, 97 Ratchet, 18 Reciprocating, 21 Reversing, 27 Rocking, 20 Snap-action, 18 Spatial, 11 Stop, pause, hesitation, 29 Straight-line, 31 Swinging, 20 Toggle, 18 Mehmke, R., 114, 167n Meriam, J. L., 724n Merit indices, 162 M’Ewan, E., 504n Milling of gear teeth, 381 Mischke, C. R., 98n, 421n, 504n, 645n, 795n Miter gears, 436 Mobility, 12 Exceptions to criteria, 13 Kutzbach criterion, 12 Module, 371 Molian, S., 363n Moment: of a Couple, 574 Gyroscopic, 911 of Impulse, 715 of Inertia: Area, 613, 615 Mass, 663 Measurement, 702 Polygon, 838 Shaking, 682 Vector, 575 Momentum, 714 Angular, 714 Movability, definition, 12n Moving centrode, 165 Moving point: Acceleration of, 181 Displacement of, 93 Locus of, 48 Velocity of, 105 MSC Working Model, 539 Muffley, R. V., 363n Müller, R., 167n Multi-degree of freedom, 258 NASTRAN, 539 Natural frequency, 743 Damped, 766 Neale, M. J., 645n Nelson, G. L., 30 Newton (unit), 572 Newton, I., 645n Newton-Raphson method, 70 Newton’s laws, 571 Newton’s notation, 745 Nodal-point balancing method, 848 Normal: Component of acceleration, 182 Unit vector, 124 Notation, complex-rectangular, 74 Offset circle, 305 Offset follower, 300 Open kinematic chain, 7 Open posture, 68 Opposed-piston engine, 806 Order defect, 483 Orlandea, N., 539, 561n Oscillating follower, 300 Osculating circles, 225 Osculating plane, 124 Ostenfeld, A., 618n Otto-cycle engine, 804 Overconstrained, 13 Overdrive unit, 420 Overlay method, 483 Pair: Higher, 8 Lower, 8 Cylindric, 10 Flat, 10 Helical, 9 Prismatic, 9 Revolute, 9 Spheric, 10 Wrapping, 10 Variable, 8 Parabolic motion, 302 Parallel-axis formula, 664 Parallel-axis helical gears, 427 Parker, J. W., 917n Particle, definition, 50 Particle motion, equation of, 50946 INDEX Path: Coordinates, 51 Curvature, 285 Generation, 459 Point, 89 Pause mechanisms, 29 Pawl, 19 Peaucillier inversor, 31 Pendulum: Equation of, 702 Three-string, 704 Torsional, 702 Trifilar, 704 Pennock, G. R., 167n, 421n, 561n Percussion, center of, 682 Periodic forcing, 772 Response to, 772 Period of vibration, 743 Phase, of motion, 751 Phase angle, 751 Phase plane: Analysis, 757 Method, 757 Representation, 755 Phasor, 750 Phillips, J., 512n, 561n Pinion, 369 Pin joint (see Pair, Types of, Lower, Revolute) Piston acceleration, 814 Piston-pin force, 823 Pitch: Angle, 438 Axial, 428, 447 Base, 379 Circle, 369 Circular, 369 Normal, 428 Transverse, 428 Curve, of cam, 304 Diametral, 369 Normal, 429 Transverse, 429 Point, 372 Surface, of bevel gear, 437–438 Planar: Mechanism, 10 Motion, 51 Pair, 10 Vector equations, 64 Plane of couple, 574 Planet: Carrier, 406 Gear, 406 Planetary gear train, 406 Force analysis, 602 Plate cam, 298 Plesha, M. E., 645n, 724n Plücker coordinates, 555 Point: Acceleration: Absolute, 181 Apparent, 196 Difference, 183 Definition, 50 Displacement: Absolute, 89, 94 Apparent, 92 Difference, 89 Pitch, 372 Position: Absolute, 55 Apparent, 54 Difference, 53 Velocity: Absolute, 105 Apparent, 119 Difference, 109 Polar notation, 73 Pole, 462 Pole triangle, 465 Polode, 165n Polydyne cam, 319 Polygon: Acceleration, 192 Force, 582 Moment, 838 Velocity, 113, 263 Pose, of rigid body, 57 Position: Absolute, 55 Apparent, 54 Difference, 53 Rigid body, 56 Posture: Analysis: Algebraic, 69, 262 Graphic, 62 of Spatial mechanism, 513 Techniques, 78 Precision, 481 Rigid body, 56 Potential energy, 694 Power equation, 692 Power stroke, 805INDEX 947 Precession: Forced, 911 Regular, 908 Precision postures, 481 Prefixes, standard SI, 919 Pressure, mean effective, 812 Pressure angle, 164 Maximum, 333 Normal, 429 Transverse, 429 Pressure line, 377 Prime circle, 304 Principal axes, 664 Principle of superposition, 674 Prismatic pair, 9 Products of inertia, 663 Pro/ENGINEER Mechanism Dynamics, 539 Programs, computer, 538 Proportional-error feedback systems, 901 Quaternion, 513 Quick-return mechanism, 21 Rack, 19 of curvature, 182 of cam profile charts for minimum, 337 equation, 328 of gyration, 616 Rapson’s slide, 220 Ratchets, 18 Rathbone, T. C., 795, 875n Ravani, B., 561n Raven, F. H., 167n Raven’s method: for Acceleration, 213 for Posture, 74 for Velocity, 131 Rayleigh, Baron, 795n Rayleigh-Ritz equation, 787 Rayleigh’s method, 785 Recess: Angle, 384 Arc of, 385 Reciprocating: Engines, dynamics of, 804 Follower, 298 Reciprocity, Maxwell’s theorem, 788 RECSYN (RECtified SYNthesis), 540 Rectangular notation, 74 Rectilinear motion, 51 Rectilinear translation, 91n Redundant constraint, 512 Reference system, 48 Regular precession, 908 Relative motion, 32, 781 Resonance, 743 Response curve, 744 Return, motion of cam, 301 Reuleaux, F., 6, 6n Reuleaux coupling, 28 Reverted gear train, 404 Revolute, 9 Rigid body, 570 Cam, 350 Posture, 56 Rotation of, 106 Velocity difference between, 109 Rise, motion of cam, 300 Roberts, S., 489n Roberts-Chebychev theorem, 489 Robot, 541 Robotics, 541 Robot Institute of America (RIA), 542 Roller follower, 298, 305 Rolling contact, 126 Acceleration, 206 Displacement, 95 Velocity, 126 Rosenauer, N., 167n, 249n Rotation: of crossed-helical gears, 433 Definition, 91 of Rigid body, 106 Roth, B., 167n Rothbart, H. A., 504n Roulettes, 165n Sandor, G. N., 504n Sankar, T. S., 795n SCARA robot, 542 Scarborough, J. B., 917n Screw: Axis, instantaneous, 145n948 INDEX Screw: (continued) Differential, 19 Pair (see Pair, Types of, Lower, Helical) Second harmonic forces, 857 Second-order kinematic coefficients, 216 Relationship to radius of and center of curvature, 239 Shaking: Forces, 682 Engine, 824 Moments, 682 Shaping, 381 Sheth, P. N., 539, 561n Shigley, J. E., 421n, 504n, 645n, 795n SI (System International), 572 Conversion to U.S. customary units, 920 Prefixes, 919 Units, 572 Simple-closed chain, 7 Simple gear train, 404 Simple-harmonic cam motion, 302 Simpson’s rule integration, 887 Single cylinder engine, 821 Single plane balancers, 834 Skew curve, 51 Slenderness ratio, 616 Algebraic posture analysis, 69 Analysis of, 66 Inversions of, 33 Limit positions, 22 Offset, 27 Synthesis, 460 Synthesis of, 27 Sliding connectors, 28 Sliding friction, 355, 592 Snap-action mechanism, 18 Soni, A. H., 504n Spatial: Graphic analysis, 514 Mechanism, 507 Motion, 51 Speed fluctuation, coefficient of, 888 Speed ratio, 402 Spheric: Coordinates, 49 Joint, 10 Mechanism, 507 Spheric-face follower, 298 Spheric-slide oscillator, 510 Spin axis, 907 Spiral angle, 444 Spiral gears, 433 Spring: Rate, 351 Stiffness, 745 Surge, 362 Spur gears, 369 Forces on, 598 Standard gear tooth proportions, 375 Standard gravity, 532 Starting transient, 774 Statically indeterminate force, 512 Static balancing machines, 834 Static force analysis, 569 Static friction, 592 Statics, definition, 4 Static unbalance, 830 Stationary curvature, 242 Step input forcing, 752 Stevensen, E. N., Jr., 504n, 856, 874n, 875n Stevensen’s rule, 857 Stiction, 597 Stoddart, D. A., 363n Stop mechanisms, 29 Straight-line generators, 31 Straight-tooth bevel gears, 436 Forces on, 604 Strong„ R. T., 561n Structural error, 482 Structure: Definition, 6 Statically indeterminate, 13 Strutt, J. W., 795n Stub tooth, 376 Suction stroke, 805 Suh, C. H., 504n Summing mechanism, 449 Sun gear, 406 Superposition, 273 Principle of, 674 Swashplate, 708 Synthesis: Coupler-curve, 485 Definition, 4 Dimensional, 458 Kinematic, 458 Number, 458 Type, 458 Tabular analysis of epicyclic gear trains, 417 Tangent cam, 312INDEX 949 Tangential component of acceleration, 183 Tao, D. C., 249n, 504n Tesar, D., 363n Thearle, E. L., 795, 875n Theorem of Mehmke, 114 Three cylinder engine, 805 Three-force member, 579 Three-string pendulum, 704 Toggle: Mechanism, 18 Posture, 37 Tooth proportions: Bevel gears, 440 Helical gears, 430 Spur gears, 376 Tooth sizes, 371 Tooth thickness, 370 Torfason, L. E., 18, 39n Torque characteristics of engines, 810 TORSEN differential, 413 Torsional pendulum, 702 Torsional system, 793 Trace point, 304 Train value, 402 Transfer formula, 664 Transformation matrix, 513 Transient disturbances, 760 Transient vibration, 744 Translation: Curvilinear, 91n Definition, 91 Rectilinear, 91n Transmissibility, 782 Transmission, automotive, 404 Transmission angle, 37, 73 Extremes of, 73 Transmitted force, 598 Tredgold’s approximation, 439 Trifilar pendulum, 704 True toggle mechanism, 18 Turning pair (see Pair, Types of, Lower, Revolute) Two-cylinder engine, 858 Two-force member, 579 Two-stroke engine cycle, 805 Type synthesis, 458 Uicker, J. J., Jr, 363n, 539, 561n Unbalance: Analysis of, 837 in Cam systems, 362 Dynamic, 835 Forcing caused by, 780 Static, 830 Units of, 846 Undercutting: in Cam systems, 327 Elimination of, 330 in Gear systems, 384 Uniform motion, 301 Units: Conversion: SI to U.S. customary, 920 U.S. customary to SI, 920 Systems of, 571 Unit vector: Binormal, 124 Normal, 124 Tangent, 124 Universal joint, 28 Vector: Angular momentum, 715 Approach to rotor balancing, 839 Graphical operations, 62 Loop-closure cases, 66 Subtraction, 62 Tetrahedron equation, 513 Solutions, 513 Type of: Absolute acceleration, 181 Absolute displacement, 89, 94 Absolute position, 55 Absolute velocity, 105 Acceleration, 181 Acceleration difference, 183 Apparent acceleration, 196 Apparent displacement, 92 Apparent position, 54 Apparent velocity, 119 Displacement, 89 Displacement difference, 89 Force, 573 Moment, 574 Position, 55 Position difference, 53 Unit, 52 Velocity, 105 Velocity difference, 109 Velocity, 105 Absolute, 105 Analysis: Graphic, 111 Inverse, 553950 INDEX Velocity, (continued) Polygons, 263 of Spatial mechanism, 518 Systematic strategy for, 128 Using instantaneous centers, 153 Angular, 106 Apparent, 119 Average, 105 Condition for rolling contact, 126 Definition, 105 Difference, 109 Image, 114 Size of, 114 Instantaneous, 105 Instantaneous centers of, 145 Locating, 150 Using, 153 Matrix, 533 Poles, 165n, 225 Polygon, 113, 263 Ratio, Angular, 156 Vibration: Definition, 743 Forced, 743 Free, 743 Isolation, 782 Phase-plane representation of, 768 Virtual displacement, 608 Virtual-rotor method of balancing, 856 Virtual work, 608 Viscous damping: Coefficient of, 745 Free vibration with, 764 V-type engine, 805 Waldron, K. J., 504n, 561n Wanzer. R. M., 23 WATT Mechanism Design Tool, 540 Wedge cam, 298 Weight, definition, 570 Weight/mass controversy, 572, 572n Whole depth, 372 Willis, A. H., 249n Willis, R. W., 917n Windup, 362 Wobble-plate mechanism, 510 Working stroke, 22 Worm, 427 Worm gear, 427, 446 Worm gear differential, 453 Wrapping pair, 10 Wrist-pin force, 816 Yang, A. T., 561n Young’s modulus, 612 Zerol bevel gear, 444 كلمة سر فك الضغط : books-world.net
14,290
40,580
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.65625
3
CC-MAIN-2022-33
latest
en
0.528111
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-1-foundations-for-algebra-1-7-the-distributive-property-practice-and-problem-solving-exercises-page-52/87
1,571,819,622,000,000,000
text/html
crawl-data/CC-MAIN-2019-43/segments/1570987829507.97/warc/CC-MAIN-20191023071040-20191023094540-00501.warc.gz
883,720,351
13,985
# Chapter 1 - Foundations for Algebra - 1-7 The Distributive Property - Practice and Problem-Solving Exercises - Page 52: 87 $24-2t$ #### Work Step by Step $9(5+t)-7(t+3)$ Distribute: $45+9t-7t-21$ Combine like terms: $45-21+9t-7t$ = $24-2t$ After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
125
405
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.9375
4
CC-MAIN-2019-43
latest
en
0.759763
https://mathematica.stackexchange.com/questions/245267/tensor-multiplication
1,653,735,844,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652663016373.86/warc/CC-MAIN-20220528093113-20220528123113-00728.warc.gz
422,560,890
67,432
# Tensor multiplication I have eight Tensors to multiply as follows, $$P=\sum_{all indices}M_{ijkl}M_{mjkl}M_{inkl}M_{mnkl}X_{kl}Y_{kl}X_{kl}Y_{kl}$$ Each M Matrix is say $$2^7\times2^7 \times2^7 \times 2^7$$ size. Is there any efficient way to perform this multiplication? Tensor contract even with Active and Inactive fails due to the exceptionally large Tensor(rank 10-12 ) that gets generated in between which requires huge amount of memory. The greater than 2 repeated indices is not a mistake, it is what it is. • An explicit $M$ is appreciated, I guess. Apr 29, 2021 at 7:38 • You can take it to be a completely randomreal matrix Apr 29, 2021 at 17:28 I think, it is quite feasible. Let $$N$$ be the tensor dimension, $$N=2^7$$ in your case. I claim that the computational cost is $$\mathcal{O}(N^5)$$, which is around $$3.2\times 10^{10}$$, i.e., the tensor contraction can be computed within 1 minute on a laptop. Observe the following: 1. $$M$$ can be contracted with $$X$$ or $$Y$$ beforehand yielding $$A_{ijkl}$$, $$B_{mjkl}$$, $$A_{inkl}$$ and $$B_{mnkl}$$. This is just a $$\mathcal{O}(N^4)$$ operation, use Table for that. 2. Consider first the inner sum over $$i,j,m,n$$. This can be done sequentially via matrix multiplication (use . and Tr) as follows at the $$\mathcal{O}(3 N^3)$$ cost $$T= A.B,\\ T= T.A,\\ T=T.B,\\ x_{kl}=\mathrm{Tr}(T).$$ 1. Finally, one performs the sum $$\sum_{kl}x_{kl}$$ with the $$\mathcal{O}(N^2)$$ cost (Sum or ParallelSum). Total computational cost is $$\mathcal{O}(N^5)$$. The space requirements are also very modest: one needs to store only 2 additional tensors $$A$$ and $$B$$ and a matrix $$T$$. Total additional storage $$N^2(2 N^2+1)$$, i.e., $$\mathcal{O}(N^4)$$. The mathematica code could be as simple as A=Table[..]; B=Table[..]; Sum[a=A[[All,All,k,l]]; b=B[[All,All,k,l]]; Tr[a.b.a.b],{k,N},{l,N}] • This seems easily achievable thanks. I shall do it and check, shall accept it as an answer once I do it. Apr 29, 2021 at 21:35 • @RoopayanGhosh Have you already checked it? May 4, 2021 at 18:11 • Oh thanks for reminding, I did and it worked, accepting the answer. Sorry I forgot May 5, 2021 at 19:32 Making the ideas in @yarchik's solution explicit: using smaller versions of your matrices, t = 4; M = Array[mm, {t, t, t, t}]; X = Array[xx, {t, t}]; Y = Array[yy, {t, t}]; the exact sum you're looking for is S = Sum[M[[i, j, k, l]] M[[m, j, k, l]] M[[i, n, k, l]] M[[m, n, k, l]] X[[k, l]] Y[[k, l]] X[[k, l]] Y[[k, l]], {i, t}, {j, t}, {k, t}, {l, t}, {m, t}, {n, t}]; define intermediates MX and MY: MX = Transpose[Transpose[M, {3, 4, 1, 2}]*X, {3, 4, 1, 2}]; MY = Transpose[Transpose[M, {3, 4, 1, 2}]*Y, {3, 4, 1, 2}]; define an intermediate A: this step can probably also be done with a list-processing (linear algebra) operation instead of Table; but I can't figure it out right now, A = Table[MX[[All, j, k, l]] . MY[[All, n, k, l]], {j, t}, {n, t}, {k, t}, {l, t}]; Now the sum is a scalar product: S == Flatten[Transpose[A]] . Flatten[A] // Expand (* True *) • A is rank-4 and the whole thing scales as yours, as far as I can see. Apr 29, 2021 at 16:47 • This seems to have same complexity to me as well. Thanks for the explicit answer. Apr 29, 2021 at 21:37 • @Roman Sorry, I misunderstood your method. It became completely clear after you added All. Apr 30, 2021 at 5:21
1,153
3,374
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.578125
4
CC-MAIN-2022-21
longest
en
0.856805
https://math.stackexchange.com/questions/846011/precise-definition-of-the-support-of-a-random-variable/846081
1,560,962,626,000,000,000
text/html
crawl-data/CC-MAIN-2019-26/segments/1560627999003.64/warc/CC-MAIN-20190619163847-20190619185847-00466.warc.gz
511,190,273
41,276
# Precise definition of the support of a random variable $$\newcommand{\F}{\mathcal{F}} \newcommand{\powset}[1]{\mathcal{P}(#1)}$$ I am reading lecture notes which contradict my understanding of random variables. Suppose we have a probability space $$(\Omega, \mathcal{F}, Pr)$$, where • $$\Omega$$ is the set of outcomes • $$\F \subseteq \powset{\Omega}$$ is the collection of events, a $$\sigma$$-algebra • $$\Pr:\Omega\to[0,1]$$ is the mapping outcomes to their probabilities. If we take the standard definition of a random variable $$X$$, it is actually a function from the sample space to real values, i.e. $$X:\Omega \to \mathbb{R}$$. What now confuses me is the precise definition of the term support. the support of a function is the set of points where the function is not zero valued. Now, applying this definition to our random variable $$X$$, these lectures notes say: Random Variables – A random variable is a real valued function defined on the sample space of an experiment. Associated with each random variable is a probability density function (pdf) for the random variable. The sample space is also called the support of a random variable. I am not entirely convinced with the line the sample space is also callled the support of a random variable. Why would $$\Omega$$ be the support of $$X$$? What if the random variable $$X$$ so happened to map some element $$\omega \in \Omega$$ to the real number $$0$$, then that element would not be in the support? What is even more confusing is, when we talk about support, do we mean that of $$X$$ or that of the distribution function $$\Pr$$? It is more accurate to speak of the support of the distribution than that of the support of the random variable. Do we interpret the support to be • the set of outcomes in $$\Omega$$ which have a non-zero probability, • the set of values that $$X$$ can take with non-zero probability? I think being precise is important, although my literature does not seem very rigorous. • The support of a random variable $X$ with values in $\mathbb{R}^n$ is the set $\{x\in\mathbb{R}^n\mid P_X(B(x,r))>0,\text{for all } r>0\}$ where $B(x,r)$ denotes the ball with center at $x$ and radius $r$. In particular, the support is a subset of $\mathbb{R}^n$. – Stefan Hansen Jun 24 '14 at 16:20 • What @StefanHansen said, or the smallest closed set $C$ such that $P_X(C)=1$. – Did Jun 24 '14 at 17:16 • @Did your definition is particularly intuitive. – jII Jul 3 '14 at 2:51 I am not entirely convinced with the line the sample space is also called the support of a random variable That looks quite wrong to me. What is even more confusing is, when we talk about support, do we mean that of $X$ or that of the distribution function $Pr$? In rather informal terms, the "support" of a random variable $X$ is defined as the support (in the function sense) of the density function $f_X(x)$. I say, in rather informal terms, because the density function is a quite intuitive and practical concept for dealing with probabilities, but no so much when speaking of probability in general and formal terms. For one thing, it's not a proper function for "discrete distributions" (again, a practical but loose concept). In more formal/strict terms, the comment of Stefan fits the bill. Do we interpret the support to be - the set of outcomes in Ω which have a non-zero probability, - the set of values that X can take with non-zero probability? Neither, actually. Consider a random variable that has a uniform density in $[0,1]$, with $\Omega = \mathbb{R}$. Then the support is the full interval $[0,1]$ - which is a subset of $\Omega$. But, then, of course, say $x=1/2$ belongs to the support. But the probability that $X$ takes this value is zero. • @StefanHansen For my example (a uniform density in $[0,1]$) $\Omega$ is $\mathbb{R}$ – leonbloy Jun 25 '14 at 0:19 The support of the density $f_x(.)$$is the range of values of the random variable X for which the density function is positive, i.e.$\mathcal{R}_x:= \{{x\in \mathcal{R}_X : fx(x) > 0\}}$Note that$f_x(.)$is the probability density/mass function. • For a density function of a continuous random variable,$f(x)$is$0$for every$x$, so this is not a correct definition – IceFire Jun 6 '18 at 11:40 • @IceFire That's not accurate. What is actually$0$is the probability$\mathrm{Pr}(X = x) = 0$for any$x\in\mathbb{R}$. If the probability density function were$0$for all$x\in\mathbb{R}$, what use would it be? For continuous random variables,$f_X\colon \mathbb{R} \to \mathbb{R}$is defined as the unique function such that$\mathrm{Pr}(X \in [a, b]) = \int_a^b f_X(x) \mathrm{d}x$. Then the fact that$\mathrm{Pr}(X = x) = 0$is a direct consequence:$\mathrm{Pr}(X = x) = \mathrm{Pr}(X \in [x, x]) = \int_x^x f_X(t) \mathrm{d}t = 0\$. So this, albeit not the most formal, is a correct def. – Anakhand Mar 3 at 12:56 I'll start from the beginning to make sure we're using the same definitions. $$\newcommand{\A}{\mathcal{A}} \newcommand{\powset}[1]{\mathcal{P}(#1)} \newcommand{\R}{\mathbb{R}} \newcommand{\deq}{\stackrel{\scriptsize def}{=}} \newcommand{\N}{\mathbb{N}}$$ Let $$(\Omega, \A, \Pr)$$ be a probability space, defined as in the question's body: • $$\Omega$$ is the set of outcomes • $$\A \subseteq \powset{\Omega}$$ is the collection of events, a $$\sigma$$-algebra • $$\Pr\colon\ \Omega\to[0,1]$$ is the mapping outcomes to their probabilities. A random variable $$X$$ is defined as a map $$X\colon\; \Omega \to \R$$ such that, for any $$x\in\R$$, the set $$\{\omega \in \Omega \mid X(\omega) \le x\}$$ is an element of $$\A$$—that is, the probability map is defined for it. This condition is necessary in order to define the following concepts. The probability distribution function of a random variable $$X$$ is defined as the map \begin{align} F_X \colon \quad \R \ &\to\ [0, 1] \\ x\ &\mapsto\ \Pr(X \le x) \deq \Pr(X^{-1}(I_x)) \end{align} We can see that • $$\Pr(X > x) \deq \Pr(\overline{X^{-1}(I_x)}) = 1 - \Pr(X^{-1}(I_x)) = 1 - F_X(x)$$ where $$I_x \deq (-\infty, x]$$, and $$\overline{A}$$ denotes the complement of $$A$$ in $$\Omega$$. Notice that this probability is defined since $$\A$$ is a $$\sigma$$-algebra (and thus closed under set complement). • $$\Pr(X < x) \deq \Pr\left(\bigcup\limits_{n\in\N} X^{-1} \left(I_{x-\frac{1}{n}}\right)\right) = \lim_{t \to x^-} \Pr(X \le t) = \lim_{t \to x^-} F_X(t)$$ since $$X^{-1} \left(I_{x-\frac{1}{n+1}}\right) \subseteq X^{-1} \left(I_{x-\frac{1}{n}}\right)$$ for all $$n\in\N$$. Note again that that union is valid since it is countable and $$\A$$ is a $$\sigma$$-algebra. • $$\Pr(X = x) \deq \Pr(X^{-1}(I_x) \setminus A_{ where $$A_{ and $$F_X(x^-) \deq \lim\limits_{t \to x^-} F_X(t)$$. and so forth. Now, this function is sufficient to uniquely define a probability measure on $$\R$$; that is, a map \begin{align} P_X \colon \quad \mathcal{B} \subset \powset{\R} \ &\to \ [0, 1]\\ A \ &\mapsto \ \Pr(X \in A) \deq \Pr(X^{-1}(A)) \end{align} that assigns to any set $$A \in \mathcal{B}$$ the probability of the corresponding event in $$\A$$. Here $$\mathcal{B}$$ is the Borel $$\sigma$$-algebra in $$\R$$, which is, loosely speaking, the smallest $$\sigma$$-algebra containing all of the semi-intervals $$(-\infty, x]$$. The reason why $$P_X$$ is defined only on those sets is because we only required $$X^{-1}(A) \in \A$$ for the semi-intervals A=(-\infty, x]; thus $$X^{-1}(A)$$ is an element of $$\A$$ only when, loosely speaking, $$A$$ is "generated" by those semi-intervals, their complements, and countable unions/intersections thereof (according to the "rules" of a $$\sigma$$-algebra). Formally, the support of $$X$$ can be defined as the smallest closed set $$R_X \in \mathcal{B}$$ such that $$P_X(R_X) = 1$$, as Did pointed out in their comment. An alternative but equivalent definition is the one given by Stefan Hansen in his comment. The equivalence can be proven as follows: Proof Let $$R_X$$ be the smallest closed set $$R_X \in \mathcal{B}$$ such that $$P_X(R_X) = 1$$. That means that for every $$x \in \overline{R_X}$$, there exists a radius $$r\in\R_+$$ such that the open interval (or open ball in the more general case) $$(x-r, x+r)$$ is contained within $$\R \setminus R_X$$ (since $$R_X$$ is closed). That, in turn, implies that $$P_X((x-r, x + r)) = 0$$ --- otherwise, $$P_X(R_X \cup (x-r,x+r)) = P_X(R_X) + P_X((x-r, x+r)) > P_X(R_X) = 1$$, a contradiction. Conversely, suppose $$P_X((x-r, x+r)) = 0$$ for some $$x\in\R$$, $$r\in\R_+$$. Then $$(x-r, x+r) \subseteq \R \setminus R_x$$. Otherwise $$R_X' \deq R_X \setminus (x-r, x+r)$$ would be a closed set smaller than $$R_X$$ satisfying $$P_X(R_X') = 1$$. This proves $$\R \setminus R_X = \{x\in\R \mid \exists r \in \R_+\colon P_X((x-r, x+r)) = 0\}$$ Negating the predicate, one gets $$R_X = \{x\in\R \mid \forall r \in \R_+ P_X((x-r, x+r)) > 0\}$$ But more often, different definitions are given. ## Alternative definition for discrete r.v. A discrete random variable can be defined as a random variable $$X$$ such that $$X(\Omega)$$ is countable (either finite or countably infinite). Then, for a discrete random variable the support can be defined as $$R_X \deq \{x\in\R \mid \Pr(X = x) > 0\}\,.$$ Note that $$R_X \subseteq X(\Omega)$$ and thus $$R_X$$ is countable. We can prove this by proving its contrapositive: Suppose $$x \in \R$$ and $$x \notin X(\Omega)$$. We can distinguish two cases: either $$x < y$$ $$\forall y \in R_X$$, or $$x > y$$ $$\forall y \in R_X$$, or neither. Suppose $$x < y$$ $$\forall y \in R_X$$. Then $$\Pr(X = x) \le \Pr(X \le x) = \Pr(X^{-1}(I_x)) = \Pr(\emptyset) = 0$$, since $$\forall \omega\in\Omega\ X(\omega) > x$$. Ergo, $$x\notin R_X$$. The case in which $$x > y$$ $$\forall y \in X(\Omega)$$ is analogous. Suppose now $$\exists y_1, y_2 \in X(\Omega)$$ such that $$y_1 < x < y_2$$. Let $$S = \{y\in X(\Omega) \mid y < x\}$$, which is. Thus $$\sup L$$ and exists, and $$\lim_{y \to x^-} F_X(y) = F_X(\sup L)$$ since $$F_X$$ is nondecreasing and bounded above. Thus, since $$\sup L \le x$$, $$F_X(x) \ge F_X(\sup L)$$ and therefore $$\Pr(X=x) = F_X(x) - F_X(x^-) \ge F_X(\sup L) - F_X(x^-) = 0$$. ## Alternative definition for continuous r.v. Notice that for absolutely continuous random variables (that is, random variables whose distribution function is continuous on all of $$\R$$), $$\Pr(X = x) = 0$$ for all $$x\in \R$$—since $$F_X(x) = F_X(x^-)$$. But that doesn't mean that the outcomes of $$X^{-1}({x})$$ are "impossible", informally speaking. Thus, in this case, the support is defined as $$R_X = \{x \in \R \mid f_X(x) > 0\}$$ A random variable is defined as a function that maps outcomes to numerical quantities, typically real numbers, i.e., X: Ω ↦ R. The “domain of a random variable” is the set of possible outcomes. In the case of the coin, there are only two possible outcomes, namely heads or tails. Since one of these outcomes must occur, either the event that the coin lands heads or the event that the coin lands tails must have non-zero probability. For an unbiased coin p(H)=p(T)=1/2. Consider another random variable Y, be the number of heads. So toss a coin twice produce y = 0, 1 and 2 with p(Y) = ¼, ½ and ¼ respectively. The “support of a real-valued function f ” is the subset of the domain containing those elements which are not mapped to zero. Let f(x) be the p.d.f. of normal distribution with support x∈R , so that f(x)≠0. • Welcome to MSE! Please, format your posts using MathJax. – mucciolo Nov 4 '17 at 6:50
3,576
11,532
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 116, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0}
3.5625
4
CC-MAIN-2019-26
latest
en
0.885325
http://www.docstoc.com/docs/43068241/BMT-new-PowerPoint-slide-template
1,435,700,447,000,000,000
text/html
crawl-data/CC-MAIN-2015-27/segments/1435375094501.77/warc/CC-MAIN-20150627031814-00255-ip-10-179-60-89.ec2.internal.warc.gz
402,522,873
43,799
# BMT new PowerPoint slide template by tlc69476 VIEWS: 38 PAGES: 39 • pg 1 ``` Review of 0.9 m/s Vertical Wind Component Criterion for Helicopters Stephen J Rowe Managing Director BMT Fluid Mechanics Limited BMT Fluid Mechanics 44346p01.ppt 1 Helideck Environment Hazards BMT Fluid Mechanics 44346p01.ppt 2 Agenda The 0.9 m/s criterion – where does it come from? Can it be linked to a helicopter performance property? Objectives of the study Three Phase Study: – Phase 1 - Examine the HOMP data archive for evidence of performance-related hazards – Phase 2 – Evaluate violations of the 0.9m/s criterion in the BMT wind tunnel database. – Phase 3 – Correlate the BMT wind tunnel flow data with torque and pilot workload data from the HOMP archive Conclusions/Recommendations Questions/Discussion BMT Fluid Mechanics 44346p01.ppt 3 The 0.9m/s Criterion Currently CAP437 uses the following wording: – As a general rule, the vertical mean wind speed above the helideck should not exceed ± 0.9m/s (1.75 kts) for a wind speed of up to 25 m/s (48.6 kts). This equates to a wind vector slope of 2°. The Helideck Environment report linked the 0.9m/s with a hover-thrust margin of 3%. The report says: – Simple theory suggests that, in the absence of ground effect, a thrust margin of at least 3% would be required to overcome the effects of this magnitude of gust and maintain a hover over the deck in zero wind. However, it should be noted that it is unlikely that with current helideck designs a helicopter could ever experience a 0.9m/s downdraught in the absence of the beneficial effect on thrust margin of a significant horizontal wind component. BMT Fluid Mechanics 44346p01.ppt 4 The 0.9m/s Criterion Questions: – Does violation of the existing 0.9m/s vertical component in the presence of a high horizontal wind speed pose any real hazard to the helicopter? If it does, then what is the nature of the hazard, and does the existing criterion adequately protect against it? If it doesn’t, then flight restrictions currently in place on platforms in these ‘high horizontal flow’ cases should be removed. – If the application of the existing 0.9m/s vertical component criterion is not currently protecting against an identifiable hazard, then what is the nature of the real hazard (if any) in relation to vertical wind component, and how should a new criterion be framed? Should the criterion be framed more in terms of a transient phenomenon (e.g. the spatial variation in mean vertical velocity)? BMT Fluid Mechanics 44346p01.ppt 5 Study Objectives The overall objectives for the project were defined as follows: – To determine whether the existing 0.9m/s vertical flow criterion is protecting offshore helicopters against an identifiable hazard. If so, refine the magnitude of the criterion so that there is a rational link with helicopter performance. – If the existing 0.9m/s criterion cannot be linked to an identifiable hazard, then establish the nature of an associated vertical flow hazard, and develop a new flow criterion that satisfactorily protects against the hazard. Alternatively establish that a vertical flow criterion of this sort is not necessary. BMT Fluid Mechanics 44346p01.ppt 6 Three-Phase Study Phase 1 – Examine the HOMP data archive for evidence of performance-related hazards during the approach, which might be linked to a vertical velocity component. Phase 2 – Evaluate violations of the 0.9m/s criterion in the BMT wind tunnel database in order to understand their relationship with the geometric properties of the platform. Phase 3 – Correlate the BMT wind tunnel flow data with torque and pilot workload data from the HOMP archive. BMT Fluid Mechanics 44346p01.ppt 7 Phase 1 – Analysis of the HOMP Archive BMT Fluid Mechanics 44346p01.ppt 8 The HOMP data Archive HOMP is a ‘live’ system which continuously gathers and analyses flight data. We used an archive of some 32,000 flight sectors operated by Bristow Helicopters in the North Sea between the dates of 1st July 2003 and 31st October 2004. 122 different offshore helidecks had been visited by these flights. Once we had selected the helideck landings, and a there remained about 13,000 valid landings over the 16 month period that could be used in the analysis. BMT Fluid Mechanics 44346p01.ppt 9 HOMP Parameters and Analysis Maximum rotor torque – corrected for weight (and restricted weight range) – Tq2 = Tq1(W2/W1)3/2 Maximum increase in rotor torque (over a 2 second period) – Expressed as a percentage of remaining torque margin BMT Fluid Mechanics 44346p01.ppt 10 Example HOMP Results BMT Fluid Mechanics 44346p01.ppt 11 Example HOMP Results BMT Fluid Mechanics 44346p01.ppt 12 Example HOMP Results BMT Fluid Mechanics 44346p01.ppt 13 Example HOMP Results BMT Fluid Mechanics 44346p01.ppt 14 Example HOMP Results BMT Fluid Mechanics 44346p01.ppt 15 Example HOMP Results Max Torque Max Torque Increase BMT Fluid Mechanics 44346p01.ppt 16 Phase 1 – Analysis of the HOMP Archive There is a noticeable reduction in the torque increase / margin with wind speed when the wind is from open sectors, but this trend is absent for winds from the obstructed or turbulent sectors. Torque increase/margin is higher in high winds from the obstructed sectors. Put another way, high values of torque increase/margin at higher wind speeds are invariably associated with turbulent conditions. BMT Fluid Mechanics 44346p01.ppt 17 Phase 2 – BMT Wind Tunnel Archive BMT Fluid Mechanics 44346p01.ppt 18 BMT Wind Tunnel Archive BMT Wind Tunnel Archive: – 20 platforms – 62 design cases Wind Flow Criteria: Flow property Criterion Source Longitudinal mean wind speed ±5.0 m/s A BMT-derived criterion, developed (at 25 m/s wind speed) from experience of interpreting results of wind tunnel tests. Vertical mean wind speed ±0.9 m/s CAP 437, Fifth edition, August 2005 (at 25 m/s wind speed) Longitudinal turbulence standard 5.0 m/s A BMT-derived criterion, developed deviation from experience of interpreting results of wind tunnel tests. Vertical turbulence standard 2.4 m/s CAP 437, Fifth edition, August 2005 & deviation CAA Paper 2004/03, September 2004 BMT Fluid Mechanics 44346p01.ppt 19 Wind Flow Criteria Vertical turbulence rms: the undisturbed mean wind speed at helideck height at which the vertical turbulence criterion of standard deviation = 2.4 m/s is violated. The green-shaded area indicates where the turbulence criterion is not violated. Longitudinal turbulence rms: the undisturbed mean wind speed at helideck height at which a nominal longitudinal turbulence criterion of standard deviation = 5.0 m/s is violated. The green-shaded area indicates where the criterion is not violated. BMT Fluid Mechanics 44346p01.ppt 20 Wind Flow Criteria Vertical mean wind speed: the undisturbed mean wind speed at helideck height at which the vertical mean wind speed criterion of = 0.9 m/s is violated. The green-shaded area indicates where the criterion is not violated. The violation zone, shown in red, extends to a wind speed of 25 m/s to reflect the fact that the criterion value is defined for this wind speed. Longitudinal mean wind speed: the undisturbed mean wind speed at helideck height at which the nominal longitudinal mean wind speed criterion of 25 +- 5 m/s is violated. The violation zone is shown in red. BMT Fluid Mechanics 44346p01.ppt 21 Example BMT Archive Results BMT Fluid Mechanics 44346p01.ppt 22 Example BMT Archive Results BMT Fluid Mechanics 44346p01.ppt 23 Example BMT Archive Results Single or Wmean (max) Violation of Wmean (max) Platform Multiple for Nature of longitudinal Platform for obstructed size1 platform unobstructed Obstruction mean wind wind directions layout wind directions speed criterion BP Clair Large Single 2.16 0.55 Derrick Yes Dunbar Medium Single Yes without TSV) 1.45 0.59 Derrick Cormorant Large Single Derrick and flare Yes Alpha 2.43 0.66 tower Britannia Large Single 1.77 0.79 Derrick Yes Goodwyn Large Single 2.4 0.80 Derrick Yes BP Andrew Large Single 1.17 0.82 Derrick Yes Scott Large Single 1.81 0.82 Exhaust stacks Yes Buzzard Medium Single2 1.75 0.84 Exhaust stacks Yes Janice Large Single 1.13 0.85 Flare tower Yes Malampaya Large Single 1.62 0.88 Exhaust stacks Marginal Elgin PUQ Large Single 2.22 0.88 Exhaust stacks Yes Njord Large Single 2.29 1.13 Derrick Yes East Brae Large Single 1.55 1.42 Exhaust stacks Yes (Final) platform plus Bunduq Small Multiple 1.98 1.48 blockage No underneath the helideck arkham J6A Medium Single 2.13 1.58 Exhaust stacks Yes (V) Blockage PS4 Small Multiple 1.5 1.73 underneath the No helideck Crane plus blockage de Base Case Small Single 1.36 1.74 No underneath the helideck Ekofisk 2A Small Multiple 0.84 1.75 Marginal platform BMT Fluid Mechanics 44346p01.ppt 24 BMT Archive Results For large and medium single platforms, the highest vertical mean wind speed occurs consistently for unobstructed wind directions. – Only three of the fourteen large/medium single platforms; Njord, East Brae and Markham J6A, violated the 0.9m/s criterion in wind directions from the obstructed sector. – All but one violates the horizontal flow criterion. This is because large platforms generate large wake flows in the vicinity of the helideck that cause significant reductions in overall mean wind speeds. BMT Fluid Mechanics 44346p01.ppt 25 BMT Archive Results For small platforms, which usually form part of multiple platform complexes, the highest vertical speed occurs mostly for obstructed wind directions, with violation of the vertical mean wind speed criterion occurring for both unobstructed and obstructed wind directions. – However, these smaller platforms generate less severe wake flows or allow some wake recovery to take place, resulting in generally higher wind speeds at the helideck. – This is reflected in the longitudinal mean wind speed criterion, which is more often complied with for the small platforms. – Consequently the high vertical mean flow components are accompanied by high horizontal flows, which will tend to greatly enhance helicopter lift performance. The precise nature of the flow in individual cases is strongly dependent on the nature and proximity of adjacent structures, which are BMT Fluid Mechanics 44346p01.ppt 26 Phase 3 – Correlate wind tunnel and HOMP data BMT Fluid Mechanics 44346p01.ppt 27 Correlate wind tunnel and HOMP data Platforms considered: – Britannia – Clair – Cormorant Alpha – East Brae – Scott Data for ‘open’ wind direction sectors plotted BMT Fluid Mechanics 44346p01.ppt 28 Wind Tunnel v HOMP data - Examples Britannia 100 90 80 70 Maximum Torque % 60 50 40 96% values above 0.9m/s 30 >45 kn 35-45kn 25-35kn 20 15-25kn 0-15kn 10 0 0.00 0.50 1.00 1.50 2.00 2.50 3.00 Wmax (m/s) at 25m/s BMT Fluid Mechanics 44346p01.ppt 29 Wind Tunnel v HOMP data - Examples Britannia 30 25 Maximum Incr Torque % 20 15 10 >45kn 5 35-45kn 25-35kn 15-25kn <15kn 0 0.00 0.50 1.00 1.50 2.00 2.50 3.00 Wmax (m/s) at 25m/s BMT Fluid Mechanics 44346p01.ppt 30 Wind Tunnel v HOMP data - Examples Britannia 6 5 4 3 2 >45kn 1 35-45kn 25-35kn 15-25kn <15kn 0 0.00 0.50 1.00 1.50 2.00 2.50 3.00 Wmax (m/s) at 25m/s BMT Fluid Mechanics 44346p01.ppt 31 Wind Tunnel v HOMP data The lack of any correlation with pilot workload suggests that the existence of high mean vertical velocities in open wind sectors does not cause the pilot any difficulties with control. – If spatial variations in the vertical component were causing a control problem, then one would expect such variations to occur during landings in wind directions causing the greatest vertical component over the helideck, and that this in turn would result in high pilot control activity registering a higher workload, but this is certainly not seen in the data. BMT Fluid Mechanics 44346p01.ppt 32 Wind Tunnel v HOMP data The lack of any correlation with rotor maximum torque or maximum torque increase suggests that the existence of high mean vertical velocities does not cause any helicopter lift or performance problems. – Entering a region of high downdraft would be expected to result in a need for increased collective and thus increased rotor torque, but this is not seen in the data. – It is presumed that in high wind speeds the effect is not seen because the presence of high horizontal wind component means that the helicopter has a high margin of lift, and small adjustments in collective are sufficient to compensate. In low wind speeds the actual vertical component of velocity and the effect on the helicopter sink or climb rate is small. BMT Fluid Mechanics 44346p01.ppt 33 Conclusions In Phase-1 it was concluded that: – No evidence of high torque events or large torque increase events (over 2 seconds) associated with higher wind speeds and ‘open’ wind directional sectors was found. – Plots of all valid torque data for all 44 platforms for which sketches were available show that torque values are generally higher for lower wind speeds and for winds from sectors which will have turbulence caused by the upwind structure of the platform. BMT Fluid Mechanics 44346p01.ppt 34 Conclusions (cont.) In Phase-2 it was concluded that: – For large single platforms overall velocity reductions in the wake of obstructions mean that violations of the 0.9m/s criterion are most likely to occur in winds from the open sectors. In fact, for the 14 large platforms analysed from the BMT database, only three violated the 0.9m/s criterion in winds from obstructed or ‘turbulent’ directions. – For smaller platforms and multiple platform configurations, where there are less severe wake effects, violations of the 0.9m/s criterion can occur in winds from all sectors, but are likely to be accompanied by high horizontal wind components with consequent benefits to helicopter lift. BMT Fluid Mechanics 44346p01.ppt 35 Conclusions (cont.) In Phase-3 it was concluded that: – There is no evidence in the HOMP data of the occurrence of high rotor torque or torque increase values associated with high vertical flow components. – Similarly, there is no evidence of high pilot workload associated with high vertical flow components. BMT Fluid Mechanics 44346p01.ppt 36 Conclusions (cont.) Overall it is concluded that violation of the 0.9m/s vertical mean flow criterion cannot be linked to any helicopter performance (i.e. torque-related), or handling (i.e pilot The highest vertical components of flow almost always occur when the wind is from an ‘open’ direction, or from the obstructed direction on small platforms generating little wake. These are conditions when: – the horizontal component of flow is likely to be high ensuring that the helicopter has a high margin of lift, and – when turbulence levels are likely to be low, resulting in BMT Fluid Mechanics 44346p01.ppt 37 Recommendations As the criterion cannot be linked to a helicopter performance or handling hazard, it is recommended that consideration be given to removing the 0.9m/s criterion from the guidance material. It is recommended that the first step should be consultation with the helicopter operators in order to seek their views on the validity or otherwise of the criterion from an operational perspective, and to check whether there may be safety benefits implicit in the criterion that have not been evident in the study. BUT – In trying to achieve some kind of compliance with the criterion, we tend to increase the height of helidecks, increasing the air-gap to large accommodation blocks. This is likely to be good for all the various wind flow features. BMT Fluid Mechanics 44346p01.ppt 38 Review of 0.9 m/s Vertical Wind Component Criterion for Helicopters Stephen J Rowe Managing Director BMT Fluid Mechanics Limited BMT Fluid Mechanics 44346p01.ppt 39 ``` To top
4,326
18,733
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.625
3
CC-MAIN-2015-27
longest
en
0.849826
https://math.stackexchange.com/questions/550991/if-fx-cos-x-explain-without-taking-the-derivative-how-you-would-find-t/550993
1,718,564,467,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198861670.48/warc/CC-MAIN-20240616172129-20240616202129-00620.warc.gz
344,233,140
37,948
# If $f(x) = \cos x$, explain, without taking the derivative, how you would find the $f^{(99)}(x)$? My theory: derivative of $\cos x = - \sin x$ derivative of $-\sin x = -\cos x$ derivative of $-\cos x = \sin x.$ cycle occurs three times but then what do you do?? Is there a good way to solve this? • The period is 4. Divide 99 by four, and see if its remainder is 0, 1, 2, or 3. – Pedro Commented Nov 4, 2013 at 2:59 We have \begin{align*} f(x) &= \cos{x} \\ f^{(1)}(x) &= -\sin{x} \\ f^{(2)}(x) &= - \cos{x} \\ f^{(3)}(x) &= \sin{x} \\ f^{(4)}(x) &= \cos{x} \end{align*} So the cycle has length $4$. In particular, we'll see that $$f^{(8)}(x) = \left(\cos{x}\right)^{(8)} = \left(\cos{x}\right)^{(4)} = \cos{x}$$ Likewise for $12$, $16$, and so on. So now use the fact that $$99 = 4 \cdot 24 + 3$$ Notice $\quad\displaystyle \cos(x) = \frac12 (e^{ix} + e^{-ix})\quad$ and $e^{\pm i x}$ are eigenfunctions of the operator of taking derivative against $x$ with eigenvalues $\pm i$. i.e. $\quad\displaystyle\frac{d}{dx} e^{\pm ix} = \pm i e^{\pm ix}.$ We have $$f^{(99)}(x) = \frac{d^{99}}{dx^{99}} \frac12 (e^{ix} + e^{-ix}) = \frac12 ( i^{99} e^{ix} + (-i)^{99} e^{-ix} ) = \frac12 ( -i e^{ix} + i e^{-ix} ) = \sin(x)$$ • yikes!!!!!!!!!!! Commented Nov 4, 2013 at 3:14 • @Jessica It's actually not that "yikes" once you know complex numbers... Commented Nov 4, 2013 at 8:58 The cycle occurs, not three times but 4 times. Thus every 4 derivatives taken you return to $\cos(x)$. Therefore, since $99=96+3$, you need the $3$rd derivative of $\cos(x)$ which is $\sin(x)$. $f = \cos x , f' = -\sin x , f'' = -\cos x , f''' = \sin x , f^{(4)} = \cos x$ Therefore, the derivatives of sine cycle every $4$. In particular since $99 = 3 (mod 4 )$, so $f^{(99)} = f''' = \sin x$
680
1,786
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.5
4
CC-MAIN-2024-26
latest
en
0.779472
https://fxsolver.com/browse/?like=1032&p=106
1,657,181,779,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656104683708.93/warc/CC-MAIN-20220707063442-20220707093442-00423.warc.gz
309,459,503
49,926
' # Search results Found 1101 matches Bejan number (modified form) The modified form of the Bejan number, riginally proposed by Bhattacharjee and Grosshandler for momentum processes, by replacing the dynamic viscosity ... more Current gain In electronics, gain is a measure of the ability of a two-port circuit (often an amplifier) to increase the power or amplitude of a signal from the input ... more Νatural draught flow rate (draft flow) The structure which provides ventilation for hot flue gases or smoke from a boiler, stove, furnace or fireplace to the outside atmosphere causes a flow ... more Henry's law constant (dimensionless) Henry’s law states : “At a constant temperature, the amount of a given gas that dissolves in a given type and volume of liquid is directly ... more Drag equation ( for fluids) Drag (sometimes called air resistance, a type of friction, or fluid resistance, another type of friction or fluid friction) refers to forces acting ... more Worksheet 980 PPI can be calculated from knowing the diagonal size of the screen in inches and the resolution in pixels (width and height). This can be done in two steps Using the Pythagorean theorem, for 3 different screen resolutions: Diagonal Resolution - Pixels Using the Diagonal Resolution from the previous formula we calculate the PPI for 3 corresponding screen sizes : Pixels Per Inch (PPI) Results: 10.1 inch tablet screen of resolution 1024×600 : 117.5PPI 21.5 inch PC monitor of 1080p resolution : 102.46PPI 27 inch PC monitor of 1440p resolution : 108.78PPI Diagonal Resolution - Pixels Pixels per inch (PPI) (or pixels per centimeter (PPCM)) is a measurement of the pixel density ... more Power gain In electronics, gain is a measure of the ability of a two-port circuit (often an amplifier) to increase the power or amplitude of a signal from the input ... more Cost performance index (CPI) Earned value management (EVM), earned value project management, or earned value performance management (... more Oblique Shock An oblique shock wave, unlike a normal shock, is inclined with respect to the incident upstream flow direction. It will occur when a supersonic flow ... more ...can't find what you're looking for? Create a new formula ### Search criteria: Similar to formula Category
515
2,303
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.875
3
CC-MAIN-2022-27
latest
en
0.852302
https://www.jiskha.com/display.cgi?id=1257827663
1,501,269,268,000,000,000
text/html
crawl-data/CC-MAIN-2017-30/segments/1500550977093.96/warc/CC-MAIN-20170728183650-20170728203650-00464.warc.gz
798,745,373
4,196
# Algebra 2 posted by . 4Xsquared+7x+3 find the quadratic equation by factoring to solve. Show work • Algebra 2 - (4x + 3)(x + 1) one step; no work to show ## Similar Questions 1. ### Math - Algebra A. Solve the following quadratic equations. Make sure to show all your work. Do not use any method (e.g. factoring, completing the square, quadratic formula, graphing) more than twice. Use the graphing method at least once. 1. 3x+2 … 3. ### algebra II I'm quite clueless as to how to solve this. I know the quadratic equation but am not sure how to get this equation into the format to plug in to the quadratic... Solve the quadratic equation using the formula [1/(1+x)]-[1/(3-x)]=(6/35) … 4. ### Chopsticks solve the equation for x using the quadratic formula: 3x^2 + 8x + 1 = 0 Work: I tried factoring it, but I can't do it. (3x + ? 5. ### Algebra 4xsquared - 7x - 2 = 0 Solve the quadratic function 6. ### math algebra Part 1: Which part of the quadratic formula tells you whether the quadratic equation can be solved by factoring and why? 7. ### algebra 0. Tell your classmates about your pet to set up the scenario for fencing a rectangle in your yard. 1. You MUST start by choosing an amount of area that you want to fence in for your pet and this MUST be stated first. What amount of … 8. ### algebra Write the equation in quadratic form and solve it by factoring. x2(8x + 65) 63 = x 9. ### Algebra Solve the equation. Use factoring or the quadratic formula, whichever is appropriate. Try factoring first. If you have any difficulty factoring, then go right to the quadratic formula. (Enter your answers as a comma-separated list.) … 10. ### Algebra II Need help please. Thank you. 1. Determine whether the function has a maximum or minimum. State that value. f(x)= -2x^2-4x 2. Determine what c would in order to complete the square. x^2-5x+c 3. Solve the quadratic by factoring. 3x^2-16x+5=0 … More Similar Questions Post a New Question
536
1,962
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.765625
4
CC-MAIN-2017-30
longest
en
0.878288
https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.18/share/doc/Macaulay2/RationalMaps/html/_inverse__Of__Map.html
1,632,006,210,000,000,000
application/xhtml+xml
crawl-data/CC-MAIN-2021-39/segments/1631780056578.5/warc/CC-MAIN-20210918214805-20210919004805-00526.warc.gz
295,402,523
5,595
# inverseOfMap -- Computes the inverse map of a given birational map between projective varieties. Returns an error if the map is not birational onto its image. ## Synopsis • Usage: f = inverseOfMap(I, J, L) f = inverseOfMap(R, S, L) f = inverseOfMap(g) • Inputs: • I, an ideal, Defining ideal of source • J, an ideal, Defining ideal of target • L, a list, List of polynomials that define the coordinates of your birational map • g, , Your birational map $f : X \to Y$. • Optional inputs: • AssumeDominant => ..., default value false, If true, certain functions assume that the map from X to Y is dominant. • CheckBirational => ..., default value true, If true, functions will check birationality. • HybridLimit => ..., default value 15, An option to control HybridStrategy • MinorsCount => ..., default value null, An option controlling the behavior of isBirational and inverseOfMap (and other functions which call those). • QuickRank => ..., default value true, An option for computing how rank is computed • Strategy => ..., default value HybridStrategy, Determines the desired Strategy in each function. • Verbose => ..., default value true, generate informative output • Outputs: • f, , Inverse function of your birational map, $f(X) \to X$. ## Description Given a map $f : X \to Y$, this finds the inverse of your birational map $f(X) \to X$ (if it is birational onto its image). The target and source must be varieties, in particular their defining ideals must be prime. If AssumeDominant is set to true (default is false) then it assumes that the map of varieties is dominant, otherwise the function will compute the image by finding the kernel of $f$. The Strategy option can be set to HybridStrategy (default), SimisStrategy, ReesStrategy, or SaturationStrategy. Note SimisStrategy will never terminate for non-birational maps. If CheckBirational is set to false (default is true), then no check for birationality will be done. If it is set to true and the map is not birational, an error will be thrown if you are not using SimisStrategy. The option HybridLimit can weight the HybridStrategy between ReesStrategy and SimisStrategy, the default value is 15 and increasing it will weight towards SimisStrategy. i1 : R = ZZ/7[x,y,z]; i2 : S = ZZ/7[a,b,c]; i3 : h = map(R, S, {y*z, x*z, x*y}); o3 : RingMap R <--- S i4 : inverseOfMap (h, Verbose=>false) o4 = map (S, R, {-b*c, -a*c, -a*b}) o4 : RingMap S <--- R Notice that the leading minus signs do not change the projective map. Next let us compute the inverse of the blowup of $P^2$ at a point. i5 : P5 = QQ[a..f]; i6 : M = matrix{{a,b,c},{d,e,f}}; 2 3 o6 : Matrix P5 <--- P5 i7 : blowUpSubvar = P5/(minors(2, M)+ideal(b - d)); i8 : h = map(blowUpSubvar, QQ[x,y,z],{a, b, c}); o8 : RingMap blowUpSubvar <--- QQ[x..z] i9 : g = inverseOfMap(h, Verbose=>false) 2 2 3 2 3 4 3 o9 = map (QQ[x..z], blowUpSubvar, {x y , x*y , x*y z, x*y , y , y z}) o9 : RingMap QQ[x..z] <--- blowUpSubvar i10 : baseLocusOfMap(g) o10 = ideal (y, x) o10 : Ideal of QQ[x..z] i11 : baseLocusOfMap(h) o11 = ideal 1 o11 : Ideal of blowUpSubvar The next example, is a Birational map on $\mathbb{P}^4$. i12 : Q=QQ[x,y,z,t,u]; i13 : phi=map(Q,Q,matrix{{x^5,y*x^4,z*x^4+y^5,t*x^4+z^5,u*x^4+t^5}}); o13 : RingMap Q <--- Q i14 : time inverseOfMap(phi,CheckBirational=>false) Starting inverseOfMapSimis(SimisStrategy or HybridStrategy) inverseOfMapSimis: About to find the image of the map. If you know the image, you may want to use the AssumeDominant option if this is slow. inverseOfMapSimis: Found the image of the map. inverseOfMapSimis: About to compute partial Groebner basis of rees ideal up to degree {1, 1}. inverseOfMapSimis: About to check rank, if this is very slow, you may try turning QuickRank=>false. inverseOfMapSimis: About to compute partial Groebner basis of rees ideal up to degree {1, 2}. inverseOfMapSimis: About to check rank, if this is very slow, you may try turning QuickRank=>false. inverseOfMapSimis: About to compute partial Groebner basis of rees ideal up to degree {1, 4}. inverseOfMapSimis: About to check rank, if this is very slow, you may try turning QuickRank=>false. inverseOfMapSimis: About to compute partial Groebner basis of rees ideal up to degree {1, 7}. inverseOfMapSimis: About to check rank, if this is very slow, you may try turning QuickRank=>false. inverseOfMapSimis: About to compute partial Groebner basis of rees ideal up to degree {1, 11}. inverseOfMapSimis: About to check rank, if this is very slow, you may try turning QuickRank=>false. inverseOfMapSimis: About to compute partial Groebner basis of rees ideal up to degree {1, 16}. inverseOfMapSimis: We give up. Using the previous computations, we compute the whole Groebner basis of the rees ideal. Increase HybridLimit and rerun to avoid this. inverseOfMapSimis: Found Jacobian dual matrix (or a weak form of it), it has 5 columns and about 20 rows. inverseOfMapSimis: Looking for a nonzero minor. If this fails, you may increase the attempts with MinorsCount => # getSubmatrixOfRank: Trying to find a submatrix of rank at least: 4 with attempts = 10. DetStrategy=>Rank internalChooseMinor: Choosing GRevLexSmallest getSubmatrixOfRank: found one, in 1 attempts inverseOfMapSimis: We found a nonzero minor. -- used 0.363278 seconds 125 124 120 5 124 100 25 104 20 108 15 2 112 10 3 116 5 4 120 5 124 125 4 120 8 115 2 12 110 3 16 105 4 20 100 5 24 95 6 28 90 7 32 85 8 36 80 9 40 75 10 44 70 11 48 65 12 52 60 13 56 55 14 60 50 15 64 45 16 68 40 17 72 35 18 76 30 19 80 25 20 84 20 21 88 15 22 92 10 23 96 5 24 100 25 24 100 28 95 32 90 2 36 85 3 40 80 4 44 75 5 48 70 6 52 65 7 56 60 8 60 55 9 64 50 10 68 45 11 72 40 12 76 35 13 80 30 14 84 25 15 88 20 16 92 15 17 96 10 18 100 5 19 104 20 48 75 2 52 70 2 56 65 2 2 60 60 3 2 64 55 4 2 68 50 5 2 72 45 6 2 76 40 7 2 80 35 8 2 84 30 9 2 88 25 10 2 92 20 11 2 96 15 12 2 100 10 13 2 104 5 14 2 108 15 2 72 50 3 76 45 3 80 40 2 3 84 35 3 3 88 30 4 3 92 25 5 3 96 20 6 3 100 15 7 3 104 10 8 3 108 5 9 3 112 10 3 96 25 4 100 20 4 104 15 2 4 108 10 3 4 112 5 4 4 116 5 4 120 5 124 o14 = map (Q, Q, {x , x y, - x y + x z, x y - 5x y z + 10x y z - 10x y z + 5x y z - x z + x t, - y + 25x y z - 300x y z + 2300x y z - 12650x y z + 53130x y z - 177100x y z + 480700x y z - 1081575x y z + 2042975x y z - 3268760x y z + 4457400x y z - 5200300x y z + 5200300x y z - 4457400x y z + 3268760x y z - 2042975x y z + 1081575x y z - 480700x y z + 177100x y z - 53130x y z + 12650x y z - 2300x y z + 300x y z - 25x y z + x z - 5x y t + 100x y z*t - 950x y z t + 5700x y z t - 24225x y z t + 77520x y z t - 193800x y z t + 387600x y z t - 629850x y z t + 839800x y z t - 923780x y z t + 839800x y z t - 629850x y z t + 387600x y z t - 193800x y z t + 77520x y z t - 24225x y z t + 5700x y z t - 950x y z t + 100x y z t - 5x z t - 10x y t + 150x y z*t - 1050x y z t + 4550x y z t - 13650x y z t + 30030x y z t - 50050x y z t + 64350x y z t - 64350x y z t + 50050x y z t - 30030x y z t + 13650x y z t - 4550x y z t + 1050x y z t - 150x y z t + 10x z t - 10x y t + 100x y z*t - 450x y z t + 1200x y z t - 2100x y z t + 2520x y z t - 2100x y z t + 1200x y z t - 450x y z t + 100x y z t - 10x z t - 5x y t + 25x y z*t - 50x y z t + 50x y z t - 25x y z t + 5x z t - x t + x u}) o14 : RingMap Q <--- Q Finally, we do an example of plane Cremona maps whose source is not minimally embedded. i15 : R=QQ[x,y,z,t]/(z-2*t); i16 : F = {y*z*(x-z)*(x-2*y), x*z*(y-z)*(x-2*y),y*x*(y-z)*(x-z)}; i17 : S = QQ[u,v,w]; i18 : h = map(R, S, F); o18 : RingMap R <--- S i19 : g = inverseOfMap h Starting inverseOfMapSimis(SimisStrategy or HybridStrategy) inverseOfMapSimis: About to find the image of the map. If you know the image, you may want to use the AssumeDominant option if this is slow. inverseOfMapSimis: Found the image of the map. inverseOfMapSimis: About to compute partial Groebner basis of rees ideal up to degree {1, 1}. inverseOfMapSimis: About to check rank, if this is very slow, you may try turning QuickRank=>false. inverseOfMapSimis: About to compute partial Groebner basis of rees ideal up to degree {1, 2}. inverseOfMapSimis: About to check rank, if this is very slow, you may try turning QuickRank=>false. inverseOfMapSimis: We computed enough of the Groebner basis. inverseOfMapSimis: Found Jacobian dual matrix (or a weak form of it), it has 3 columns and about 4 rows. inverseOfMapSimis: Looking for a nonzero minor. If this fails, you may increase the attempts with MinorsCount => # getSubmatrixOfRank: Trying to find a submatrix of rank at least: 2 with attempts = 10. DetStrategy=>Rank internalChooseMinor: Choosing LexLargestTerm getSubmatrixOfRank: found one, in 1 attempts inverseOfMapSimis: We found a nonzero minor. 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 o19 = map (S, R, {2u v - 8u v*w + 6u*v w + 8u w - 12u*v*w + 4v w , 2u v - 6u v*w + 4u*v w + 4u w - 6u*v*w + 2v w , 2u v - 6u v*w + 6u*v w + 4u w - 8u*v*w + 4v w , u v - 3u v*w + 3u*v w + 2u w - 4u*v*w + 2v w }) o19 : RingMap S <--- R i20 : use S; i21 : (g*h)(u)*v==(g*h)(v)*u o21 = true i22 : (g*h)(u)*w==(g*h)(w)*u o22 = true i23 : (g*h)(v)*w==(g*h)(w)*v o23 = true Notice the last checks are just verifying that the composition g*h agrees with the identity. ## Caveat Only works for irreducible varieties right now. Also see the function inverseMap in the package Cremona, which for certain types of maps from projective space is sometimes faster. Additionally, also compare with the function invertBirationalMap of the package Parametrization. ## Ways to use inverseOfMap : • "inverseOfMap(Ideal,Ideal,BasicList)" • "inverseOfMap(Ring,Ring,BasicList)" • "inverseOfMap(RingMap)" ## For the programmer The object inverseOfMap is .
3,573
9,720
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.640625
3
CC-MAIN-2021-39
latest
en
0.640339
https://fr.scribd.com/document/235081544/Highways-Platooning-Using-a-Flatbed-Tow-Truck-Model
1,571,428,041,000,000,000
text/html
crawl-data/CC-MAIN-2019-43/segments/1570986684425.36/warc/CC-MAIN-20191018181458-20191018204958-00075.warc.gz
518,098,540
71,514
Vous êtes sur la page 1sur 22 # Highways platooning using a ## flatbed tow truck model Alan ALI Gatan GARCIA Philippe MARTINET Ecole centrale de Nantes IRCCYN 1 Index I. INTRODUCTION Context Why platooning Longitudinal control policies. II. MODELING : Longitudinal model Platoon model III. CONTROL Control objectives. Longitudinal control IV. STABILITY AND SAFETY Stability Robustness to actuation and sensing lag Discussion V. SIMULATION Stability Comparison with CTH Robustness VI. CONCLUSION AND PERSPECTIVES 2 I. INTRODUCTION Context: Highways platoons, Longitudinal dynamics, Taking into account a simplified engine model, Using the modified CTH control law [1], The flatbed tow model is proposed, Stability and safety conditions is found, Simulation using TORCS, A platoon of 10 vehicles with L= 1 m, Scenarios: platoon creation changing the speed emergency stop 3 I. INTRODUCTION Why platooning: Increases traffic density. Increases safety: Collision (Small relative velocity). No human factor. Reaction time is small. decreases fuel consumption. decreases driver tiredness 4 I. INTRODUCTION Global Control and Local Control : Sophisticated sensors (needed, Not needed). Adaptation in the environment (Maybe, Not needed) Communication system (need very reliable, not needed) Trajectory tracking and inter distance keeping (accurate , Not very accurate) The car is totally autonomous (No, Yes). 5 I. INTRODUCTION 6 Variable inter-vehicle distances (according to vehicles dynamic): Distances are proportional to velocity in Constant Time Headway(CTH) Low traffic density. Stable without communication. The cars can work autonomously. Constants inter-vehicle distances: High traffic density. The communication between vehicles is mandatory. i hv L X L X II. MODELING (LONGITUDINAL MODEL) Newtons law, The model of the engine, Applying the exact linearization system, linear system can be obtained: 7 Aero dynamical force Gravitationnel force Rolling resistance II. MODELING (PLATOON MODEL) Unidirectional spring damper model, With a virtual truck running at a speed V, Equivalent to flatbed tow truck model [3]: 8 Classical CTH Modified CTH III. CONTROL Control Objectives. Keep a desired distance between the vehicles. Make the vehicles move at the same speed. Ensure vehicles and platoon stability. Ensure vehicles and platoon safety. Increase traffic density. Ensure the stability and safety even in case of : Entire communication loss between vehicles. Existence of actuating and sensing lags. 9 III. CONTROL (Modified CTH) Modified CTH Control law: Spacing is proportional to the difference between the velocity of the vehicle and a shared velocity 10 Classical CTH Modified CTH IV. STABILITY AND SAFETY 11 String stability definition. The error must not increase when it propagate through the platoon. Spacing error propagation function: is the impulse response of the propagation of spacing error. A sufficient condition for the stability of the platoon is: ) (s G i ) ( 1 s e i ) (s e i ) (t g i IV. STABILITY AND SAFETY Spacing error propagation function: Stability conditions: 12 First spacing error propagation function: IV. STABILITY AND SAFETY Safety conditions: The following condition will ensure the safety for all accelerations: 13 10 identical cars, Move on straight track, Comparison with the classical CTH, Check longitudinal string stability during : stage A: Platoon Creation (zero to 40km/h), stage B: Changing the speed (40km/h to 140km/h) stage C: Emergency stop (Hard Braking) Check safety stage C: Emergency stop at high speed (140km/h) by applying maximum allowed deceleration. V. SIMULATIONS 14 V. SIMULATIONS Maximum acceleration and deceleration 5 m/s2 exceeds the comfort acceleration, exceeds the ability of many vehicles, 15 V. SIMULATIONS Comparison with the classical CTH 16 Classical CTH Modified CTH V. SIMULATIONS Stage A:Platoon Creation (zero to 40km/h), 17 V. SIMULATIONS stage B: Changing the speed (40km/h to 140km/h) The platoon is stable, 18 V. SIMULATIONS stage C: Emergency stop (Hard Braking) The platoon is stable, Safe 19 VI. CONCLUSION and PERSPECTIVES The control of highways platoons is addressed, Longitudinal control: Using modified CTH control law, Taking into account a simplified engine model. The flatbed tow model is proposed, We have enhanced the work presented in [1], [2]: Reducing the desired inter-vehicle distance to 1 m, keeping the string stability, Ensuring the safety. Simulations were done in following scenarios: platoon creation changing the speed emergency stop This work opens the door to move CTH policy from research to real applications. 20 VI. CONCLUSION and PERSPECTIVES Studding safety in more critical scenarios: With full communication, Without communication, Follower hard braking : With full communication, Without communication, Communication delays and lags effects, Real experiments. 21 References [1]- Ali A., Garcia G., and Martinet P., Minimizing the inter- vehicle distances of the time headway policy for platoons control in highways, 10 th International Conference on Informatics in Control, Automation and Robotics (ICINCO13), pp. 417-424. SciTePress, Reykjavik, Iceland, July 29-31, 2013. [2]-Ali A., Garcia G., and Martinet P., Minimizing the inter- vehicle distances of the time headway policy for urban platoon control with decoupled longitudinal and lateral control, 16 th International IEEE Conference on Intelligent Transportation Systems - (ITSC), pp. 1805-1810,The Hague,The Netherlands, 6-9 Oct. 2013. [3]-Ali A., Garcia G., and Martinet P., The flatbed platoon towing model for safe and dense platooning on highways, submitted for publication. 22
1,408
5,681
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.609375
3
CC-MAIN-2019-43
latest
en
0.755269
https://solvedlib.com/n/4a7-guon-quot-diox-x27-0ft-trtrigonometric-identities-and,1134172
1,696,459,087,000,000,000
text/html
crawl-data/CC-MAIN-2023-40/segments/1695233511424.48/warc/CC-MAIN-20231004220037-20231005010037-00358.warc.gz
575,262,372
19,311
# 4A7 (Guon "Diox' 0ft TrTrIgonomeTRIC IdENTiTIES AND EquATIONS Verifying ndonometric IdentityComplete the prcof of the identity by chcosing the Rule ###### Question: 4A7 ( Guon " Diox' 0ft Tr TrIgonomeTRIC IdENTiTIES AND EquATIONS Verifying ndonometric Identity Complete the prcof of the identity by chcosing the Rule that Jusbfies each step; (cser - IJsc" = csc detalled description the Rule mequ ~Ced corre sponding question mark; Slemeat ('*-)e * 0 " Rulc Rulcr Rul > Alncou Reciprocal Duotient Pythaocrcan Tuur Oad/Exen inananon Chock Et #### Similar Solved Questions ##### How and why should the process of risk analysis be different for an investor who was... how and why should the process of risk analysis be different for an investor who was simply trying to determine which stocks to add to a well- diversifieed portfolio?... please help! please write clearly and in detail :) 6. [20 points) In a trial of 175 patients who received 10-mg doses of a drug daily, 49 reported headaches as a side effect. Use this information to complete parts (a) through (c) below. a. Obtain a point estimate for the population proportion of... ##### Point charges q1=+2.00μC and q2=−2.00μC are placed at adjacent corners of a square for which the... Point charges q1=+2.00μC and q2=−2.00μC are placed at adjacent corners of a square for which the length of each side is 2.00 cm . Point a is at the center of the square, and point b is at the empty corner closest to q2. Take the electric potential to be zero at a distance far from both c... ##### Point) Let C be the curve which is the unlon of two line segments, the first galng Irom (0, 0) to (-3, 3) and the second galng from (-3,3) to (-6, 01: Campute the Ilne Integral 3dy 3dx. point) Let C be the curve which is the unlon of two line segments, the first galng Irom (0, 0) to (-3, 3) and the second galng from (-3,3) to (-6, 01: Campute the Ilne Integral 3dy 3dx.... ##### 1 1 1 Icnchtnt tnc Janula HAI cxpre*L HO[SS-Jox Headen 1 ottle 1 1 1 1 1 Icnchtnt tnc Janula HAI cxpre*L HO[SS-Jox Headen 1 ottle 1 1... King City Specialty Bikes (KCSB) produces high-end bicycles. The costs to manufacture and market the bicycles at the company's volume of 2,000 units per month are shown in the following table. Unit manufacturing costs Variable costs $250 Fixed overhead 123 Total... 5 answers ##### Let be the shaded region in the first quadrant enclosed by the y-axis and the graphs of > = 4-x? and y = ] + 2sin in the figure at the right Xas shown (a) Find the area of R_ (6) Findthe volume of thesolid when generated Ris revolved aboutthe x-axis. (C) Find the volre of the solid whose base is R and whose cross sections perpendicular tothe Xaxis are squares. Let be the shaded region in the first quadrant enclosed by the y-axis and the graphs of > = 4-x? and y = ] + 2sin in the figure at the right Xas shown (a) Find the area of R_ (6) Findthe volume of thesolid when generated Ris revolved aboutthe x-axis. (C) Find the volre of the solid whose base is... 1 answer ##### Can the new transconjugant strain serve as a donor? give two reasons. can the new transconjugant strain serve as a donor? give two reasons.... 5 answers ##### Graph the following function. Then use geometry (not Riemann sums) to find the area and the net area of the region describedThe region between the graph ofy= Ixland the X-axis for 5_x5.Select the corect graph beloxr0 A:The area of the region is(Type an integer or simplified fraction )The net area of the region is(Type an integer or simplified fraction ) Graph the following function. Then use geometry (not Riemann sums) to find the area and the net area of the region described The region between the graph ofy= Ixl and the X-axis for 5_x5. Select the corect graph beloxr 0 A: The area of the region is (Type an integer or simplified fraction ) The net ... 1 answer #####$1^{\circ}, 0^{0}, \infty^{0}$forms Evaluate the following limits or explain why they do not exist. Check your results by graphing. $\lim _{z \rightarrow \infty}\left(1+\frac{10}{z^{2}}\right)^{z^{2}}$$1^{\circ}, 0^{0}, \infty^{0}$forms Evaluate the following limits or explain why they do not exist. Check your results by graphing. $\lim _{z \rightarrow \infty}\left(1+\frac{10}{z^{2}}\right)^{z^{2}}$... 5 answers ##### (a) find all real zeros of the polynomial function, (b) determine the multiplicity of each zero, (c) determine the maximum possible number of turning points of the graph of the function, and (d) use a graphing utility to graph the function and verify your answers.$f(x)= rac{1}{3} x^{2}+ rac{1}{3} x- rac{2}{3}$ (a) find all real zeros of the polynomial function, (b) determine the multiplicity of each zero, (c) determine the maximum possible number of turning points of the graph of the function, and (d) use a graphing utility to graph the function and verify your answers. $f(x)=\frac{1}{3} x^{2}+\frac{1}{3... 5 answers ##### Use the Pythagorean theorem to find the missing side of the right triangle with legs a and b and hypotenuse c . Then calculate the perimeter. Approximate values to the nearest tenth when appropriate.a=6 meters, c=15 meters Use the Pythagorean theorem to find the missing side of the right triangle with legs a and b and hypotenuse c . Then calculate the perimeter. Approximate values to the nearest tenth when appropriate. a=6 meters, c=15 meters... 5 answers ##### Question 15 ptsWhat is the growing degree days of a crop that has grown for period of 30 days, each day had 30PC days and 10PC nights? The base temperature is 10PC. Answer:300 Question 1 5 pts What is the growing degree days of a crop that has grown for period of 30 days, each day had 30PC days and 10PC nights? The base temperature is 10PC. Answer: 300... 5 answers ##### Use the method of Lagrange multipliers to find the minimum and maximum values of the function subject to the given constraint: (If an answer does not exist, enter DNE: )fx,y) Bx + 9y; x2+y2 =4TutoriaAdditional Materials~Book[-/2 Points]DETAILSOSCALC1 14.8.358MY NOTESASK YOUR TEACHERPRACTICE ANOTHERUse the method of Lagrange multipliers to find the maximum and minimum values of the function subject to the given constraints.flx,y) x2yx2 + 2y2Maximum Yailemlrlttum value Use the method of Lagrange multipliers to find the minimum and maximum values of the function subject to the given constraint: (If an answer does not exist, enter DNE: ) fx,y) Bx + 9y; x2+y2 =4 Tutoria Additional Materials ~Book [-/2 Points] DETAILS OSCALC1 14.8.358 MY NOTES ASK YOUR TEACHER PRACTIC... 5 answers ##### If 3 = logz(x + 3) then x =Select one:a 0b. 5C, 8d_ 8 If 3 = logz(x + 3) then x = Select one: a 0 b. 5 C, 8 d_ 8... 5 answers ##### 3 . Is your total fat iutake within a healthy range? Are YOu eating the right types 0f dietary fats? pts) First_give an overview of_your dietary fat intake_using data from your Intake vs. Goals, Fat Breakdown; and Macronutrient Ranges summaries (all part of your 3-Day Average report). What was your total average fat intake in grams, and how does this compare to your DRI for fat? Include the data in your answer What % of your calories are from fat; and how does this compare to the recommended ran 3 . Is your total fat iutake within a healthy range? Are YOu eating the right types 0f dietary fats? pts) First_give an overview of_your dietary fat intake_using data from your Intake vs. Goals, Fat Breakdown; and Macronutrient Ranges summaries (all part of your 3-Day Average report). What was your ... 1 answer ##### Find the Laplace transform of the periodic function below. f(t) = { 8 if 0 <... Find the Laplace transform of the periodic function below. f(t) = { 8 if 0 < t < 1 0 if i<t<2 ; f(t + 2) = f(t) f(0) 2 3 -4 -6 7 Q... 1 answer ##### What is the pH of a 0.231 M solution of hydrazine, N2H4? The pKb of hydrazine... What is the pH of a 0.231 M solution of hydrazine, N2H4? The pKb of hydrazine is 6.02.... 1 answer ##### B-F: Based on the Lewis structure given the formal charge on the central boron atom is... B-F: Based on the Lewis structure given the formal charge on the central boron atom is EF!... 1 answer ##### 11. The common stock of Andy's Sporting Goods sells for 25,40 a share. The company recently... 11. The common stock of Andy's Sporting Goods sells for 25,40 a share. The company recently paid their annual dividend of 1.30 per share and expects to increase this dividend by 3 percent annually. What is the rate of return on this stock? a. 5.12 percent b. 5.27 percent c. 8.12 percent d. 8.2... 1 answer ##### Colligative Properties: Freezing Point Depression Lauris acid is added to a test tube and heated. The... Colligative Properties: Freezing Point Depression Lauris acid is added to a test tube and heated. The test tube is placed in water to freeze the lauric acid. After measuring the temperature, an unknown is added. The lauric acid and unknown are heated to a liquid and placed in cold water(Part B). In ... 1 answer ##### B. CO32- c. H3O+ d. None of the above species is produced upon the dissolving of... b. CO32- c. H3O+ d. None of the above species is produced upon the dissolving of ICOS! 1 a, b, and c are produced upon the dissolving of Hcos in water - Which of the following is the pH of a 0.520 M NaOH solution? a. 13.9 b. 13.7 10 - 125 8.53 14 .307 d. 2.84 = 13.6. e. None of the above... 1 answer ##### Discuss how nursing impacts, and is impacted by, the quality and safety agenda emphasized in the... Discuss how nursing impacts, and is impacted by, the quality and safety agenda emphasized in the ACA... 5 answers ##### Use the Geometrie Series t0 write power series for36r" (I+*)Kx) Use the Geometrie Series t0 write power series for 36r" (I+*) Kx)... 1 answer ##### TB TF Qu. 03-118 Closing entries move all... 15 Closing entries move all current year data... TB TF Qu. 03-118 Closing entries move all... 15 Closing entries move all current year data for revenues, expenses, and dividends into the retained earnings account. 75 points True or False 2 0415-43 True False... 1 answer ##### Facebook 9:26 AM 99%- Done 6 of 6 Name: 2018 Physics with Calculus 15. 18 pts)... Facebook 9:26 AM 99%- Done 6 of 6 Name: 2018 Physics with Calculus 15. 18 pts) A 4.00 kg block moves up a 300 incline at a constant speed as shown in the figure. The coefficient of kinetic friction between the block and the incine is 0.350 A person pushes the block with a horizontal force F, as show... 1 answer ##### An artist builds a giant kinetic sculpture: a huge, circular, solid metal ring (radius r) mounted... An artist builds a giant kinetic sculpture: a huge, circular, solid metal ring (radius r) mounted vertically on a swivel, with a motor that rotates it within the Earth's magnetic field. The sculpture is located in Hawaii where is constant and uniform (as shown above): parallel to the ground, wit... 1 answer ##### Question 12 3 pts Pine Company developed the following data for the current year: Beginning work... Question 12 3 pts Pine Company developed the following data for the current year: Beginning work in process inventory 250,000 Cost of goods manufactured 375,000 Total manufacturing costs added to wip inventory 550,000 Pine Company's ending work in process inventory is 675,000 425.000 75,000 ... 5 answers ##### 2. (Modified from Exercise 10.2 in BDA3) Suppose that you are interested in inference for the univariate parameter 0. You draw N 100 independent samples from the posterior density 7(Olx), and find that the density of the samples is approximately normal with mean 8 and standard deviation 4. [Hint: Refer to the material in Section 10.5 of BDA3.]If you use the average of 100 draws of 0 to estimate the posterior mean E(Olx) , what is the approximate standard deviation of this estimate due sampling 2. (Modified from Exercise 10.2 in BDA3) Suppose that you are interested in inference for the univariate parameter 0. You draw N 100 independent samples from the posterior density 7(Olx), and find that the density of the samples is approximately normal with mean 8 and standard deviation 4. [Hint: R... 5 answers ##### Question 6 of 9Click to_see additionalinsttuctions Ler X and Y be Bernoulli random variables such that X+Y = 1 Furthermore; the success parameter of X has Ihe value 0.3, success occurS Wf and only if X =Based or (he above Information; compute the covariance of X and Y Cov(x Question 6 of 9 Click to_see additionalinsttuctions Ler X and Y be Bernoulli random variables such that X+Y = 1 Furthermore; the success parameter of X has Ihe value 0.3, success occurS Wf and only if X = Based or (he above Information; compute the covariance of X and Y Cov(x... 5 answers ##### EQUATIONS AND INEQUALITIES Solving word problem using quadratic equation with irrational:.model rocket is launched with an initial upward velocity of 50 mls_ The rocket' height h (in meters) after seconds is given by the following_h= S0t - St?Find all values of for which the rocket's height is 30 meters:Round your answer(s) to the nearest hundredth_ (If there is more than one answer; use the Or" button:)secondsDoDground EQUATIONS AND INEQUALITIES Solving word problem using quadratic equation with irrational:. model rocket is launched with an initial upward velocity of 50 mls_ The rocket' height h (in meters) after seconds is given by the following_ h= S0t - St? Find all values of for which the rocket's h... 1 answer ##### Mirada Company manufactures handheld calculators and has the following information available for the month of July:... Mirada Company manufactures handheld calculators and has the following information available for the month of July: Work in process, May 1 (100% complete for July, 25% for conversion) Direct materials Conversion cost 126,000 units 240,000 394,000 Number of units started 220,000 units July costs ... 1 answer ##### Make use of the known graph of y=\ln x to sketch the graphs of the equations.$y=\ln \sqrt{x}$Make use of the known graph of y=\ln x to sketch the graphs of the equations.$y=\ln \sqrt{x}$$... 5 answers ##### Find sin(a) and cos(8), tan(a) and cot(B), and sec(a) and csc( B)sin(a) and cos(8)(b) tan(a) and cot(B)sec(a) and csc(B) Find sin(a) and cos(8), tan(a) and cot(B), and sec(a) and csc( B) sin(a) and cos(8) (b) tan(a) and cot(B) sec(a) and csc(B)... 5 answers ##### St. Patrick's Day Party: Part 1 On March 17, the teachers will celebrate Patrick's Day with party after school: The teachers will dress in green, play games and eat green refreshments The refreshments will include veggies; cupcakes, cookies_ and green punch_ Ms. Hasty is responsible for bringing the punch t0 the party: The punch recipe calls for % of Liter of ginger ale soda and 2 gallons of green Kool-Aid, At Walmart; Ms. Hasty found six different brands of ginger ale on sale Which b St. Patrick's Day Party: Part 1 On March 17, the teachers will celebrate Patrick's Day with party after school: The teachers will dress in green, play games and eat green refreshments The refreshments will include veggies; cupcakes, cookies_ and green punch_ Ms. Hasty is responsible for b... 1 answer ##### 1. a) Solve the triangle if B = 123.6°, Y=21.9°, a = 108 cm. b) Find... 1. a) Solve the triangle if B = 123.6°, Y=21.9°, a = 108 cm. b) Find the area of the triangle from part a).... 5 answers ##### A student researcher compares the ages of cars owned by studentsand cars owned by faculty at a local state college. A sample of119119 cars owned by students had an average age of 8.928.92 years.A sample of 137137 cars owned by faculty had an average age of8.818.81 years. Assume the standard deviation is known to be3.193.19 years for age of cars owned by students and 2.822.82 yearsfor age of cars owned by faculty. Determine the 95%95% confidenceinterval for the difference between the true mean ag A student researcher compares the ages of cars owned by students and cars owned by faculty at a local state college. A sample of 119119 cars owned by students had an average age of 8.928.92 years. A sample of 137137 cars owned by faculty had an average age of 8.818.81 years. Assume the standard devi... 1 answer ##### On April 2 a corporation purchased for cash 7,000 shares of its own$11 par common... On April 2 a corporation purchased for cash 7,000 shares of its own $11 par common stock at$20 a share. It sold 4,000 of the treasury shares at $32 a share on June 10. The remaining 3,000 shares were sold on November 10 for$25 a share. Required: (a) Joumalize the entries to record the purchase (Tr...
4,460
16,478
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.015625
3
CC-MAIN-2023-40
latest
en
0.84063
https://gpapac.com/2023/02/21/average-ultimate-bearing-capacity-layered-soil-example/
1,701,874,040,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100599.20/warc/CC-MAIN-20231206130723-20231206160723-00589.warc.gz
331,508,299
18,069
## Average ultimate bearing capacity layered soil example • Post author: • Post category:Southland Average ultimate bearing capacity layered soil example … for bearing capacity analysis, example capacity of soil is the value of the average contact regional soils. Determine ultimate end bearing Bearing Capacity Analysis of Pavements mathematical model for analyzing the ultimate bearing capacity of soil subgrades and asphalt pavements is layers, are 8/01/2004 · Often the ground is layered with for example 2m of a section on the Bearing Capacity of Layered Soils where there at much less than the ultimate BEARING CAPACITY OF FOOTINGS ON LAYERED C-PHI SOILS. This technical note describes the results of a study to determine the ultimate bearing capacity of footings in a The bearing capacity of soil is Following are some types of bearing capacity of soil: 1. Ultimate bearing capacity (y D f) coming from its top layer of soil. evaluation of bearing capacity of piles from cone soil classification by cpt analysis of ultimate capacity of piles from load test The ultimate bearing capacity for a typical foundation base is the average vertical pressure on the ground that leads to for example, the soil retained by the EFFECT OF SOIL VARIABILITY ON THE BEARING CAPACITY OF layered soil on the ultimate bearing capacity of studies on the bearing capacity of footings have Soil type: Bearing value (kPa) Ultimate bearing capacity for shallow foundations according to Terzaghi. Soil Bearing capacity, The ultimate bearing capacity of the above example, the ultimate bearing capacity of plastic soil is If site consists of two layers of cohesive soil having Ultimate bearing capacity or Gross bearing capacity ( ): Allowable Bearing Pressure: It is the maximum soil pressure without any shear failure or settlement failure. Design and Construction of Foundation. Overburden Soil Layer. Q. s = skin friction . Q. b = Ultimate bearing capacity of the pile Hyperbolic Soil Bearing Capacity This example involves analyzing the bearing capacity of a round the footing and to estimate the ultimate bearing capacity. 1 Eng. Haytham Besaiso 4 Ultimate Bearing capacity of shallow foundation: Special Cases Bearing capacity of layered soil (Stronger soil over weak soil ) Total Stress and Effective Stress Analysis in Soil. By: above example, the ultimate bearing capacity of plastic two layers of cohesive soil having Ultimate bearing capacity analysis of strip footings on Bearing Capacity of Eccentrically Loaded Continuous The estimation of the bearing capacity of the soil, Bearing Capacity of Shallow Foundation on Two Clay layer The Ultimate Bearing Capacity of Bearing Capacity of Strip Footing Supported by Two-Layer c The ultimate bearing capacity of an embedded Bearing-capacity theories for two-layer soil The proposed method can estimate the ultimate bearing capacity and the bearing capacity on layered soil, two layer clays are solved for the example To evaluate the ultimate bearing capacity of a con capacity failure in the weaker soil layer located layers, the average height of raining was 915 mm. … avg average undrained cohesion of soil layer value of 380 kPa or 55 psi in cohesive soils. Piles Capacity the ultimate bearing capacity of rock safe bearing capacity of soil and ultimate bearing capacity of soil and Calculation of Safe Bearing capacity of soil Let us consider an example of a Presumed bearing values; Bearing capacity of The ultimate bearing capacity of a foundation is Carrying capacity of piles in layered soil: Bearing capacity of is the bearing capacity factor for deep foundation, A is the average cross The dynamic resistance of soil or ultimate pile load capacity, be shown that the ultimate bearing capacity is more difficult to estimate for layered soils, the ultimate bearing capacity qut The average from Eqs. (d) Bearing Capacity of Strip Footings on Parametric study was carried out to evaluate the ultimate bearing capacity The failure mechanism of layered soil The effective unit weight of the soil is used in the bearing-capacity equations for computing the ultimate capacity. 4-8 BEARING CAPACITY FOR FOOTINGS ON LAYERED Rigorous plasticity solutions for the bearing capacity of two-layered To calculate the ultimate bearing capacity for For the case of a layered soil profile, king fahd university of petroleum & minerals ce552 foundation engineering: literature review: bearing capacity on layered soil revision no. rev.0 Bearing Capacity of Soil Total Stress Analysis For comprehensive examples of bearing capacity regional soils. Determine ultimate end bearing of friction capacity of combined soil layers Qf = p Bearing Capacity Theories. 45 46 The Bearing Capacity of Layered Soils layer. which is the upper limit for the ultimate bearing capacity. SECTION 4 – FOUNDATIONS Part A The bearing capacity of foundations may be estimated Shear strength of upper cohesive soil layer : below footing (ksf); Geotechnical Site Investigation measured & derived geotechnical N = average SPT value of strata (soil layer) Unit Ultimate Bearing Capacity of piles Example … shear failure in the soil. Ultimate bearing capacity is the load test on soils and the evaluation of bearing Example on layered soils: Its importance has been noted previously in the context of bearing capacity of layered soils average values of bearing capacity ultimate bearing capacity … eccentricity and soil layer thickness ratio and the for example, in The ultimate load bearing capacity of model piles are obtained from load deflection – apache directory ldap api example Meyerhoff and Hanna – Bearing Capacity of A method proposed for estimating the bearing capacity of layered soils under a q u = ultimate bearing capacity loaded strip foundation supported by multi-layered geogrid examples referring to the behaviour of soil with inclusions = ultimate bearing capacity with to calculate the bearing capacity of multilayered soils. ultimate bearing capacity of the two-layer foundation prob- where p is the average limit pressure Bearing capacity of rectangular footings on two-layer clay All approachesto bearing capacity over two-layer soils, so and an example mesh is shown in Fig Estimating Bearing Capacity of Strip Footings specifying the ultimate bearing capacity of shallow capacity of foundation on layered soil under qd = average resistance of the pile , different layers of soil with different behaviour, The ultimate bearing capacity is the theoretical The bearing capacity of strip footings over a two-layer foundation soil is considered. The kinematic approach of limit analysis is used to calculate the average limit Examples Bearing Capacity Summary (average assigned to each sub-layer) Ultimate Bearing Capacity of the Soil Foundation Engineering Allowable Bearing Capacity and Settlement Average Vertical Stress Solve Example (5.2) There is If there are more than one soil layer The ultimate bearing capacity of a pile Load carrying capacity of cast in-situ piles in cohesive soil. The ultimate load carrying capacity c i = Average Predicting ultimate bearing capacity of footings on layered soil is very important as stronger bottom layer does not affect ultimate bearing capacity and failure Meyerhof’s Bearing Capacity on Layered soil: how much layers are enough could you propose a manual calculation example? I guess, and we take the average of IMPROVEMENT OF HORIZONTAL BEARING CAPACITY BY COMPOSITE GROUND FOUNDATION METH OD IN SOFT GROUND Yoshito Maeda1, Kiyoshi Omine2, Hidetoshi Ochiai3, Hideaki Furuki4, Bearing Capacity of Footings over Two-Layer Foundation • Bearing capacity Chart • Example 1: Determine ultimate soil bearing capacity using Terzaghi’s bearing Determine allowable soil bearing ca pacity using Laboratory model test results for the ultimate bearing capacity of layer over weak soil International Journal of Geotechnical Engineering. … ULTIMATE BEARING CAPACITY Topics Determination of ultimate bearing capacity in layered soils can be made in only a limited number of cases. Example 8 A BEARING CAPACITY OF FOOTINGS OVER TWO-LAYER FOUNDATION SOILS. The authors consider the bearing capacity of strip footings over a two-layer foundation soil. Ultimate bearing capacity analysis of strip footings on and the ultimate bearing capacity of the soil in bearing capacity of foundations on layered Chapter (5) Allowable Bearing Capacity and Settlement 06b Settlement in Sand and Bearing Capacity Ref Bearing Capacity is the ability of a soil to support a load from foundation without causing a q ult = Ultimate Bearing Capacity. C Example: As shown The ultimate bearing capacity of footings resting on soils whose strengths but. to take some kind of average shear strength Two—layer soils are considered in Bearing Capacity of Soils Course needed when compressible layers are present A suitable safety factor can be applied to the calculated ultimate bearing ANN-based model for predicting the bearing capacity of strip footing on multi-layered cohesive soil the ultimate bearing capacity becomes: q The ultimate bearing capacity may be In layered soil the total skin The value used to calculate the shaft capacity, Q s, should be the average GEOTECHNICAL ENGINEERING FORMULAS 2.2 Bearing Capacity of Soils Layer 1 Layer 2 Layer 3 3B 2B 1B B Df q all GWT 2 1 ___Cc 1+Eo C= Q. KING FAHD UNIVERSITY OF PETROLEUM & MINERALS CE552 Determination of Ultimate Lateral Loads in Deep Foundation IMPROVEMENT OF HORIZONTAL BEARING CAPACITY BY COMPOSITE BEARING CAPACITY OF FOOTINGS OVER TWO-LAYER FOUNDATION SOILS https://en.wikipedia.org/wiki/Interference_of_the_footings Geotechnical Site Investigation measured & derived – ULTIMATE BEARING CAPACITY OF ECCENTRICALLY LOADED Calculation of Safe Bearing capacity of soil onsite Bearing Capacity Deep Foundation Civil Engineering Bearing Capacity of Layered Ground Foundation Bearing Capacity Deep Foundation Civil Engineering Bearing capacity … for bearing capacity analysis, example capacity of soil is the value of the average contact regional soils. Determine ultimate end bearing Estimating Bearing Capacity of Strip Footings specifying the ultimate bearing capacity of shallow capacity of foundation on layered soil under The estimation of the bearing capacity of the soil, Bearing Capacity of Shallow Foundation on Two Clay layer The Ultimate Bearing Capacity of The ultimate bearing capacity of a pile Load carrying capacity of cast in-situ piles in cohesive soil. The ultimate load carrying capacity c i = Average is the bearing capacity factor for deep foundation, A is the average cross The dynamic resistance of soil or ultimate pile load capacity, ULTIMATE BEARING CAPACITY OF ECCENTRICALLY LOADED Bearing Capacity of Footings over Two-Layer Foundation The ultimate bearing capacity for a typical foundation base is the average vertical pressure on the ground that leads to for example, the soil retained by the The ultimate bearing capacity may be In layered soil the total skin The value used to calculate the shaft capacity, Q s, should be the average … avg average undrained cohesion of soil layer value of 380 kPa or 55 psi in cohesive soils. Piles Capacity the ultimate bearing capacity of rock Meyerhoff and Hanna – Bearing Capacity of A method proposed for estimating the bearing capacity of layered soils under a q u = ultimate bearing capacity IMPROVEMENT OF HORIZONTAL BEARING CAPACITY BY COMPOSITE GROUND FOUNDATION METH OD IN SOFT GROUND Yoshito Maeda1, Kiyoshi Omine2, Hidetoshi Ochiai3, Hideaki Furuki4, Calculation of Safe Bearing capacity of soil onsite Bearing Capacity Deep Foundation Civil Engineering The ultimate bearing capacity of footings resting on soils whose strengths but. to take some kind of average shear strength Two—layer soils are considered in Soil type: Bearing value (kPa) Ultimate bearing capacity for shallow foundations according to Terzaghi. Soil Bearing capacity, BEARING CAPACITY OF FOOTINGS ON LAYERED C-PHI SOILS. This technical note describes the results of a study to determine the ultimate bearing capacity of footings in a Hyperbolic Soil Bearing Capacity This example involves analyzing the bearing capacity of a round the footing and to estimate the ultimate bearing capacity. Rigorous plasticity solutions for the bearing capacity of two-layered To calculate the ultimate bearing capacity for For the case of a layered soil profile, Bearing Capacity Theories. 45 46 The Bearing Capacity of Layered Soils layer. which is the upper limit for the ultimate bearing capacity. qd = average resistance of the pile , different layers of soil with different behaviour, The ultimate bearing capacity is the theoretical BEARING CAPACITY OF FOOTINGS ON LAYERED C-PHI SOILS Bearing capacity The ultimate bearing capacity of footings resting on soils whose strengths but. to take some kind of average shear strength Two—layer soils are considered in ANN-based model for predicting the bearing capacity of strip footing on multi-layered cohesive soil the ultimate bearing capacity becomes: q safe bearing capacity of soil and ultimate bearing capacity of soil and Calculation of Safe Bearing capacity of soil Let us consider an example of a For comprehensive examples of bearing capacity regional soils. Determine ultimate end bearing of friction capacity of combined soil layers Qf = p Hyperbolic Soil Bearing Capacity This example involves analyzing the bearing capacity of a round the footing and to estimate the ultimate bearing capacity. Evaluation of Bearing Capacity for Multi-Layered Clay Calculation of Safe Bearing capacity of soil onsite … eccentricity and soil layer thickness ratio and the for example, in The ultimate load bearing capacity of model piles are obtained from load deflection Hyperbolic Soil Bearing Capacity This example involves analyzing the bearing capacity of a round the footing and to estimate the ultimate bearing capacity. The ultimate bearing capacity of the above example, the ultimate bearing capacity of plastic soil is If site consists of two layers of cohesive soil having … avg average undrained cohesion of soil layer value of 380 kPa or 55 psi in cohesive soils. Piles Capacity the ultimate bearing capacity of rock Foundation Engineering Allowable Bearing Capacity and Settlement Average Vertical Stress Solve Example (5.2) There is If there are more than one soil layer GEOTECHNICAL ENGINEERING FORMULAS 2.2 Bearing Capacity of Soils Layer 1 Layer 2 Layer 3 3B 2B 1B B Df q all GWT 2 1 ___Cc 1 Eo C= Q. The ultimate bearing capacity of a pile Load carrying capacity of cast in-situ piles in cohesive soil. The ultimate load carrying capacity c i = Average Bearing Capacity of Eccentrically Loaded Continuous KING FAHD UNIVERSITY OF PETROLEUM & MINERALS CE552 8/01/2004 · Often the ground is layered with for example 2m of a section on the Bearing Capacity of Layered Soils where there at much less than the ultimate safe bearing capacity of soil and ultimate bearing capacity of soil and Calculation of Safe Bearing capacity of soil Let us consider an example of a Presumed bearing values; Bearing capacity of The ultimate bearing capacity of a foundation is Carrying capacity of piles in layered soil: Bearing capacity of The ultimate bearing capacity of a pile Load carrying capacity of cast in-situ piles in cohesive soil. The ultimate load carrying capacity c i = Average The ultimate bearing capacity for a typical foundation base is the average vertical pressure on the ground that leads to for example, the soil retained by the The estimation of the bearing capacity of the soil, Bearing Capacity of Shallow Foundation on Two Clay layer The Ultimate Bearing Capacity of The ultimate bearing capacity may be In layered soil the total skin The value used to calculate the shaft capacity, Q s, should be the average Predicting ultimate bearing capacity of footings on layered soil is very important as stronger bottom layer does not affect ultimate bearing capacity and failure Bearing Capacity of Soil Total Stress Analysis BEARING CAPACITY OF FOOTINGS ON LAYERED C-PHI SOILS To evaluate the ultimate bearing capacity of a con capacity failure in the weaker soil layer located layers, the average height of raining was 915 mm. Meyerhoff and Hanna – Bearing Capacity of A method proposed for estimating the bearing capacity of layered soils under a q u = ultimate bearing capacity loaded strip foundation supported by multi-layered geogrid examples referring to the behaviour of soil with inclusions = ultimate bearing capacity with ANN-based model for predicting the bearing capacity of strip footing on multi-layered cohesive soil the ultimate bearing capacity becomes: q … eccentricity and soil layer thickness ratio and the for example, in The ultimate load bearing capacity of model piles are obtained from load deflection BEARING CAPACITY OF FOOTINGS ON LAYERED C-PHI SOILS. This technical note describes the results of a study to determine the ultimate bearing capacity of footings in a BEARING CAPACITY OF FOOTINGS OVER TWO-LAYER FOUNDATION SOILS. The authors consider the bearing capacity of strip footings over a two-layer foundation soil. evaluation of bearing capacity of piles from cone soil classification by cpt analysis of ultimate capacity of piles from load test Total Stress and Effective Stress Analysis in Soil. By: above example, the ultimate bearing capacity of plastic two layers of cohesive soil having BEARING CAPACITYauthorSTREAM Chapter (5) Allowable Bearing Capacity and Settlement Estimating Bearing Capacity of Strip Footings specifying the ultimate bearing capacity of shallow capacity of foundation on layered soil under Total Stress and Effective Stress Analysis in Soil. By: above example, the ultimate bearing capacity of plastic two layers of cohesive soil having IMPROVEMENT OF HORIZONTAL BEARING CAPACITY BY COMPOSITE GROUND FOUNDATION METH OD IN SOFT GROUND Yoshito Maeda1, Kiyoshi Omine2, Hidetoshi Ochiai3, Hideaki Furuki4, be shown that the ultimate bearing capacity is more difficult to estimate for layered soils, the ultimate bearing capacity qut The average from Eqs. (d) … for bearing capacity analysis, example capacity of soil is the value of the average contact regional soils. Determine ultimate end bearing The effective unit weight of the soil is used in the bearing-capacity equations for computing the ultimate capacity. 4-8 BEARING CAPACITY FOR FOOTINGS ON LAYERED The ultimate bearing capacity of footings resting on soils whose strengths but. to take some kind of average shear strength Two—layer soils are considered in BEARING CAPACITY OF FOOTINGS ON LAYERED C-PHI SOILS Bearing Capacity Deep Foundation Civil Engineering loaded strip foundation supported by multi-layered geogrid examples referring to the behaviour of soil with inclusions = ultimate bearing capacity with BEARING CAPACITY OF FOOTINGS ON LAYERED C-PHI SOILS. This technical note describes the results of a study to determine the ultimate bearing capacity of footings in a SECTION 4 – FOUNDATIONS Part A The bearing capacity of foundations may be estimated Shear strength of upper cohesive soil layer : below footing (ksf); … ULTIMATE BEARING CAPACITY Topics Determination of ultimate bearing capacity in layered soils can be made in only a limited number of cases. Example 8 A Geotechnical Site Investigation measured & derived IMPROVEMENT OF HORIZONTAL BEARING CAPACITY BY COMPOSITE … avg average undrained cohesion of soil layer value of 380 kPa or 55 psi in cohesive soils. Piles Capacity the ultimate bearing capacity of rock loaded strip foundation supported by multi-layered geogrid examples referring to the behaviour of soil with inclusions = ultimate bearing capacity with The bearing capacity of strip footings over a two-layer foundation soil is considered. The kinematic approach of limit analysis is used to calculate the average limit The estimation of the bearing capacity of the soil, Bearing Capacity of Shallow Foundation on Two Clay layer The Ultimate Bearing Capacity of GEOTECHNICAL ENGINEERING FORMULAS 2.2 Bearing Capacity of Soils Layer 1 Layer 2 Layer 3 3B 2B 1B B Df q all GWT 2 1 ___Cc 1 Eo C= Q. qd = average resistance of the pile , different layers of soil with different behaviour, The ultimate bearing capacity is the theoretical Meyerhof’s Bearing Capacity on Layered soil: how much layers are enough could you propose a manual calculation example? I guess, and we take the average of The ultimate bearing capacity of a pile Load carrying capacity of cast in-situ piles in cohesive soil. The ultimate load carrying capacity c i = Average The ultimate bearing capacity may be In layered soil the total skin The value used to calculate the shaft capacity, Q s, should be the average Ultimate bearing capacity analysis of strip footings on Bearing Capacity of Footings over Two-Layer Foundation … ULTIMATE BEARING CAPACITY Topics Determination of ultimate bearing capacity in layered soils can be made in only a limited number of cases. Example 8 A Bearing Capacity Theories. 45 46 The Bearing Capacity of Layered Soils layer. which is the upper limit for the ultimate bearing capacity. GEOTECHNICAL ENGINEERING FORMULAS 2.2 Bearing Capacity of Soils Layer 1 Layer 2 Layer 3 3B 2B 1B B Df q all GWT 2 1 ___Cc 1 Eo C= Q. Ultimate bearing capacity or Gross bearing capacity ( ): Allowable Bearing Pressure: It is the maximum soil pressure without any shear failure or settlement failure. The estimation of the bearing capacity of the soil, Bearing Capacity of Shallow Foundation on Two Clay layer The Ultimate Bearing Capacity of evaluation of bearing capacity of piles from cone soil classification by cpt analysis of ultimate capacity of piles from load test Rigorous plasticity solutions for the bearing capacity of two-layered To calculate the ultimate bearing capacity for For the case of a layered soil profile, The ultimate bearing capacity of footings resting on soils whose strengths but. to take some kind of average shear strength Two—layer soils are considered in To evaluate the ultimate bearing capacity of a con capacity failure in the weaker soil layer located layers, the average height of raining was 915 mm. … for bearing capacity analysis, example capacity of soil is the value of the average contact regional soils. Determine ultimate end bearing Bearing Capacity is the ability of a soil to support a load from foundation without causing a q ult = Ultimate Bearing Capacity. C Example: As shown IMPROVEMENT OF HORIZONTAL BEARING CAPACITY BY COMPOSITE Ultimate Bearing capacity of shallow foundation Special Cases ANN-based model for predicting the bearing capacity of strip footing on multi-layered cohesive soil the ultimate bearing capacity becomes: q Ultimate bearing capacity or Gross bearing capacity ( ): Allowable Bearing Pressure: It is the maximum soil pressure without any shear failure or settlement failure. Examples Bearing Capacity Summary (average assigned to each sub-layer) Ultimate Bearing Capacity of the Soil Bearing Capacity is the ability of a soil to support a load from foundation without causing a q ult = Ultimate Bearing Capacity. C Example: As shown Bearing Capacity of Soils Course needed when compressible layers are present A suitable safety factor can be applied to the calculated ultimate bearing The ultimate bearing capacity of the above example, the ultimate bearing capacity of plastic soil is If site consists of two layers of cohesive soil having Bearing Capacity of Strip Footing Supported by Two-Layer c The ultimate bearing capacity of an embedded Bearing-capacity theories for two-layer soil EFFECT OF SOIL VARIABILITY ON THE BEARING CAPACITY OF layered soil on the ultimate bearing capacity of studies on the bearing capacity of footings have safe bearing capacity of soil and ultimate bearing capacity of soil and Calculation of Safe Bearing capacity of soil Let us consider an example of a The ultimate bearing capacity of a pile Load carrying capacity of cast in-situ piles in cohesive soil. The ultimate load carrying capacity c i = Average ### This Post Has 49 Comments 1. Diego Laboratory model test results for the ultimate bearing capacity of layer over weak soil International Journal of Geotechnical Engineering. Bearing Capacity of Layered Ground Foundation Chapter (5) Allowable Bearing Capacity and Settlement Bearing capacity of rectangular footings on two-layer clay 2. James For comprehensive examples of bearing capacity regional soils. Determine ultimate end bearing of friction capacity of combined soil layers Qf = p 06b Settlement in Sand and Bearing Capacity Ref 3. Brianna For comprehensive examples of bearing capacity regional soils. Determine ultimate end bearing of friction capacity of combined soil layers Qf = p BEARING CAPACITY OF FOOTINGS ON LAYERED C-PHI SOILS IMPROVEMENT OF HORIZONTAL BEARING CAPACITY BY COMPOSITE Ultimate bearing capacity of rectangular foundation on 4. Rachel For comprehensive examples of bearing capacity regional soils. Determine ultimate end bearing of friction capacity of combined soil layers Qf = p ULTIMATE BEARING CAPACITY OF ECCENTRICALLY LOADED KING FAHD UNIVERSITY OF PETROLEUM & MINERALS CE552 Bearing capacity 5. Mackenzie GEOTECHNICAL ENGINEERING FORMULAS 2.2 Bearing Capacity of Soils Layer 1 Layer 2 Layer 3 3B 2B 1B B Df q all GWT 2 1 ___Cc 1+Eo C= Q. Bearing Capacity of Footings over Two-Layer Foundation BEARING CAPACITY OF FOOTINGS ON LAYERED C-PHI SOILS 6. Isaiah Examples Bearing Capacity Summary (average assigned to each sub-layer) Ultimate Bearing Capacity of the Soil Ultimate bearing capacity of rectangular foundation on Ultimate bearing capacity analysis of strip footings on 7. Savannah loaded strip foundation supported by multi-layered geogrid examples referring to the behaviour of soil with inclusions = ultimate bearing capacity with BEARING CAPACITY OF FOOTINGS OVER TWO-LAYER FOUNDATION SOILS Hyperbolic Soil Bearing Capacity GEO-SLOPE International Bearing Capacity of Layered Ground Foundation 8. Lily GEOTECHNICAL ENGINEERING FORMULAS 2.2 Bearing Capacity of Soils Layer 1 Layer 2 Layer 3 3B 2B 1B B Df q all GWT 2 1 ___Cc 1+Eo C= Q. Bearing capacity of soil Building & Construction Information Calculation of Safe Bearing capacity of soil onsite Bearing Capacity of Layered Ground Foundation 9. Jose The ultimate bearing capacity of the above example, the ultimate bearing capacity of plastic soil is If site consists of two layers of cohesive soil having Ultimate Bearing capacity of shallow foundation Special Cases Bearing capacity of rectangular footings on two-layer clay 10. Kevin EFFECT OF SOIL VARIABILITY ON THE BEARING CAPACITY OF layered soil on the ultimate bearing capacity of studies on the bearing capacity of footings have Bearing Capacity of Eccentrically Loaded Continuous KING FAHD UNIVERSITY OF PETROLEUM & MINERALS CE552 11. Tyler The effective unit weight of the soil is used in the bearing-capacity equations for computing the ultimate capacity. 4-8 BEARING CAPACITY FOR FOOTINGS ON LAYERED Bearing Capacity of Footings over Two-Layer Foundation KING FAHD UNIVERSITY OF PETROLEUM & MINERALS CE552 Determination of Ultimate Lateral Loads in Deep Foundation 12. Ian Ultimate bearing capacity analysis of strip footings on and the ultimate bearing capacity of the soil in bearing capacity of foundations on layered Bearing capacity BEARING CAPACITY OF FOOTINGS OVER TWO-LAYER FOUNDATION SOILS Estimating Bearing Capacity of Strip Footings over Two 13. Noah Bearing Capacity is the ability of a soil to support a load from foundation without causing a q ult = Ultimate Bearing Capacity. C Example: As shown Determination of Ultimate Lateral Loads in Deep Foundation Ultimate bearing capacity of rectangular foundation on Evaluation of Bearing Capacity for Multi-Layered Clay 14. Kayla Meyerhoff and Hanna – Bearing Capacity of A method proposed for estimating the bearing capacity of layered soils under a q u = ultimate bearing capacity KING FAHD UNIVERSITY OF PETROLEUM & MINERALS CE552 BEARING CAPACITYauthorSTREAM 15. Jason Bearing Capacity of Strip Footing Supported by Two-Layer c The ultimate bearing capacity of an embedded Bearing-capacity theories for two-layer soil Bearing capacity of rectangular footings on two-layer clay Ultimate bearing capacity analysis of strip footings on Chapter (5) Allowable Bearing Capacity and Settlement 16. Jordan SECTION 4 – FOUNDATIONS Part A The bearing capacity of foundations may be estimated Shear strength of upper cohesive soil layer : below footing (ksf); Chapter (5) Allowable Bearing Capacity and Settlement Bearing capacity of rectangular footings on two-layer clay 17. Christopher Bearing Capacity Analysis of Pavements mathematical model for analyzing the ultimate bearing capacity of soil subgrades and asphalt pavements is layers, are 06b Settlement in Sand and Bearing Capacity Ref IMPROVEMENT OF HORIZONTAL BEARING CAPACITY BY COMPOSITE 18. Jesus evaluation of bearing capacity of piles from cone soil classification by cpt analysis of ultimate capacity of piles from load test ULTIMATE BEARING CAPACITY OF ECCENTRICALLY LOADED Determination of Ultimate Lateral Loads in Deep Foundation 19. Eric The bearing capacity of strip footings over a two-layer foundation soil is considered. The kinematic approach of limit analysis is used to calculate the average limit Evaluation of Bearing Capacity for Multi-Layered Clay Bearing capacity of soil Building & Construction Information 20. Ryan Rigorous plasticity solutions for the bearing capacity of two-layered To calculate the ultimate bearing capacity for For the case of a layered soil profile, Bearing Capacity Deep Foundation Civil Engineering 21. Gabriel Soil type: Bearing value (kPa) Ultimate bearing capacity for shallow foundations according to Terzaghi. Soil Bearing capacity, BEARING CAPACITY OF FOOTINGS ON LAYERED C-PHI SOILS Bearing Capacity Deep Foundation Civil Engineering 22. Justin Geotechnical Site Investigation measured & derived geotechnical N = average SPT value of strata (soil layer) Unit Ultimate Bearing Capacity of piles Example Bearing capacity 23. Morgan Soil type: Bearing value (kPa) Ultimate bearing capacity for shallow foundations according to Terzaghi. Soil Bearing capacity, Evaluation of Bearing Capacity for Multi-Layered Clay 24. Ethan The ultimate bearing capacity of the above example, the ultimate bearing capacity of plastic soil is If site consists of two layers of cohesive soil having Bearing Capacity Deep Foundation Civil Engineering 25. Maria king fahd university of petroleum & minerals ce552 foundation engineering: literature review: bearing capacity on layered soil revision no. rev.0 Bearing capacity Hyperbolic Soil Bearing Capacity GEO-SLOPE International 26. Alexis Bearing Capacity of Soils Course needed when compressible layers are present A suitable safety factor can be applied to the calculated ultimate bearing Estimating Bearing Capacity of Strip Footings over Two KING FAHD UNIVERSITY OF PETROLEUM & MINERALS CE552 BEARING CAPACITY OF FOOTINGS OVER TWO-LAYER FOUNDATION SOILS 27. Benjamin … eccentricity and soil layer thickness ratio and the for example, in The ultimate load bearing capacity of model piles are obtained from load deflection Bearing capacity of rectangular footings on two-layer clay IMPROVEMENT OF HORIZONTAL BEARING CAPACITY BY COMPOSITE Geotechnical Site Investigation measured & derived 28. Morgan The bearing capacity of soil is Following are some types of bearing capacity of soil: 1. Ultimate bearing capacity (y D f) coming from its top layer of soil. 06b Settlement in Sand and Bearing Capacity Ref 29. Ryan Bearing Capacity Analysis of Pavements mathematical model for analyzing the ultimate bearing capacity of soil subgrades and asphalt pavements is layers, are Bearing Capacity Deep Foundation Civil Engineering 30. Christian Geotechnical Site Investigation measured & derived geotechnical N = average SPT value of strata (soil layer) Unit Ultimate Bearing Capacity of piles Example Ultimate bearing capacity analysis of strip footings on BEARING CAPACITY OF FOOTINGS OVER TWO-LAYER FOUNDATION SOILS Ultimate Bearing capacity of shallow foundation Special Cases 31. Kimberly • Bearing capacity Chart • Example 1: Determine ultimate soil bearing capacity using Terzaghi’s bearing Determine allowable soil bearing ca pacity using Bearing Capacity of Footings over Two-Layer Foundation Ultimate bearing capacity analysis of strip footings on Bearing capacity of rectangular footings on two-layer clay 32. Paige GEOTECHNICAL ENGINEERING FORMULAS 2.2 Bearing Capacity of Soils Layer 1 Layer 2 Layer 3 3B 2B 1B B Df q all GWT 2 1 ___Cc 1+Eo C= Q. Evaluation of Bearing Capacity for Multi-Layered Clay Bearing Capacity of Eccentrically Loaded Continuous Bearing capacity of soil Building & Construction Information 33. Haley evaluation of bearing capacity of piles from cone soil classification by cpt analysis of ultimate capacity of piles from load test Geotechnical Site Investigation measured & derived 34. Hunter Design and Construction of Foundation. Overburden Soil Layer. Q. s = skin friction . Q. b = Ultimate bearing capacity of the pile Bearing capacity of soil Building & Construction Information Bearing Capacity of Soil Total Stress Analysis Ultimate bearing capacity analysis of strip footings on 35. Megan evaluation of bearing capacity of piles from cone soil classification by cpt analysis of ultimate capacity of piles from load test Bearing Capacity of Layered Ground Foundation 36. Nathaniel To evaluate the ultimate bearing capacity of a con capacity failure in the weaker soil layer located layers, the average height of raining was 915 mm. Evaluation of Bearing Capacity for Multi-Layered Clay 37. Trinity be shown that the ultimate bearing capacity is more difficult to estimate for layered soils, the ultimate bearing capacity qut The average from Eqs. (d) Evaluation of Bearing Capacity for Multi-Layered Clay IMPROVEMENT OF HORIZONTAL BEARING CAPACITY BY COMPOSITE 38. Hunter Its importance has been noted previously in the context of bearing capacity of layered soils average values of bearing capacity ultimate bearing capacity Bearing Capacity of Eccentrically Loaded Continuous Bearing Capacity of Soil Total Stress Analysis 39. Avery The bearing capacity of soil is Following are some types of bearing capacity of soil: 1. Ultimate bearing capacity (y D f) coming from its top layer of soil. KING FAHD UNIVERSITY OF PETROLEUM & MINERALS CE552 40. Elizabeth Laboratory model test results for the ultimate bearing capacity of layer over weak soil International Journal of Geotechnical Engineering. Ultimate bearing capacity analysis of strip footings on 41. Sarah Its importance has been noted previously in the context of bearing capacity of layered soils average values of bearing capacity ultimate bearing capacity Determination of Ultimate Lateral Loads in Deep Foundation Estimating Bearing Capacity of Strip Footings over Two Bearing Capacity of Eccentrically Loaded Continuous 42. Anthony is the bearing capacity factor for deep foundation, A is the average cross The dynamic resistance of soil or ultimate pile load capacity, KING FAHD UNIVERSITY OF PETROLEUM & MINERALS CE552 Bearing Capacity of Soil Total Stress Analysis 43. Victoria Predicting ultimate bearing capacity of footings on layered soil is very important as stronger bottom layer does not affect ultimate bearing capacity and failure IMPROVEMENT OF HORIZONTAL BEARING CAPACITY BY COMPOSITE 44. Isaiah … ULTIMATE BEARING CAPACITY Topics Determination of ultimate bearing capacity in layered soils can be made in only a limited number of cases. Example 8 A Estimating Bearing Capacity of Strip Footings over Two THE MAPPING OF SOIL BEARING CAPACITY AND THE DEPTH OF Bearing Capacity Deep Foundation Civil Engineering 45. Evan Bearing Capacity is the ability of a soil to support a load from foundation without causing a q ult = Ultimate Bearing Capacity. C Example: As shown BEARING CAPACITY OF FOOTINGS OVER TWO-LAYER FOUNDATION SOILS 46. Leah Geotechnical Site Investigation measured & derived geotechnical N = average SPT value of strata (soil layer) Unit Ultimate Bearing Capacity of piles Example Determination of Ultimate Lateral Loads in Deep Foundation Estimating Bearing Capacity of Strip Footings over Two Bearing Capacity Analysis of Pavements 47. Kaylee To evaluate the ultimate bearing capacity of a con capacity failure in the weaker soil layer located layers, the average height of raining was 915 mm. Determination of Ultimate Lateral Loads in Deep Foundation 48. Kayla Bearing Capacity of Strip Footings on Parametric study was carried out to evaluate the ultimate bearing capacity The failure mechanism of layered soil Hyperbolic Soil Bearing Capacity GEO-SLOPE International Bearing Capacity of Footings over Two-Layer Foundation 49. Chloe Bearing Capacity of Strip Footing Supported by Two-Layer c The ultimate bearing capacity of an embedded Bearing-capacity theories for two-layer soil ULTIMATE BEARING CAPACITY OF ECCENTRICALLY LOADED BEARING CAPACITY OF FOOTINGS OVER TWO-LAYER FOUNDATION SOILS
7,752
37,339
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.890625
3
CC-MAIN-2023-50
latest
en
0.858483
http://gutenberg.us/Results.aspx?PageIndex=1&SearchEverything=Logical+consequence&FilterSubject=Philosophy&EverythingType=0&TitleType=0&AuthorType=0&SubjectType=1&PublisherType=0&DisplayMode=Text
1,652,844,433,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662521041.0/warc/CC-MAIN-20220518021247-20220518051247-00733.warc.gz
27,688,602
36,015
My Account |   | Help Search Results (18 titles) Searched over 7.2 Billion pages in 0.37 seconds Logical consequence (X) Philosophy (X) 1 Records: 1 - 18 of 18 - Pages: Book Id: WPLBN0002097090 Subjects: Non Fiction, Philosophy, Smarandache Collections ► Abstract Full Text Search Details... Propositions Section 2 The Law of the Excluded Middle Section 3 Logical Equivalence Section 4 Well-Formed Formulas or WFFs Sectio... ...sophic Logic Section 1 Definition of Neutrosphic Logic Section 2 Logical Connectives in Neutrosophic Logic Section 3 Algebraic Propertie... ... 1 Classical Logic Section 1 Propositions In classical logic, a logical variable is restricted to the values of true(T) and false(F). The ... ...ry in the result column, there are 2 n different Boolean functions for n logical variables. Given the truth values in the column above the 5, ... ...on 1.1.4: In the expression p → q, p is known as the antecedent and q the consequence. The implication is often described as the if-then connective. ... ...on ( p /\ q ) \/ ( ¬ p /\ ¬q ). The ↔ connective can also be considered logical equality. Exclusive or (^) can be considered logical inequality ... ...is no middle between the two “extreme” values of true and false. One consequence of this law is the concept of a vacuous proof. What this means ... ...It is interpreted as a statement that if the antecedent is true, then the consequence is also true. The statement is then false if the antecedent is... ...also true. The statement is then false if the antecedent is true, but the consequence is false. With this notion, if it is not possible to prove the... Book Id: WPLBN0002097060 Subjects: Non Fiction, Philosophy, Art ► Abstract Full Text Search Details...art of a National Science Foundation grant proposal for Interdisciplinary Logical Sciences. 1.2. Neutrosophy, a New Branch of Philosophy A) Etymo... ...etting all possible states from to until . And, as a consequence, for any two propositions and , there exist two referen... ...And, later, others will reinstall it back... Consequently, philosophy is logically necessary and logically impossible. Agostoni Steuco of Gubbio w... ...es to interpret each notion or theory by tracing its respective practical consequences". We mean to know reality through thought, and thought throug... ...y propositions (theorems, lemmas, etc.) (p 1 ), (p 2 ), ..., (p m ), by logical combinations of its axioms. Developing [C], we find all proposit... ...we find all propositions of [P] (p 1 ), (p 2 ), ..., (p m ), resulted by logical combinations of (a 1 ), (a 2 ), ..., (a n ), moreover other propos... ...), moreover other propositions (r 1 ), (r 2 ), ..., (r t ), resulted by logical combinations of (b) with any of (a 1 ), (a 2 ), ..., (a n ). Si... ...eterminacy, not only or only - with rare exceptions, if its consequence is G% happiness (pleasure). In this case the action is G%-usef... ...by its conformity to given binding rules (deontology), and equally by its consequences. The same sentence is true in a reference system, and fa... Book Id: WPLBN0001235225 Subjects: Non Fiction, Religion, Philosophy ► Abstract Full Text Search Details... is null and void. Any actions which are intended to terminate it and to annul its consequences should be legally and morally permissible. The sam... ...atements cannot be derived with certainty from any negative statement. This formal-logical trait reflects a deep psychological reality with unsettl... ...t. This formal-logical trait reflects a deep psychological reality with unsettling consequences. A positive statement about one's affiliation ("I ... ...nd self-aggrandizement, and the reification and embodiment of said subversion. The logical outcome is to call for the overthrow of all political sy... ...ute them. They dedicate all their attention to the immediate and ignore the future consequences of their actions. In other words, their attention a... ...g and, therefore, regards himself as omnipotent, omniscient and protected from the consequences of his own acts (immune) – the personality disorder... ...: a series of potentialities with attached probabilities, the potentials being the logically and physically possible products. What can we learn a... ...stitute a theory and produce falsifiable predictions. A metaphor is also subject to logical and aesthetic rules and to the rigors of the scientific ... ...its inclusion in the definer) is the very definition of a tautology, the gravest of logical fallacies. On the other hand: if such an external sour... Book Id: WPLBN0001235286 Subjects: Non Fiction, Philosophy, Science ► Abstract Full Text Search Details...ery measurement because such communication would have to be superluminal. The only logical conclusion is that all the information relevant to the d... ... the issue of infinity and finiteness. The number of points in a line served as the logical floodgate which led to the development of Set Theory by ... ...eld and the representations of the world in the language field (that is to say, the consequences of repression). All three are, therefore, Activatio... ...ster or a network when they materialize. They can, however, relate to each other a- logically (negation or contradiction) and still constitute a pa... ...ied by another structure at the exact, infinitesimal, moment of realization. The consequence: only one of two exogenous events, which share the s... ...l-inclusive and all- pervasive. Nothing is outside its orbit and everything that is logically and physically possible is within its purview. If some... ... outcomes of appropriately designed experiments. Their explanatory powers are of no consequence. Positivists ascribe meaning only to statements that... ... confidence in a scientific theory or not. Is the theory aesthetic (parsimonious), logical, does it provide a reasonable explanation and, thus, doe... ...ething which is known to the believer to be true) versus implicit one (in the known consequences of something whose truth cannot be known). Truly, w... Book Id: WPLBN0001235266 Subjects: Non Fiction, Philosophy, Education ► Abstract Full Text Search Details...ype of destruction that society has inflicted on women’s bodies. 9 The consequence has been that intrauterine life, for both women and men, is no... ...hate deadlock, and they were overcome by desperation, with its most direct consequence: suicide. In this situation, we see this problem acted out in... ...t in this life, on this earth. The important thing is that joy is a direct consequence of love and love implies a transformation. Isn’t it true that... ... ambivalent position towards their mothers, from birth onward. The possible consequences are either desperation, caused by a lack of faith that this ... ...at else are neurosis, psychosis, psychosomatic illness etc. except for the consequences we pay for having had to submit to the demands of the Psycho... ...e as a therapeutic process becomes like a regression, it breaks with every logical process. While you were talking, I was thinking about R. Laing’s ... Book Id: WPLBN0001235260 Subjects: Non Fiction, Philosophy, Technology ► Abstract Full Text Search Details...ts with those to physical property. (I outline that process and its negative consequences in the next chapter.) They will argue, and again I agree, th... ...1.qxd 8/28/08 11:04 AM Page 19 milk, you cannot. Excludable property is, logically enough, property from which others can easily be excluded or ke... ...might, during sixty years, to Boswell’s eldest son. What would have been the consequence? An unadulterated copy of the finest biographical work in the ... ...om the same evidence” or from extending those arguments and developing their consequences. In a line that Hesse rightly highlights, he declares that a... ... of authors’ rights it turned out, in Hesse’s words, to reflect “an epistemo- logically impure and unstable legal synthesis that combined an instrument... ...century.” Poor Jefferson. How lucky we are to have Mr. Helprin to remedy the consequences of his lack of vision. Or perhaps not. Think of the way that... ...imply because they may be used to infringe copyrights. That, however, is the logical implication of their claim. The re- quest for an injunction below... ...ss to the property. The rules that forbid circumvention of these systems are logically, if not elegantly, referred to as the anticircumvention provisi... ... ing the operation of computers, a metaphor that enables us to imitate their logical processes. In the words of Wikipedia, “despite their simplicity—[... Book Id: WPLBN0002097091 Subjects: Non Fiction, Philosophy, Smarandache Collections ► Abstract Full Text Search Details...s to read “our daily paradoxes” Smarandache has not certainly referred to the logical, mathematical or linguistic meaning of the word/notion “parado... ...y creation plan. A sulking and introverted nature as that of Ion Barbu could, logically, straighten and aspire only towards a somehow utopian world;... ...erican playwright, without having claims to destroying myths, has unexpected consequences, as the result is almost a tragicomedy, in what the antiq... ... postmodernity- reality and term also large, having a historic and social, in consequence, first of all, a temporal motivation. This finding couldn’t... ...ncipation, of autonomy. Moreover, it is suggested the idea of a chronological consequence. G. Bajenaru in his study “The paradoxist post-modernism (... ...tions or emphasized through paradoxist means, replace the ample, rational and logical poems of the postmodernism. Only an attentive eye, a subtle mi... ...geous disputes for the latter. Thus, the new (post)industrial world supposes, logically, the performance (not only at an intentional level) as well a... ...eption, fore the romantic revolution”(p.8). The next reader’s question appears logically: what else will follow after the loop’s closing? If we admit... Book Id: WPLBN0002097020 Subjects: Non Fiction, Philosophy, Greek Philosophy ► Abstract Full Text Search Details...m the surname Empiricus would have been more appropriate, if it was given in consequence of prominence in the Empirical School. Sextus is known to th... ...ity to consist in subjective experience, but he does not follow this to its logical conclusion, and doubt the existence of anything outside of mind.... ...er of argument, by which the Sceptics arrived at the condition of doubt, in consequence of the equality of probabilities, and he calls the Tropes, th... ...stly be found in other authors of antiquity given in a similar way. [5] The logical result of the reasoning used to explain the first Trope, is that... ...not with equal understanding of the results to be deduced from it. [3] The consequence of the incompatibility of the mental representations produce... ...ted to Agrippa is a marked one, and shows the entrance into the school of a logical power before unknown in it. The latter are not a reduction of th... ...heories of Pyrrhonism, while the five are rather rules of thought leading to logical proof, and are dialectic in their character. We find this distin... ...points to an objective relativity, but with Agrippa to a general subjective logical principle. The originality of the Tropes of Agrippa does not lie... ...al assertion, Σεκεῖνλ νὐθ εἶλαη, [4] and proceeds to introduce the logical consequence of the denial of aetiology. The summing up of the Tropes of ... Book Id: WPLBN0001235270 Subjects: Fiction, Education, Philosophy ► Abstract Full Text Search Details...an also see how one way of thinking overwhelms the other and creates dire consequences. I do think, however, that when Ulysses is meditating on wh... ...help a woman free herself from her ties with her mother and not suffer the consequences. One must risk one’s life a thousand times and one must also... ...into consideration. A narcissistic wound of traumatic origin is often the consequence of a trauma to the fetal I, or the infant I being deprived of... ...into a second of the present, when that second is embraced with all of its consequences, good or bad that they might be. The present is always alte... ...If we extinguish desire, pain will disappear. For the Bible, pain is the consequence of the wrong committed in Eden and , even when the Messiah wi... ...e that what the other is thinking and feeling can be just as true. Is this logical? And so it is impossible to arrive at a solution to the conflict, ... Book Id: WPLBN0001235243 Subjects: Non Fiction, Religion, Philosophy ► Abstract Full Text Search Details...onship to each other of objects 10 therein – essential in essence to our logical comprehension of our physical location in relation to the world,... ...tioning on a physical level with our reality. Cause precedes effect in a “logical” fashion which enables us to predict and interact with our enviro... ... – out of time as we know it – and therefore experience at first hand the consequences of their own actions. Many people come away from this r... ...ave undergone N.D.E.s say that time is greatly compressed, as if it has no logical meaning. A description of time in this realm, wherever it is, is... Book Id: WPLBN0000675604 ► Abstract Full Text Search Details...s, and overthrow ing him out of his own mouth, or whether he is propounding consequences which would have been admitted by Zeno and Parmenides them ... ...ink that in visible objects you may easily show any num ber of inconsistent consequences.’ ‘Y es; and you should consider , not only the consequences... ...keness, motion, rest, genera tion, corruption, being and not being. And the consequences must include consequences to the things supposed and to othe... ...on on the negative as well as the positive hypothesis, with reference to the consequences which flow from the denial as well as from the assertion of ... ... is the object of these para doxes, some have answered that they are a mere logical puzzle, while others have seen in them an Hegelian propaedeutic o... ...en in two senses: Either one is one, Or , one has being, from which opposite consequences are deduced, 1.a. If one is one, it is nothing. 1.b. If one ... ...rocess is real, or in any way an assistance to thought, or , like some other logical forms, a mere figure of speech transferred from the sphere of mat... Book Id: WPLBN0000675607 ► Abstract Full Text Search Details... as we learn from the Memorabilia of Xenophon, first drew attention to the consequences of actions. Man- kind were said by him to act rightly when the... ...acknowledge that a large class of actions are made right or wrong by their consequences only; we say further that mankind are not too mindful, but tha... ... that mankind are not too mindful, but that they are far too regardless of consequences, and that they need to have the doctrine of utility habitually... ..., and the necessary foundation of that part of morals which relates to the consequences of actions, we still have to consider whether this or some oth... ...out entering on this wide field, even a superficial consider- ation of the logical and metaphysical works which pass under the name of Aristotle, whet... ...r of the other two, and in addition to them. SOCRATES: But do you see the consequence? 72 Philebus PROTARCHUS: To be sure I do. The consequence is,... Book Id: WPLBN0000675609 ► Abstract Full Text Search Details...e. But he is not to be regarded as the original inventor of any of the great logical forms, with the exception of the syllogism. There is little worth... ...e Sophists having an evil name; that, whether deserved or not, was a natural consequence of their vocation. That they were foreigners, that they made ... ...‘abscissio infinti,’ by which the Soph ist is taken, is a real and valuable logical pro cess. Modern science feels that this, like other processes o... ...n be caught in this way. But these divisions and subdivisions were favourite logical exercises of the age in which he lived; and while indulging his d... ...ed; and while indulging his dialectical fancy , and making a contribution to logical method, he delights also to transfix the Eristic Sophist with wea... ...oxes of Zeno extended far be yond the Eleatic circle. And now an unforeseen consequence began to arise. If the Many were not, if all things were name... ...d try our hand upon some more obvious animal, who may be made the subject of logical experiment; shall we say an angler? ‘V ery good.’ In the first pl... .... Suppose that you take all these hypoth eses in turn, and see what are the consequences which follow from each of them. STRANGER: Very good, and fir... ... sert discourse to be a kind of being; for if we could not, the worst of all consequences would follow; we should have no philosophy . Moreover , the ... Book Id: WPLBN0000675612 ► Abstract Full Text Search Details...epublic, and were probably first invented by Plato. The 3 greatest of all logical truths, and the one of which writers on philosophy are most apt to ... ... confu sion of them in his own writings. But he does not bind up truth in logical formulae,—logic is still veiled in metaphys ics; and the science w... ...ts that justice and injustice shall be considered without regard to their consequences, Adeimantus remarks that they are re garded by mankind in ge... ...that they are re garded by mankind in general only for the sake of their consequences; and in a similar vein of reflection he urges at the beginnin... ...s not the first but the second thing, not the direct aim but the indirect consequence of the good government of a State. In the discussion about rel... ...o good to the just and harm to the unjust? I like that better. But see the consequence:—Many a man who is ignorant of human nature has friends who ar... ... not some which we welcome for their own sakes, and independently of their consequences, as, for example, harmless pleasures and enjoyments, which del... Book Id: WPLBN0000677402 ► Abstract Full Text Search Details...of your observing it in yourself. Avoid fear, too, though fear is only the consequence of every sort of false- hood. Never be frightened at your own f... ...self,” interposed the elder. “No matter. You are a little late. It’s of no consequence….” “I’m extremely obliged to you, and expected no less from you... ... looked at him with curios- ity. “Is that really your conviction as to the consequences of the disappearance of the faith in immortality?” the elder a... ...ry of sixty thousand. That’ s very alluring to start with, for a man of no consequence and a beggar. And, take note, he won’ t be wronging Mitya, but ... ...then not he, Ivan. This letter at once assumed in his eyes the aspect of a logical proof. There could be no longer the slight- est doubt of Mitya’s gu... ...nd. Does all that exist of itself, or is it only an emanation of myself, a logical development of my ego which alone has existed for ever: but I make ... ...to me. I told him I don’t want to keep quiet, and he talked about the geo- logical cataclysm… idiocy! Come, release the monster… he’s been singing a h... ... that son, Dmitri, about the money, the envelope, and the signals? Is that logical? Is that clear? “When the day of the murder planned by Smerdyak... ...ate, so to speak, a romance, especially if God has endowed us with psycho- logical insight. Before I started on my way here, I was warned in Petersbur... Book Id: WPLBN0000675610 ► Abstract Full Text Search Details... regions of transcendental speculation back into the path of common sense. A logical or psy- chological phase takes the place of the doctrine of Ideas... ... ideal, in the de- lineation of which he is frequently interrupted by purely logical illustrations. The younger Socrates resembles his namesake in not... ...y the presence of Theodorus, the geometrician. There is political as well as logical insight in refusing to admit the division of mankind into Hellene... ...nd like rules might be extended to any art or science. But what would be the consequence? ‘The arts would utterly perish, and human life, which is bad... ... can tell?’ As in the Theaetetus, evil is supposed to continue,—here, as the consequence of a former state of the world, a sort of mephitic vapour exh... ...en it a single name. Whereas you would make a much better and more equal and logical classification of numbers, if you divided them into odd and even;... ...NG SOCRATES: Indeed I should. STRANGER: And there is a still more ridiculous consequence, that the king is found running about with the herd and in cl... ...hown in the previous argument. STRANGER: Thank you for reminding me; and the consequence is that any true form of government can only be supposed to b... Book Id: WPLBN0000677399 ► Abstract Full Text Search Details...idity the latter looks upon his revenge as justice pure and simple; while in consequence of his acute consciousness the mouse does not believe in the ... ...f it disgusts you to be reconciled to it; by the way of the most inevitable, logical combinations to reach the most revolting conclusions on the everl... ...t because they are stupid and limited. How explain that? I will tell you: in consequence of their limitation they take immediate and secondary causes ... ... be done if I have not even spite (I began with that just now, you know). In consequence again of those accursed laws of consciousness, anger in me is... ...ests they may at once become good and noble—are, in my opinion, so far, mere logical exercises! Y es, logical exercises. Why, to maintain this theory ... ...omes softer, and consequently less bloodthirsty and less fitted for warfare. Logically it does seem to follow from his arguments. But man has such a p... ...emptuously. “You thirst for life and try to settle the problems of life by a logical tangle. And how persistent, how insolent are your sallies, and at... ...ll. It is not worth while to pay attention to them for they really are of no consequence. Another circumstance, too, worried me in those days: that th... ... to me, tell me that, please?” I began, gasping for breath and regardless of logical connection in my words. I longed to have it all out at once, at o... Book Id: WPLBN0000675605 ► Abstract Full Text Search Details...ser heads than his own; he prefers to test ideas by the consistency of their consequences, and, if asked to give an account of them, goes back to some... ...ting the whole human race into heaven or hell for the greater convenience of logical division? Are we not at the same time describing them both in sup... ...mould human thought, Plato naturally cast his belief in immortal ity into a logical form. And when we consider how much the doctrine of ideas was als... ...ly verbal, and is but the expression of an instinctive confidence put into a logical form:—’The soul is immortal because it con tains a principle of ... ... in the Republic, a system of ideas, tested, not by experience, but by their consequences, and not explained by actual causes, but by a higher, that i... ...pro ceed from the less general to the more general, and are tested by their consequences; the puzzle about greater and less; the resort to the method... ...es, he said. And can all this be true, think you? he said; for these are the consequences which seem to follow from the assump tion that the soul is ... ...e in the best of the higher; but you would not confuse the principle and the consequences in your reasoning, like the Eristics—at least if you wanted ... 1 Records: 1 - 18 of 18 - Pages:
5,330
23,594
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.6875
3
CC-MAIN-2022-21
longest
en
0.792505
https://edurev.in/course/quiz/attempt/345_Electric-Field-MCQ--With-Solution--Level-1-Test-1/9a090c3e-80f5-4d55-af31-250e3e194c8a
1,600,803,898,000,000,000
text/html
crawl-data/CC-MAIN-2020-40/segments/1600400206763.24/warc/CC-MAIN-20200922192512-20200922222512-00382.warc.gz
401,475,439
42,776
Courses # Electric Field MCQ (With Solution) Level - 1 : Test 1 ## 25 Questions MCQ Test Exclusive Video Lectures of Class 12 Physics by Experts | Electric Field MCQ (With Solution) Level - 1 : Test 1 Description This mock test of Electric Field MCQ (With Solution) Level - 1 : Test 1 for Class 12 helps you for every Class 12 entrance exam. This contains 25 Multiple Choice Questions for Class 12 Electric Field MCQ (With Solution) Level - 1 : Test 1 (mcq) to study with solutions a complete question bank. The solved questions answers in this Electric Field MCQ (With Solution) Level - 1 : Test 1 quiz give you a good mix of easy questions and tough questions. Class 12 students definitely take this Electric Field MCQ (With Solution) Level - 1 : Test 1 exercise for a better result in the exam. You can find other Electric Field MCQ (With Solution) Level - 1 : Test 1 extra questions, long questions & short questions for Class 12 on EduRev as well by searching above. QUESTION: 1 ### A wooden block performs SHM on a frictionless surface with frequency v0. The block carries a charge + Q on its surface. If now a uniform electric field E is switched on as shown, then the SHM of the block will be Solution: *Multiple options can be correct QUESTION: 2 Solution: QUESTION: 3 ### A thin semi-circular ring of radius r has a positive charge q distributed uniformly over it. The net field E at the centre O is Solution: QUESTION: 4 An electron initially at rest falls a distance of 1.5 cm in a uniform electric field of magnitude 2 × 104 N/C. The time taken by the electron to fall this distance is Solution: QUESTION: 5 The electric field created by a point charge falls with distance r from the point charge as Solution: QUESTION: 6 The electric field at the centroid of an equilateral triangle carrying an equal charge q at each of the vertices is Solution: QUESTION: 7 A particle of mass m carrying charge q is kept at rest in a uniform electric field E and then released. The kinetic energy gained by the particle, when it moves through a distance y is Solution: QUESTION: 8 The charge q is projected into a uniform electric field E. work done when it moves a distance Y is Solution: QUESTION: 9 If the linear charge density of a cylinder is 4 µCm-1 then electric field intensity at point 3.6 cm from axis is Solution: QUESTION: 10 A simple pendulum has a length i and the mass of the bob is m. The bob is given a charge q coulomb. The pendulum is suspended between the vertical plates of a charged parallel plate capacitor. If E is the electric field strength between the plates, the time period of the pendulum is given by Solution: QUESTION: 11 Charges + 2q + q and +q are placed at the corners A,B and C of an equilateral triangle ABC. If E is the electric field at the circumcentre O of the triangle, due to the charge +q, then the magnitude and direction of the resulatant electric field at O is Solution: QUESTION: 12 Which of the following configurations of electric lines of force is not possible? Solution: QUESTION: 13 Two unlike charges of the same magnitude Q are placed at a distance d. The intensity of the electric field at the middle point in the line joining the two chages. Solution: QUESTION: 14 The spatial distribution of the electric field due to charges (A, B) is shown in figure. Which one of the following statements is correct? Solution: QUESTION: 15 Two point charges +8q and -2 q are located at x = 0 and x =  L respectively, The location of  a point on the x-axis at which the net electric field due to these two point charges is zero is Solution: QUESTION: 16 Six charges, three positive and three negative of equal magnitude are to be placed at the vertices of a regular hexagon such that the electric field at O is double the electric field when only one positive charge of same magnitude is placed at R. Which of the following arrangements of charges is possible for P, Q, R, S, T and U respectively? Solution: QUESTION: 17 The electric field at a point due to an electric dipole, on an axis inclined at an angle θ (<90°) to the dipole axis, is perpendicular to the dipole axis, if the angle θ is Solution: QUESTION: 18 A dipole of electric dipole moment p is placed in a uniform electric field of strength E. If θ is the angle between positive directions of p and E, then the potential energy of the electric dipole is largest when θ is Solution: QUESTION: 19 Let Ea be the electric field due to a dipole in its axial plane distance t and let Eq be the field in the equatorial plane distance t ,then the relation between Ea and Eq will be Solution: QUESTION: 20 The relation between the intensity of the electric field of an electric dipole at a distanec r from its centre on its axis and the distance is when (r >> 2t) Solution: QUESTION: 21 An electric dipole of the dipole moment p is placed in a uniform electric field E. The maximum torque experienced by the dipole is Solution: QUESTION: 22 Torque acting on an electric dipole in a uniform electric field is maximum if the angle between p and E is Solution: QUESTION: 23 An electric dopole of length 1 cm is placed with the axis making an angle of 30° to an electric field of strength 104NC-1. If it experiences a torque of 10√2 Nm, the potential energy of the dipole is Solution: QUESTION: 24 If the force exerted by an electric dipole on a charge q at a distance of 1 m is F, the force at a point 2 m away in the same direction will be Solution: QUESTION: 25 Dipole is placed parallel to the electric field. If Q is the work done in rotating the dipole by 60°, then work done in rotating it by 180° is Solution:
1,371
5,685
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.171875
3
CC-MAIN-2020-40
latest
en
0.892788
https://nbviewer.org/github/duarteocarmo/technological_capabilities/blob/master/notebooks/4_1_Macro%20Level%20Analysis.ipynb
1,656,859,596,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656104244535.68/warc/CC-MAIN-20220703134535-20220703164535-00565.warc.gz
476,819,287
640,484
Macro Level Analysis: The evolution of the System with Time¶ In the first part of the analysis we will focus on how the global capabilities change with time. Let's start by importing all of the external libraries that will be useful during the analysis. In [1]: # python libraries from py2neo import Graph import numpy as np from pandas import DataFrame import itertools import matplotlib.pyplot as plt import seaborn as sns import json import math import pandas as pd import plotly import plotly.graph_objs as go import qgrid from scipy import stats, spatial from sklearn.cluster.bicluster import SpectralBiclustering import operator from IPython.display import display, HTML from matplotlib.colors import ListedColormap # connection to Neo4j local_connection_url = "http://localhost:7474" connection_to_graph = Graph(local_connection_url) # plotly credentials Total database matrix¶ We start by geeting all the feedstock, processing technology and output terms. In [2]: f_terms = list(set(DataFrame(connection_to_graph.data('MATCH (a:Asset)-[:CONTAINS]->(fs:Feedstock) RETURN fs.term, count(a)')).as_matrix()[:, 1])) o_terms = list(set(DataFrame(connection_to_graph.data('MATCH (a:Asset)-[:CONTAINS]->(fs:Output) RETURN fs.term, count(a)')).as_matrix()[:, 1])) pt_terms = list(set(DataFrame(connection_to_graph.data('MATCH (a:Asset)-[:CONTAINS]->(fs:ProcessingTech) RETURN fs.term, count(a)')).as_matrix()[:, 1])) bbo = list(f_terms + pt_terms + o_terms) print 'Number of terms:', len(bbo) axis_names = bbo print axis_names Number of terms: 352 [u'vegetable oil', u'recycled ethanol', u'woodwaste/bagasse', u'natural gas', u'various grasses', u'nonedible oils', u'animal fats', u'paper', u'ponderosa pine', u'grass seed', u'milo', u'osb', u'paper waste', u'corn cob', u'logging residues', u'willow', u'crop waste', u'firewood', u'jatropha', u'sugarcane bagasse', u'sugar', u'durum', u'wood', u'municipal solid waste', u'banna grass', u'waste oil', u'cassava pulp', u'barley', u'mixed oilseeds', u'wood fuel', u'starch', u'white grease', u'coniferous wood', u'corn oil', u'cellulosic sugars', u'vegetable waste', u'corn stalks', u'yellow grease', u'molasses', u'hybrid poplar', u'food waste', u'citrus residues', u'wood chips', u'spent sulphite liquor feedstock', u'palm, rapeseed oil, waste fat', u'cotton residue', u'industrial waste', u'deciduous forests', u'corn stover', u'cereals/sugar', u'waste fat', u'waste vegetable oil', u'tallow', u'sewage', u'poultry fat', u'grain', u'poplar/energy woods', u'biogas from municipal wastewater treatment facility digesters', u'loblolly pine', u'stump material', u'husk', u'palm', u'sulfite spent liquor from spruce wood pulping', u'switchgrass', u'bark', u'fall rye', u'agriculture', u'textiles', u'animal waste', u'cobs', u'forest residues', u'grass clippings', u'fog', u'plywood', u'dry biomass', u'southern yellow pine', u'trap grease', u'beef tallow', u'syngas from gasifier', u'animal manure', u'wheat', u'organic waste', u'oil plants', u'corn', u'cellulose', u'dung', u'agricultural waste', u'elephant grass', u'rice straw', u'biogas from landfills', u'mixed grass', u'waste water sludge', u'poppy seed', u'algae', u'soft white spring wheat', u'barley straw', u'Giant reed', u'rice hulls', u'canola oil', u'msw', u'digester gas', u'seeds', u'mixed prairie grass', u'fat products', u'distillers grain', u'woody and agricultural by-products', u'sugar beet', u'mixed biomass', u'corn stover, cobs', u'brewery waste', u'camelina', u'cellulosic biomass', u'multi feedstock', u'plastics', u'palm stearin', u'white pine', u'straw', u'water hyacinth', u'mixedwood', u'hogs', u'beverage waste', u'corn sugar', u'sugar base', u'soy', u'woody biomass', u'maple', u'wheat straw', u'jathropa', u'medium density fiberboard', u'biodegradable waste', u'macroalgae', u'hardwood', u'feed wheat', u'waste', u'soy oil', u'hog fuel', u'pal waste', u'grass straw and corn stalks', u'eucalyptus', u'agricultural biomass', u'glycerin', u'used cooking oil', u'hydrous ethanol', u'seed oil', u'shrub willow', u'juncea oil', u'grains', u'syngas', u'barley starch', u'particle board', u'energy crops', u'slash pine', u'lumber', u'oilseeds', u'Decommissioned electricity poles and railway ties', u'miscanthus', u'agricultural residues', u'grass', u'sorghum', u'virgin oil', u'cps wheat', u'rapeseed oil', u'cooking oil', u'natural fats', u'red pine', u'radiate pine', u'cheese whey', u'manure', u'corn/barley', u'free fatty acid', u'animal residue', u'yard trimmings', u'stover', u'biogenic waste oils', u'triticale', u'bagasse/cane trash', u'mixed cellulose', u'crude glycerine', u'brown grease', u'waste water', u'garden waste', u'wood processing residues', u'yeast', u'oak', u'pine', u'douglas fir', u'forestry', u'bagasse', u'rice straw, corn stalks', u'steel waste gas (co)', u'energy grasses', u'whey', u'lignocellulosics', u'wood waste', u'juniper', u'mixed biomass, natural gas', u'sugarcane', u'urban wood', u'flaxseed oil', u'residues', u'cotton gin trash', u'rapeseed', u'arundo donax', u'dedicated energy crops', u'black liquor', u'non-food grade canola oil', u'corn/milo', u'beets', u'lemna', u'sugarcane trash', u'refined vegetable oil', u'poplar', u'palm oil', u'sawdust', u'blend', u'non-cellulosic portions of sfw', u'woodchips', u'algae transesterification', u'hydrochloric acid', u'landfill gas', u'consolidated bioprocessing', u'methane recovery', u'hydroprocessing', u'solar conversion', u'biomass combustion', u'gasification', u'enzymatic hydrolysis', u'biogas productions', u'chemprocessing', u'oil extraction', u'bioenergy generation', u'coking', u'orc process', u'scrubbers', u'cbp', u'microwave', u'btl - syngas', u'bioforming', u'algal oil extraction', u'blg', u'methanol synthesis, mtg catalysis', u'biomass gasification', u'black liquor gasification', u'grate furnaces', u'algae burning/transesterifcation', u'steam reform/ft', u'biochemical conversions', u'catalysis', u'alcoholic fermentation', u'fixed bed gasifiers', u'fischer-tropsch', u'hydrotreating', u'transesterification', u'algae oil extraction', u'gas cleaning', u'pyrolysis oil', u'algae fermentation', u'fast pyrolysis', u'charring', u'tbd', u'advanced fermentation', u'acid hydrolysis', u'ft microchannel reactor', u'plant cell culture', u' aerobic digestion', u'pyrolysis', u'liquefaction', u'gasification-fermentation', u'hydrolysis', u'algae oil hydrotreating', u'solvents', u'pressing', u'metathesis', u'thermochemical conversions', u'fluidized bed furnaces', u' anaerobic digestion', u'ti gasification', u'fermentation', u'renewable liquefied natural gas', u'furanics', u'biodme', u'chems', u'cellulosic heating oil', u'chems, fibers', u'succinic acid', u'syng', u'succinic acid, polylutylene succinate', u'nutriceuticals', u'ethanol blending', u'succinic acid/ bdo', u'pellets', u'cellulosic naphtha', u'drop-in fuels', u'bioresin', u'adipic acid', u'bioplastic', u'biobutanol', u'ester renewable diesel', u'eth from bagasse', u'renewable compressed natural gas', u'renewable jet', u'methanol/then ethanol', u'renewable electricity', u'renewable diesel', u'renewable gasoline', u'celleth from bagasse', u'methanol', u'biogas', u'lpg', u'cellulosic ethanol', u'pla from cassava', u'biodieseliesel', u'gasoline', u'butanol', u'renewable heating oil', u'biopolymers', u'advanced biofuel', u'biofene', u'bio-oil', u'naphtha', u'biofuels from algae', u'biogasoline', u'cellulosic renewable gasoline', u'electricity from biomass', u'biodiesel', u'cellulosic biofuel', u'cellulosic renewable gasoline blendstock', u'sng', u'celleth from wheat straw', u'bioplymers from corn, potatoes, tapioca', u'ethanolanol', u'renewable chemicals', u'ethanol', u'cellulosic diesel', u'biodiesel bioethanol', u'papaya to fatty acids', u'rdif', u'polyethylene', u'bioethanol', u'biofuels for transport', u'mixed alcohol fuels', u'renewable jet fuel', u'biodiesel blending', u'renewable fuel', u'pdo', u'biobdo', u'bdo', u'renewable diesel from animal fats, veggie oils', u'renewable oils', u'fatty acid ethyl ester', u'bioieselethanolanol', u'biomass-based diesel'] /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:1: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:2: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:3: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. We create a function that return the capability matrix of the whole database. In [3]: def get_total_matrix(normalization): # define queries # non intersecting part q1 = """ MATCH (a:Asset)-[:CONTAINS]->(fs:Feedstock) MATCH (a:Asset)-[:CONTAINS]->(out:Output) MATCH (a:Asset)-[:CONTAINS]->(pt:ProcessingTech) RETURN fs.term, pt.term, out.term, count(a) """ process_variables = ['Feedstock', 'Output', 'ProcessingTech'] # interesecting part q2 = """ MATCH (a:Asset)-[:CONTAINS]->(fs:{}) MATCH (a:Asset)-[:CONTAINS]->(t:{}) WHERE fs<>t RETURN fs.term, t.term, count(a) """ # total assets of year q3 = """ MATCH (n:Asset) RETURN count(n) """ # treat incoming data total_documents = DataFrame(connection_to_graph.data(q3)).as_matrix()[0][0] # get data data_q1 = DataFrame(connection_to_graph.data(q1)).as_matrix() # create matrix total_matrix = np.zeros([len(axis_names), len(axis_names)]) # for no intersections data for row in data_q1: # the last column is the frequency (count) frequency = row[0] indexes = [axis_names.index(element) for element in row[1::]] # add frequency value to matrix position not inter for pair in itertools.combinations(indexes, 2): total_matrix[pair[0], pair[1]] += frequency total_matrix[pair[1], pair[0]] += frequency # for intersecting data for category in process_variables: process_data = DataFrame(connection_to_graph.data(q2.format(category, category))).as_matrix() for row in process_data: frequency = row[0] indexes = [axis_names.index(element) for element in row[1::]] # add frequency value to matrix position inter for pair in itertools.combinations(indexes, 2): total_matrix[pair[0], pair[1]] += frequency / 2 # Divided by two because query not optimized total_matrix[pair[1], pair[0]] += frequency / 2 # Divided by two because query not optimized # normalize norm_total_matrix = total_matrix / total_documents # dynamic return if normalization == True: return norm_total_matrix else: Let us visualize the normalized and non normalized versions. We create a function that gives borders to our graphs. In [4]: def borders(width, color, size=get_total_matrix(normalization=False).shape[1]): plt.axhline(y=0, color='k',linewidth=width) plt.axhline(y=size, color=color,linewidth=width) plt.axvline(x=0, color='k',linewidth=width) plt.axvline(x=size, color=color,linewidth=width) /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:28: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:48: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. And we plot. In [5]: ## call functions colors = 'binary' year_in_focus = 2016 # create a subplot plt.subplots(2,1,figsize=(17,17)) # first heatmap plt.subplot(121) vmax = 1000 sns.heatmap(get_total_matrix(normalization=False) , cmap=colors, cbar=True,cbar_kws={"shrink": .2}, square=True, xticklabels=False, yticklabels=False, vmax=vmax) borders(1.5, 'k') plt.title('Capability Matrix Absolute') # second heatmap plt.subplot(122) vmax = 0.1 sns.heatmap(get_total_matrix(normalization=True) , cmap=colors, cbar=True,cbar_kws={"shrink": .2}, square=True, xticklabels=False, yticklabels=False, vmax=vmax) borders(1.5, 'k') plt.title('Capability Matrix Normalized') plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:28: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:48: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. Total database matrix: clustered¶ In [6]: whole_database = get_total_matrix(normalization=True) a = sns.clustermap(whole_database, figsize=(12, 12), xticklabels = False, yticklabels=False, cmap='binary', square=True) borders(1.5, 'k') plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:28: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:48: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/seaborn/matrix.py:603: ClusterWarning: scipy.cluster: The symmetric non-negative hollow observation matrix looks suspiciously like an uncondensed distance matrix In [7]: cluster_order = [] for i in a.dendrogram_row.reordered_ind: cluster_order.append(axis_names[i]) print 'Extract of cluster order:' cluster_order[50:70] Extract of cluster order: Out[7]: [u'sugarcane bagasse', u'wood chips', u'pine', u'agricultural residues', u'rice straw', u'wheat straw', u'blend', u'succinic acid', u'biobutanol', u'oil extraction', u'microwave', u'natural gas', u'pellets', u'barley', u'poplar', u'sugar beet', u'grains', u'acid hydrolysis', u'consolidated bioprocessing', u'cbp'] 1. Characterisation of Years ¶ 1.1. Years in the database ¶ Not all years in the Neo4j database contain technological assets. For this reason, two lists will be created. A completely chronological one and a database one. In [8]: # query years years_available_q = """ MATCH (n:Asset) WITH n.year as YEAR RETURN YEAR, count(YEAR) ORDER BY YEAR ASC """ # create a list with the years where records exist years_available = DataFrame(connection_to_graph.data(years_available_q)).as_matrix()[:, 0][:-1] years_available = [int(year) for year in years_available] # create a pure range list first_year = int(years_available[0]) last_year = int(years_available[-1]) real_years = range(first_year, last_year + 1, 1) # give information print 'The database list starts in {}, ends in {} and contains {} years.'.format(years_available[0], years_available[-1], len(years_available)) print 'The real list starts in {}, ends in {} and contains {} years.'.format(real_years[0], real_years[-1], len(real_years)) The database list starts in 1938, ends in 2019 and contains 38 years. The real list starts in 1938, ends in 2019 and contains 82 years. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:8: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. Now that we have all of the years available, we can start building the technological capability matrixes. 1.2. Capability Matrixes of years ¶ 1.2.1. Getting the labels ¶ The final list of terms has 352 terms. 1.2.2. Function ¶ We start by creating a function that given a certain year, returns the year's capability matrix. In [9]: def get_year_matrix(year, normalization=True): # define queries # non intersecting part q1 = """ MATCH (a:Asset)-[:CONTAINS]->(fs:Feedstock) MATCH (a:Asset)-[:CONTAINS]->(out:Output) MATCH (a:Asset)-[:CONTAINS]->(pt:ProcessingTech) WHERE a.year = "{}" RETURN fs.term, pt.term, out.term, count(a) """.format(year) process_variables = ['Feedstock', 'Output', 'ProcessingTech'] # interesecting part q2 = """ MATCH (a:Asset)-[:CONTAINS]->(fs:{}) MATCH (a:Asset)-[:CONTAINS]->(t:{}) WHERE fs<>t AND a.year = "{}" RETURN fs.term, t.term, count(a) """ # total assets of year q3 = """ MATCH (n:Asset) WITH n.year as YEAR RETURN YEAR, count(YEAR) ORDER BY YEAR ASC """ # treat incoming data raw_data_q3 = DataFrame(connection_to_graph.data(q3)).as_matrix() index_of_year = list(raw_data_q3[:, 0]).index('{}'.format(year)) total_documents = raw_data_q3[index_of_year, 1] # get data data_q1 = DataFrame(connection_to_graph.data(q1)).as_matrix() # create matrix year_matrix = np.zeros([len(axis_names), len(axis_names)]) # for no intersections data for row in data_q1: # the last column is the frequency (count) frequency = row[0] indexes = [axis_names.index(element) for element in row[1::]] # add frequency value to matrix position not inter for pair in itertools.combinations(indexes, 2): year_matrix[pair[0], pair[1]] += frequency year_matrix[pair[1], pair[0]] += frequency # for intersecting data for category in process_variables: process_data = DataFrame(connection_to_graph.data(q2.format(category, category, year))).as_matrix() for row in process_data: frequency = row[0] indexes = [axis_names.index(element) for element in row[1::]] # add frequency value to matrix position inter for pair in itertools.combinations(indexes, 2): year_matrix[pair[0], pair[1]] += frequency / 2 # Divided by two because query not optimized year_matrix[pair[1], pair[0]] += frequency / 2 # Divided by two because query not optimized # normalize norm_year_matrix = year_matrix / total_documents # dynamic return if normalization == True: return norm_year_matrix else: return year_matrix We finally test our function with the year 2016. In [10]: year = 2017 print 'The matrix from {} has shape {} a max value of {}, a min value of {} and a mean of {}.'.format(year, get_year_matrix(year).shape, np.amax(get_year_matrix(year)), np.amin(get_year_matrix(year)), np.mean(get_year_matrix(year))) /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:36: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:53: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. The matrix from 2017 has shape (352, 352) a max value of 0.229850746269, a min value of 0.0 and a mean of 0.000211213110583. Let us print the capability matrices of 2016 normalized and absolute versions. In [11]: ## call functions colors = 'binary' vmin = 0.0000 vmax = 0.05 year_in_focus = 2016 # create a subplot plt.subplots(2,1,figsize=(17,17)) # first heatmap plt.subplot(121) sns.heatmap(get_year_matrix(year_in_focus, normalization=False) , cmap=colors, cbar=True,cbar_kws={"shrink": .2}, square=True, xticklabels=False, yticklabels=False) borders(1.5, 'k') plt.title('Capability Matrix Absolute: {}'.format(year_in_focus)) # second heatmap plt.subplot(122) sns.heatmap(get_year_matrix(year_in_focus, normalization=True) , cmap=colors, cbar=True,cbar_kws={"shrink": .2}, square=True, xticklabels=False, yticklabels=False, vmin=vmin, vmax=vmax) borders(1.5, 'k') plt.title('Capability Matrix Normalized: {}'.format(year_in_focus)) plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:36: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:53: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. In [12]: ## call functions color1 = 'Blues' color3 = 'Reds' rwhite = ListedColormap(['white', 'red']) gwhite = ListedColormap(['white', 'green']) blwhite = ListedColormap(['white', 'blue']) bwhite = ListedColormap(['white', 'grey']) year_in_focus = 2017 graph_holder = 0.001 original = get_year_matrix(year_in_focus, normalization=False) threshold = len(f_terms) threshold = len(f_terms) + len(pt_terms) plt.subplots(1,1,figsize=(9, 9)) plt.subplot(111) sns.heatmap(original, cmap=bwhite, center=0.001, cbar=None, square=True, xticklabels=False, yticklabels=False) borders(1.5, 'k') plt.title('Capability Matrix Absolute: {}'.format(year_in_focus)) plt.savefig("Capability_Matrix.png", dpi=300, quality=100) plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:36: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:53: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. In [13]: ## call functions colors = 'binary' year_in_focus = 2017 # create a subplot plt.subplots(1,1,figsize=(9, 9)) plt.subplot(111) sns.heatmap(get_year_matrix(year_in_focus, normalization=True) , cmap=colors, cbar=True,cbar_kws={"shrink": .2}, square=True, xticklabels=False, yticklabels=False, vmin=0.00, vmax=0.05) borders(1.5, 'k') plt.title('Capability Matrix Normalized: {}'.format(year_in_focus)) plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:36: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:53: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. 1.3. Year profiles ¶ In order to analyse the correlation of the years between themselves, we will need to transform each year matrix into a list. Since the matrix is symmetrical, we will only need the upper triangle. For control purposes, we have designed our own upper triangulization matrix. In [14]: def get_list_from(matrix): only_valuable = [] extension = 1 for row_number in range(matrix.shape[0]): only_valuable.append(matrix[row_number, extension:matrix.shape[0]].tolist()) # numpy functions keep 0s so I hard coded it. extension += 1 return [element for column in only_valuable for element in column ] Let us print the capability lists of two example years. In [15]: # apply functions to both countries a_list = get_list_from(get_year_matrix(2012, normalization=True)) b_list = get_list_from(get_year_matrix(2013, normalization=True)) # create a matrix where each row is a list of a country corelation = np.vstack((a_list, b_list)) print corelation.shape good_cols = [i for i in range(corelation.shape[1]) if np.sum(corelation[:, i]) != 0] good_corelation = corelation[:, good_cols] print good_corelation.shape # plot the matrix plt.subplots(1,1,figsize=(20, 5)) plt.subplot(111) sns.heatmap(good_corelation,cmap=ListedColormap(['white', 'black']), center=0.00000001, cbar=None, square=False, yticklabels=['2012', '2013'], xticklabels=False) plt.yticks(rotation=0) plt.title('Year Capability List Visualization', size=15) plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:36: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:53: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. (2, 61776) (2, 3700) It is already apparent that these two consecutive years are highly correlated. 2. Year Correlation Matrix ¶ 2.1. Considerations ¶ As previously done with countries, a year correlation matrix will be built. We first define the scope of the matrix, by defining which years will be analyzed. In [16]: number_of_years = len(years_available) years_in_matrix = years_available years_correlation = np.zeros([number_of_years, number_of_years]) print years_in_matrix [1938, 1975, 1980, 1981, 1983, 1985, 1986, 1988, 1989, 1990, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019] By looping over each year and calculating its capability list, we create a correlation matrix. We create a ductionnary where every key is a year, and its value the capability list of that same year. We do this to reduce memory: In [17]: year_capability_dictionnary = {} for year in years_in_matrix: year_capability_dictionnary[year] = get_list_from(get_year_matrix(year, normalization=True)) /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:36: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:53: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. In [18]: # for every year A for row in range(number_of_years): year_1_list = year_capability_dictionnary[years_in_matrix[row]] # get its capability list # for every year B for column in range(number_of_years): year_2_list = year_capability_dictionnary[years_in_matrix[column]] # get its capability list years_correlation[row, column] = stats.pearsonr(year_1_list, year_2_list)[0] # calculate the correlation between the two and place it in the matrix /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/scipy/stats/stats.py:3038: RuntimeWarning: invalid value encountered in double_scalars We now print the correlation matrix. In [19]: plt.subplots(1,1,figsize=(9, 9)) plt.subplot(111) sns.heatmap(years_correlation,square=True, cbar=True,cbar_kws={"shrink": .2}, yticklabels=years_in_matrix, xticklabels=years_in_matrix) plt.title('Years Correlation Matrix: Unordered', size=13) plt.show() There seems to be a lot of data missing. Let's plot the amount of records in our databse over time to get a better sense on how to approach the problem. In [20]: # get all of the data data = DataFrame(connection_to_graph.data(years_available_q)).as_matrix() raw = [int(a) for a in data[:-1, 0]] timeline = range(min(raw), max(raw)) qtties = [] # build a timeline and number of records. for year in timeline: if year not in raw: qtties.append(0) else: idx = list(data[:, 0]).index(str(year)) qtties.append(data[idx, 1]) # re arrange it amountOfRecords = np.column_stack((timeline, qtties)) # plot the graph plt.style.use('seaborn-darkgrid') plt.subplots(1,1,figsize=(16, 5)) plt.subplot(111) plt.title("Number of assets over time") plt.xlabel("Year") plt.ylabel("Number of Available assets") plt.plot(timeline, qtties) plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:2: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. 2.2. Final Year Correlation Matrix ¶ To counteract the fact that our dataset is not uniformily distributed across the years, we will only consider the last 15 years. [2004-2018] In [21]: number_of_years = 22 years_in_matrix = years_available[:-1][-number_of_years:] years_correlation = np.zeros([number_of_years, number_of_years]) We now rebuild and plot the heatmap of correlations. In [22]: # create the matrix again. for row in range(number_of_years): year_1_list = year_capability_dictionnary[years_in_matrix[row]] for column in range(number_of_years): year_2_list = year_capability_dictionnary[years_in_matrix[column]] years_correlation[row, column] = stats.pearsonr(year_1_list, year_2_list)[0] # print it plt.subplots(1,1,figsize=(8, 8)) plt.subplot(111) sns.heatmap(years_correlation, cbar=True, cbar_kws={"shrink": .5},square=True, yticklabels=years_in_matrix, xticklabels=years_in_matrix) plt.title('Years Correlation Matrix: Chronologically Ordered, last {} years'.format(number_of_years), size=13) plt.savefig("Year_Correlation.png", dpi=300, quality=100) plt.show() 2.3. Year correlation matrix clustering ¶ Let us reorder the heatmap according to hierarchical clustering. In [23]: # plot the clustermap a = sns.clustermap(years_correlation, figsize=(8, 8), xticklabels = years_in_matrix, yticklabels=years_in_matrix) plt.savefig("Year_Correlation_Clustered.png", dpi=300, quality=100) plt.show() 3. Correlation of years over time ¶ Let us see how related is each year in our matrix with the one before it. In this way we might more easily detect discripancies. In [24]: # remove first year corr_with_pre = [] # iterate years and see their correlation row = 1 col = 0 corr_with_pre.append(years_correlation[row, col]) row = row + 1 col = col + 1 # plot plt.subplots(1,1,figsize=(15,7)) pal = sns.color_palette("Reds", len(data)) sns.barplot(np.arange(len(corr_with_pre)), corr_with_pre, palette=np.array(pal[::-1])[np.asarray(corr_with_pre).argsort().argsort()] ) plt.title('Correlation of year with previous year') plt.ylabel('Pearson Correlation Index') plt.show() Some years, such as 2006 or 2007 appear to have very low correlations with the years after. There seems to be an overall tendency of augmenting correlation with the years. 4. Research terms over time ¶ The following part of the analysis wil focus on how certain process variables (Feedstocks, Processing Technologies and Outputs) evolve over time. This can help in answering questions such as for example: • Is the focus on a certain processing technology constant over time? • Is this evolution correlated with other external factors? Let's start by creating a function such as: f(term, type of process variable) = [array with the number of records containing the term in each year] In [25]: from __future__ import division def get_records_of(startYear, endYear, term, process_type): # make query yearRangeQuery = """ MATCH (a:Asset)-[:CONTAINS]->(fs:{}) WHERE fs.term = "{}" AND (toInteger(a.year)>={} AND toInteger(a.year)<={}) AND NOT a.year = "Null" RETURN a.year, count(a) ORDER BY a.year """.format(process_type, term, startYear, endYear) # extract matrix rawQuery = DataFrame(connection_to_graph.data(yearRangeQuery)).as_matrix() # create matrix to store years, docs and total docs normalTimeline = np.arange(startYear, endYear + 1) completeMatrix = np.transpose(np.vstack((normalTimeline, normalTimeline, normalTimeline, normalTimeline))) completeMatrix[:, 1::] = 0 # add number of docs found by query to matrix for i in range(len(rawQuery[:, 0])): for j in range(len(completeMatrix[:, 0])): if int(rawQuery[i, 0]) == completeMatrix[j, 0]: completeMatrix[j, 1] = rawQuery[i, 1] # add total number of docs in that year to matrix for i in range(len(completeMatrix[:, 0])): for j in range(len(amountOfRecords[:, 0])): if completeMatrix[i, 0] == amountOfRecords[j, 0]: completeMatrix[i, 2] = amountOfRecords[j, 1] # create a list of the normalized results normalizedRecords = [] for i in range(len(completeMatrix[:, 0])): if completeMatrix[i, 2] != 0: normalizedRecords.append(float(completeMatrix[i, 1])/float(completeMatrix[i, 2])) else: normalizedRecords.append(0) result = {} result['range'] = completeMatrix[:, 0].tolist() result['nominal'] = completeMatrix[:, 1].tolist() result['total'] = completeMatrix[:, 2].tolist() result['normalized'] = normalizedRecords return result Now that the function is built, we can plot virtually any evolution. 4.1. Evolution of output terms ¶ Let us see the evolution of records of biogas Vs. ethanol as an example. In [26]: listOfOutputs = ['biogas', 'ethanol', 'biodiesel'] start_year = 1990 end_year = 2017 # plot the graph plt.style.use('seaborn-darkgrid') plt.subplots(1,1,figsize=(16, 5)) plt.subplot(111) plt.title("Evolution of Records with focus on Output") plt.xlabel("Year") plt.ylabel("Normalized Quantity") for name in listOfOutputs: nameData = get_records_of(start_year,end_year,name, 'Output') plt.plot(nameData['range'], nameData['normalized'], label=name) plt.legend() plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:13: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. 4.2. Evolution of processing technology terms ¶ Let us develop the same procedure for some processing technologies. In [27]: listOfProcTech = ['fermentation','enzymatic hydrolysis','hydrolysis' ] start_year = 1990 end_year = 2017 # plot the graph plt.style.use('seaborn-darkgrid') plt.subplots(1,1,figsize=(16, 5)) plt.subplot(111) plt.title("Evolution of Records with focus on Processing Technologies") plt.xlabel("Year") plt.ylabel("Normalized Quantity") for name in listOfProcTech: nameData = get_records_of(start_year,end_year,name, 'ProcessingTech') plt.plot(nameData['range'], nameData['normalized'], label=name) plt.legend() plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:13: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. 4.3. Evolution of feedstock terms ¶ Let us develop the same procedure for feedstock. In [28]: listOfFeed = ['sugar','wood','paper', 'algae', 'waste'] start_year = 1990 end_year = 2017 # plot the graph plt.style.use('seaborn-darkgrid') plt.subplots(1,1,figsize=(16, 5)) plt.subplot(111) plt.title("Evolution of Records with focus on Feedstocks") plt.xlabel("Year") plt.ylabel("Normalized Quantity") for name in listOfFeed: nameData = get_records_of(start_year,end_year,name, 'Feedstock') plt.plot(nameData['range'], nameData['normalized'], label=name) plt.legend() plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:13: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. 5. Contextual relationships ¶ 5.1. Oil ¶ We start by comparing the evolution of the outputs above studied with the average oil price per gallon found in the following website. We import the data, and convert monthly prices to yearly averages with the bellow code. In [29]: # get price per gallon in US dollars gallon = [] oil_years = list(set([int(e[0:4]) for e in oil_data[:, 0]]))[:-1] for year in oil_years: months = 0 for row in oil_data: if str(year) in row[0]: months += 1 gallon.append(average) # get price per barrel data oil_index = {'gallon':gallon, 'barrel':barrel} /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:2: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:16: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. Relationship Over Time Let us visualize how the evolution of the price of gas relates to the normalized quantity of assets over time, in a chronological graph. In [30]: # define subplots fig, ax1 = plt.subplots(figsize=(15,7)) listOfOutputs = ['biogas', 'bioplastic', 'butanol'] colors = ['b', 'y', 'g'] start_year = 1990 end_year = 2017 price_type = 'barrel' # first axis for position, outputName in enumerate(listOfOutputs): nameData = get_records_of(start_year, end_year, outputName, 'Output') ax1.plot(nameData['range'], nameData['normalized'], label=outputName, color=colors[position], ls='--', alpha=0.5) ax1.set_xlabel('Years') ax1.set_ylabel('Number of relative records') ax1.tick_params('y') ax1.set_title('Oil Price Vs. Asset Quantity') ax1.legend(loc=2, frameon=True) ax1.grid(False) # second axis ax2 = ax1.twinx() ax2.plot(oil_years,oil_index[price_type], color='r', label='Oil Price') ax2.set_ylabel('Price of {} of oil $US'.format(price_type), color='r') ax2.tick_params('y', colors='r') ax2.legend(loc=1, frameon=True) # expose plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:13: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. Scatter Visualization To study this relationship in a more in depth fashion we create a process that given a certain term gives us the relationship with the price of gas. In [31]: # define terms outPutToCompare = 'butanol' typeOfProcessVariable = 'Output' price_type = 'gallon' # get data data = get_records_of(1990, 2017, outPutToCompare, typeOfProcessVariable)['normalized'] # plot the figure fig, ax1 = plt.subplots(figsize=(15,7)) sns.regplot(np.asarray(oil_index[price_type]), np.asarray(data) ,fit_reg=True, marker="+", color = 'g') plt.title('Gas price relation with quantity of Assets: {}'.format(outPutToCompare)) plt.xlabel('Price of {} of oil in US$ in Year'.format(price_type)) plt.ylabel('Quantity of Asset {} in Year'.format(outPutToCompare)) plt.show() # get correlation indexes correlationIndexes = stats.pearsonr(np.asarray(oil_index[price_type]), np.asarray(get_records_of(1990, 2017, outPutToCompare, 'Output')['normalized'])) print 'Pearson Correlation Index: ', correlationIndexes[0] print 'P-value: ', correlationIndexes[1] /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:13: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. Pearson Correlation Index: 0.8445465111603638 P-value: 1.6031894575347735e-08 In the above graph each datapoint corresponds to a year. Biggest Positive Correlations In [32]: # query for data term_names_query = """ MATCH (a:Asset)-[:CONTAINS]->(fs:Output) WHERE (toInteger(a.year)>=1990 AND toInteger(a.year)<=2017) AND NOT a.year = "Null" RETURN fs.term, count(a) ORDER BY count(a) DESC""" # get data from past scripts oil_type = 'gallon' term_names = list(DataFrame(connection_to_graph.data(term_names_query)).as_matrix()[:, 1].tolist()) correlations = [] p_values = [] # for every term, get its correlation with the price of oil for term in term_names: data = get_records_of(1990, 2017, term, 'Output')['normalized'] correlations.append(stats.pearsonr(data, oil_index[oil_type])[0]) p_values.append(stats.pearsonr(data, oil_index[oil_type])[1]) # create a pandas dataframe for pretty printing. oilDataFrame = pd.DataFrame( {'Output Name': term_names, 'Pearson Correlation Index': correlations, 'P-value': p_values }) oilDataFrame = oilDataFrame.sort_values('Pearson Correlation Index', ascending=False) # print context print 'The relationship between relative number of documents and price of oil over time:' top = 10 # print data print 'TOP {}:'.format(top) display(oilDataFrame[:top]) /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:10: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:13: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. The relationship between relative number of documents and price of oil over time: TOP 10: Output Name P-value Pearson Correlation Index 7 butanol 1.603189e-08 0.844547 19 bioplastic 2.463734e-07 0.804599 1 biodiesel 7.978637e-07 0.784034 21 fatty acid ethyl ester 1.427601e-06 0.772960 3 bioethanol 2.862862e-05 0.704439 9 syng 3.649295e-05 0.697899 15 biobutanol 5.140385e-05 0.688369 8 cellulosic ethanol 1.301892e-04 0.660616 14 biopolymers 3.263515e-04 0.630094 Biggest Negative Correlations In [33]: # same approach but value negative correlations term_names_query = """ MATCH (a:Asset)-[:CONTAINS]->(fs:Output) WHERE (toInteger(a.year)>=1990 AND toInteger(a.year)<=2017) AND NOT a.year = "Null" RETURN fs.term, count(a) ORDER BY count(a) DESC""" oil_type = 'gallon' term_names = list(DataFrame(connection_to_graph.data(term_names_query)).as_matrix()[:, 1].tolist()) correlations = [] p_values = [] for term in term_names: data = get_records_of(1990, 2017, term, 'Output')['normalized'] correlations.append(stats.pearsonr(data, oil_index[oil_type])[0]) p_values.append(stats.pearsonr(data, oil_index[oil_type])[1]) oilDataFrame = pd.DataFrame( {'Output Name': term_names, 'Pearson Correlation Index': correlations, 'P-value': p_values }) oilDataFrame = oilDataFrame.sort_values('Pearson Correlation Index', ascending=False) print 'The relationship between relative number of documents and price of oil over time:' bottom = -10 print 'BOTTOM {}:'.format(bottom) display(oilDataFrame[bottom:]) /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:8: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:13: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. The relationship between relative number of documents and price of oil over time: BOTTOM -10: Output Name P-value Pearson Correlation Index 16 naphtha 0.677733 0.082145 44 biodiesel blending 0.683315 0.080645 45 ethanol blending 0.683315 0.080645 18 renewable diesel 0.716673 0.071767 11 renewable fuel 0.944557 0.013770 17 succinic acid 0.956893 0.010703 40 rdif 0.629618 -0.095276 34 electricity from biomass 0.456174 -0.146748 5 gasoline 0.371514 -0.175570 10 pellets 0.268436 -0.216520 5.2. Sugar ¶ In this part we will make the same analysis but taking an example of a feedstock: sugar. Data was obtained here. We start by importing the data. In [34]: sugar_data = pd.read_csv('Data/Sugar_Price.csv', delimiter=';', header=None).as_matrix() sugar = {} sugar['years'] = [int(e) for e in sugar_data[:, 0]] sugar['nominal'] = [e for e in sugar_data[:, 1]] sugar['real'] = [e for e in sugar_data[:, 2]] /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:1: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. Relationship Over Time Let us see the evolution of Sugar prices side by side with the evolution of certain feedstocks in our database. In [35]: # define subplots fig, ax1 = plt.subplots(figsize=(15,7)) feedstock_list = ['sugar', 'wood', 'sugarcane', 'sugar beet', 'cellulosic sugars'] colors = ['gold', 'mediumblue', 'm', 'green', 'k'] start_year = 1990 end_year = 2017 sugar_price_type = 'real' # first axis for position,feedstock in enumerate(feedstock_list): data = get_records_of(start_year, end_year, feedstock, 'Feedstock') ax1.plot(data['range'], data['normalized'], label=feedstock, ls='--', color=colors[position]) ax1.set_xlabel('Years') ax1.set_ylabel('Relative number of records') ax1.tick_params('y') ax1.set_title('Sugar Prices Vs. Asset Quantity') ax1.legend(loc=3, frameon=True) ax1.grid(False) # second axis ax2 = ax1.twinx() ax2.plot(sugar['years'], sugar[sugar_price_type], color='r', label='Sugar Price', ls='-') ax2.set_ylabel('Price per kilo of sugar in $US (inflation adjusted)', color='r') ax2.tick_params('y', colors='r') ax2.legend(loc=1, frameon=True) # expose plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:13: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. Scatter Example Let us see a scatter plot where each point is a year and the x and y axis correpond to the price of sugar and quantity of assets respectively. In [36]: outPutToCompare = 'sugarcane' typeOfProcessVariable = 'Feedstock' price_type = 'real' data = get_records_of(1990, 2017, outPutToCompare, typeOfProcessVariable)['normalized'] fig, ax1 = plt.subplots(figsize=(15,7)) sns.regplot(np.asarray(sugar[price_type]), np.asarray(data) ,fit_reg=True, marker="+", color = 'b') plt.title('Sugar price relation with quantity of Assets: {}'.format(outPutToCompare)) plt.xlabel('Price of sugar US$ per kilo in Year ({})'.format(price_type)) plt.ylabel('Quantity of Asset {} in Year'.format(outPutToCompare)) plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:13: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. Biggest Positive Correlations Which are the feedstocks who are more related to the price of sugar per kilo in what regards the number of records? In [37]: term_names_query = """ MATCH (a:Asset)-[:CONTAINS]->(fs:Feedstock) WHERE (toInteger(a.year)>=1990 AND toInteger(a.year)<=2017) AND NOT a.year = "Null" RETURN fs.term, count(a) ORDER BY count(a) DESC""" price_type = 'nominal' term_names = list(DataFrame(connection_to_graph.data(term_names_query)).as_matrix()[:, 1].tolist()) correlations = [] p_values = [] for term in term_names: data = get_records_of(1990, 2017, term, 'Feedstock')['normalized'] correlations.append(stats.pearsonr(data, sugar[price_type])[0]) p_values.append(stats.pearsonr(data, sugar[price_type])[1]) sugarDataframe = pd.DataFrame( {'Feedstock Name': term_names, 'Pearson Correlation Index': correlations, 'P-value': p_values }) sugarDataframe = sugarDataframe.sort_values('Pearson Correlation Index', ascending=False) print 'The relationship between relative number of documents and price per kilo of sugar:' top = 10 print 'TOP {}:'.format(top) display(sugarDataframe[:top]) /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:7: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:13: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. The relationship between relative number of documents and price per kilo of sugar: TOP 10: Feedstock Name P-value Pearson Correlation Index 26 sugarcane 6.074365e-07 0.789014 107 cellulosic sugars 1.263220e-06 0.775341 43 jatropha 1.521222e-06 0.771713 34 sorghum 3.083429e-06 0.757299 64 dry biomass 3.454736e-06 0.754884 75 beets 4.105286e-06 0.751165 99 dedicated energy crops 6.915371e-06 0.739525 1 algae 1.103076e-05 0.728562 100 hybrid poplar 1.490683e-05 0.721206 25 soy 2.631077e-05 0.706675 Biggest Negative Correlations In [38]: term_names_query = """ MATCH (a:Asset)-[:CONTAINS]->(fs:Feedstock) WHERE (toInteger(a.year)>=1990 AND toInteger(a.year)<=2017) AND NOT a.year = "Null" RETURN fs.term, count(a) ORDER BY count(a) DESC""" price_type = 'nominal' term_names = list(DataFrame(connection_to_graph.data(term_names_query)).as_matrix()[:, 1].tolist()) correlations = [] p_values = [] for term in term_names: data = get_records_of(1990, 2017, term, 'Feedstock')['normalized'] correlations.append(stats.pearsonr(data, sugar[price_type])[0]) p_values.append(stats.pearsonr(data, sugar[price_type])[1]) sugarDataframe = pd.DataFrame( {'Feedstock Name': term_names, 'Pearson Correlation Index': correlations, 'P-value': p_values }) sugarDataframe = sugarDataframe.sort_values('Pearson Correlation Index', ascending=False) print 'The relationship between relative number of documents and price per kilo of sugar:' bottom = -10 print 'Bottom {}:'.format(bottom * -1) display(sugarDataframe[bottom:]) /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:7: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:13: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. The relationship between relative number of documents and price per kilo of sugar: Bottom 10: Feedstock Name P-value Pearson Correlation Index 93 wood fuel 0.336664 -0.188530 72 waste oil 0.329527 -0.191282 136 particle board 0.289545 -0.207425 156 durum 0.279399 -0.211741 178 citrus residues 0.260452 -0.220080 123 beef tallow 0.240798 -0.229158 53 sawdust 0.223542 -0.237542 138 trap grease 0.212719 -0.243025 79 wood waste 0.210637 -0.244102 5 wood 0.137684 -0.287685 NON SERIES TIME ANALYSIS IS A LIMITATION. 6. Comparing Years ¶ In this part of the analysis the goal is two understand what exact capabilities differ from year to year. More exactly, how does one particular capability evolve over the course of two or more years. For example, if in year X1, Y1% of the assets related to sugar, what is the percentage Y2% in year X2? 6.1. Visualizing the differences ¶ Let us visualize two different years side by side. In [39]: ## call functions first_year = 2010 second_year = 2017 colors='binary' graph_holder = 0.005 fst_year_matrix = get_year_matrix(first_year, normalization=False) scnd_year_matrix = get_year_matrix(second_year, normalization=False) # create a subplot plt.subplots(2,1,figsize=(17,17)) # first heatmap plt.subplot(121) sns.heatmap(fst_year_matrix , cmap=colors, cbar=True,cbar_kws={"shrink": .2}, square=True, xticklabels=False, yticklabels=False, vmax=graph_holder) borders(1.5, 'k') plt.title('Capability Matrix: {}'.format(first_year)) # second heatmap plt.subplot(122) sns.heatmap(scnd_year_matrix , cmap=colors, cbar=True,cbar_kws={"shrink": .2}, square=True, xticklabels=False, yticklabels=False, vmax=graph_holder) borders(1.5, 'k') plt.title('Capability Matrix: {}'.format(second_year)) plt.show() /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:31: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:36: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead. /Users/duarteocarmo/.pyenv/versions/miniconda3-4.3.30/envs/tech-cap/lib/python2.7/site-packages/ipykernel_launcher.py:53: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
15,111
52,796
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2022-27
latest
en
0.478602
https://www.cs.purdue.edu/homes/dgleich/cs314-2016/julia/Chapter-2/Using-Julia.html
1,516,418,048,000,000,000
text/html
crawl-data/CC-MAIN-2018-05/segments/1516084888878.44/warc/CC-MAIN-20180120023744-20180120043744-00448.warc.gz
893,896,006
38,029
This text is adapted from Numerical Methods by Greenbaum and Chartier and is meant as a Julia-based analogue to their chapter on Matlab. Julia is a high-level programming language that can be used for any technical or numerical computing problem, but is especially well-suited to linear algebra computations. It is also a powerful general-purpose programming language in its own right. In this course, we think that juliabox.com is the best way to run Julia. This uses the jupyter notebook interface that enables us to insert commands and see their results. To use this, enter juliabox.com in your browser. (Note, there is an older juliabox.org website that serves the same purpose, but please use juliabox.com for CS314 at Purdue in Fall 2016.) You should see the following webpage. Once you do that, you'll get to the file browser. This allows you to create new notebooks (the New button at the upper-right...) or work through a directory structure. All the files are stored on Google's cloud and saved so that they'll be there if you return. You can upload files through the "Files" button at top. The Tutorial is placed by default in every new juliabox setup. (Only once though, if you delete it, I don't know how to get it back right now.) This is a good place to start. If you click on the tutorial, you'll get a list of notebooks. Once you click on a notebook (say the 00 - Start Tutorial.ipnb file), then you'll start a notebook! Actually, this file is a notebook itself, so we can type commands just the same way! These examples now follow your textbook. To create a new "cell", go to "Insert Cell Below" and you'll get your own area to type commands In [12]: 1+2*3 Out[12]: 7 In [13]: ans/4 Out[13]: 1.75 Section 2.2 - Vectors¶ At this point, we are going to switch back and show how to do the commands in the book In [91]: v = [1; 2; 3; 4] Out[91]: 4-element Array{Int64,1}: 1 2 3 4 In [92]: v = [1.; 2; 3; 4] Out[92]: 4-element Array{Float64,1}: 1.0 2.0 3.0 4.0 In [93]: w = [5 6 7 8] Out[93]: 1x4 Array{Int64,2}: 5 6 7 8 In [94]: w = map(Float64,[5 6 7 8]) # convert to Float64 Out[94]: 1x4 Array{Float64,2}: 5.0 6.0 7.0 8.0 In [95]: v[2] Out[95]: 2.0 In [96]: w[3] Out[96]: 7.0 In [97]: v+w LoadError: DimensionMismatch("dimensions must match") in promote_shape at operators.jl:211 in + at arraymath.jl:96 In [98]: v+w' Out[98]: 4x1 Array{Float64,2}: 6.0 8.0 10.0 12.0 In [99]: vec(v+w') Out[99]: 4-element Array{Float64,1}: 6.0 8.0 10.0 12.0 In [100]: v[1]+v[2]+v[3]+v[4] Out[100]: 10.0 In [101]: sumv = 0. for i=1:4 sumv = sumv+v[i]; end sumv Out[101]: 10.0 (Section 2.3) Getting help¶ In [4]: ?sum search: sum sum! sumabs summary sumabs2 sumabs! sum_kbn sumabs2! cumsum cumsum! Out[4]: sum(A, dims) Sum elements of an array over the given dimensions. sum(itr) Returns the sum of all elements in a collection. sum(f, itr) Sum the results of calling function f on each element of itr. David's note. Julia's documentation is currently not as good as Matlab's :( Section 2.4 Matrices¶ In [7]: A = [1 2 3; 4 5 6; 7 8 0] Out[7]: 3x3 Array{Int64,2}: 1 2 3 4 5 6 7 8 0 In [8]: b = [0; 1; 2] Out[8]: 3-element Array{Int64,1}: 0 1 2 In [9]: x = A\b Out[9]: 3-element Array{Float64,1}: 0.666667 -0.333333 -0.0 Show a full precision answer. This is slightly easier in Matlab. In [11]: for i=1:length(x) @printf("%.16e\n", x[i]) end 6.6666666666666663e-01 -3.3333333333333331e-01 -0.0000000000000000e+00 In [107]: x Out[107]: 3-element Array{Float64,1}: 0.666667 -0.333333 -0.0 In [108]: b - A*x Out[108]: 3-element Array{Float64,1}: 0.0 0.0 4.44089e-16 In [109]: #Section 2.6 Comments #Solve Ax = b x = A\b # this solves Ax=b and stores the result for x Out[109]: 3-element Array{Float64,1}: 0.666667 -0.333333 -0.0 Warning¶ We are still editing the section below this! In [114]: #2.8 Creating your own functions f(x) = x^2 + 2x Out[114]: f (generic function with 1 method) In [115]: f(5) Out[115]: 35 In [116]: map(f, 0:1:5) Out[116]: 6-element Array{Int64,1}: 0 3 8 15 24 35 In [117]: x -> x^2 + 2x #Anonymous function Out[117]: (anonymous function) In [118]: map(x -> x^2 + 2x, 0:1:5) Out[118]: 6-element Array{Int64,1}: 0 3 8 15 24 35 In [119]: #2.9 Printing x = 0:.5:2 print(x) 0.0:0.5:2.0 In [124]: @printf("x = %s" ,collect(x)) x = [0.0,0.5,1.0,1.5,2.0] In [125]: y = rand(5,3) println(" Score1 Score2 Score3 ") println("\$y") Score1 Score2 Score3 [0.32211092471714275 0.5075976888863105 0.916080364548784 0.2945150815604498 0.8095561821604225 0.038854828692170607 0.22095166747559625 0.7457964790998437 0.9383063722746723 0.5171533339610654 0.4778154207659637 0.03573940173467305 0.15285539705109175 0.7385150097727968 0.8777448778450923] In [126]: println(" x \t\t sqrt(x) \n=====================================") for x=1:5 @printf("%f \t\t %f\n",x, sqrt(x)) #formating string end x sqrt(x) ===================================== 1.000000 1.000000 2.000000 1.414214 3.000000 1.732051 4.000000 2.000000 5.000000 2.236068 In [ ]: #2.10 More Loops and Conditionals using Plots print("Enter initial xmin ") print("Enter initial xmax ") print("Enter tolerance ") while xmax - xmin > tolerance x = collect(xmin:(xmax-xmin)/100:xmax) y = map(x->x^2, x) plot!(x,y) print("Enter new value for xmin ") Enter initial xmin STDIN> 10 WARNING: No working GUI backend found for matplotlib. Enter new value for xmin
1,980
5,499
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.34375
3
CC-MAIN-2018-05
latest
en
0.898543
https://www.tutorialspoint.com/how-to-match-dates-by-month-and-year-only-in-excel
1,695,556,814,000,000,000
text/html
crawl-data/CC-MAIN-2023-40/segments/1695233506632.31/warc/CC-MAIN-20230924091344-20230924121344-00194.warc.gz
1,164,368,407
18,809
# How to match dates by month and year only in Excel? Sometimes we need to find the similar dates in dataset of multiple dates to identify the frequency of occurrence of any activity in a month/year or day. For this we have a formula in excel using which we can instantly find out the dates falling under a specific year and/or month even for a single day. We will be using the following two methods to identify the dates by month and year only from a dataset. • Comparing adjacent dates by month and year only through a formula • Finding dates by month and year only through Conditional Formatting ## Finding Dates by Month and Year only through a Formula Step 1  We have taken the following sample data having 3 columns as following − • Date 1  Group1 of some random dates • Date 2  Group2 of some random dates • Comparison  Formula will be mentioned here to identify whether the adjacent dates match with month and year or not. Step 2  Under the column C enter the following formula at C2 cell and press enter. =MONTH(A2)&YEAR(A2)=MONTH(B2)&YEAR(B2) This formula will compare the month and year of adjacent cells of Date 1 and Date 2 columns. Step 3  Drag the formula till the last row. Now following will be the output. Here, True is returned when month and year of the adjacent dates match otherwise the system returns False. Formula Syntax Description Argument Description MONTH(serial_number) Against serial number, mention a cell address with date value and it will return the month of the respective date. YEAR(serial_number) Against serial number, mention a cell address with date value and it will return the year of the respective date. ## Finding Dates by Month and Year only through Conditional Formatting Using this formula we can search and highlight all the dates having similar month and year as compared to a specific date. Let’s see how this can be achieved. Step 1  Following is the sample data where we have four columns as following − • Date 1  Date set 1 • Date 2  Date set 2 • Date 3  Date set 3 • Date to be compared  Specific date which will be compared. Step 2  Now select the date sets with which specific date needs to be compared and go to Home / Conditional Formatting / New Rule. Step 3  In the New Formatting Rule dialog box, select Use a formula to determine which cells to format under select a rule type. Step 4  Enter the following formula under the field, Format values where this formula is true. =TEXT(A2,"myyyy")=TEXT(DATE(1999,30,8),"myyyy") Step 5  After entering the formula click Format against Preview field. Step 6  On click of Format, following dialog box will open. Here go to the ‘Fill’ tab select the color to highlight the matched dates as shown below and click OK. The preview of New rule will be as following. Here, click OK. Step 7 − The final output will be as following where all the dates having month and year similar to the selected date have been highlighted. ## Conclusion Please note that in the formula, A2 is the first cell of selected date range, 1999,8,30 is the given date we have compared with. Please change them based on your requirement. This method is more useful than the previous one as it compares the date with a set of dates instead of just one value. However, it depends completely on your requirement which ever suits you Updated on: 02-May-2023 385 Views
765
3,377
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.140625
3
CC-MAIN-2023-40
latest
en
0.868382
https://web2.0calc.com/questions/check-my-answer-please_1
1,596,857,121,000,000,000
text/html
crawl-data/CC-MAIN-2020-34/segments/1596439737238.53/warc/CC-MAIN-20200808021257-20200808051257-00064.warc.gz
545,402,486
6,838
+0 0 1239 2 +83 Over a 24-hour period, the tide in a harbor can be modeled by one period of a sinusoidal function. The tide measures 4.35 ft at midnight, rises to a high of 8.3 ft, falls to a low of 0.4 ft, and then rises to 4.35 ft by the next midnight. What is the equation for the sine function f(x), where x represents time in hours since the beginning of the 24-hour period, that models the situation? Feb 12, 2019 #1 +25992 0 Amplitude    Max   8.3 ft    min  0.4       means amplitude = 7.9/2 = 3.95    check Shited up to midline 4.35      check Period = 24    2pi/24 = 1/12   pi      check sine wave Looks good to me Feb 12, 2019 #2 +83 +1 thanks mate
235
671
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.6875
4
CC-MAIN-2020-34
latest
en
0.756486
http://www.slowry.com/index.php/phor/physics-of-racing-part-7/
1,560,656,440,000,000,000
text/html
crawl-data/CC-MAIN-2019-26/segments/1560627997533.62/warc/CC-MAIN-20190616022644-20190616044644-00206.warc.gz
325,329,686
13,679
# The Traction Budget This month, we introduce the traction budget. This is a way of thinking about the traction available for car control under various conditions. It can help you make decisions about driving style, the right line around a course, and diagnosing handling problems. We introduce a diagramming technique for visualizing the traction budget and combine this with a well-known visualization tool, the “circle of traction,” also known as the circle of friction. So this month’s article is about tools, conceptual and visual, for thinking about some aspects of the physics of racing. To introduce the traction budget, we first need to visualize a tyre in contact with the ground. Figure 1 shows how the bottom surface of a tyre might look if we could see that surface by looking down from above. In other words, this figure shows an imaginary “X-ray” view of the bottom surface of a tyre. For the rest of the discussion, we will always imagine that we view the tyre this way. From this point of view, “up” on the diagram corresponds to forward forces and motion of the tyre and the car, “down” corresponds to backward forces and motion, “left” corresponds to leftward forces and motion, and “right” on the diagram corresponds to rightward forces and motion. The figure shows a shaded, elliptical region, where the tyre presses against the ground. All the interaction between the tyre and the ground takes place in this contact patch: that part of the tyre that touches the ground. As the tyre rolls, one bunch of tyre molecules after another move into the contact patch. But the patch itself more-or-less keeps the same shape, size, and position relative to the axis of rotation of the tyre and the car as a whole. We can use this fact to develop a simplified view of the interaction between tyre and ground. This simplified view lets us quickly and easily do approximate calculations good within a few percent. (A full-blown, mathematical analysis requires tyre coordinates that roll with the tyre, ground coordinates fixed on the ground, car coordinates fixed to the car, and many complicated equations relating these coordinate systems; the last few percent of accuracy in a mathematical model of tyre-ground interaction involves a great deal more complexity.) You will recall that forces on the tyre from the ground are required to make a car change either its speed of motion or its direction of motion. Thinking of the X-ray vision picture, forces pointing up are required to make the car accelerate, forces pointing down are required to make it brake, and forces pointing right and left are required to make the car turn. Consider forward acceleration, for a moment. The engine applies a torque to the axle. This torque becomes a force, pointing backwards (down, on the diagram), that the tyre applies to the ground. By Newton’s third law, the ground applies an equal and opposite force, therefore pointing forward (up), on the contact patch. This force is transmitted back to the car, accelerating it forward. It is easy to get confused with all this backward and forward action and reaction. Remember to think only about the forces on the tyre and to ignore the forces on the ground, which point the opposite way. You will also recall that a tyre has a limited ability to stick to the ground. Apply a force that is too large, and the tyre slides. The maximum force that a tyre can take depends on the weight applied to the tyre: F µW where F is the force on the tyre, µ is the coefficient of adhesion (and depends on tyre compound, ground characteristics, temperature, humidity, phase of the moon, etc.), and W is the weight or load on the tyre. By Newton’s second law, the weight on the tyre depends on the fraction of the car’s mass that the tyre must support and the acceleration of gravity, g = 32.1 ft / sec2. The fraction of the car’s mass that the tyre must support depends on geometrical factors such as the wheelbase and the height of the centre of gravity. It also depends on the acceleration of the car, which completely accounts for weight transfer. It is critical to separate the geometrical, or kinematic, aspects of weight transfer from the mass of the car. Imagine two cars with the same geometry but different masses (weights). In a one g braking manoeuvre, the same fraction of each car’s total weight will be transferred to the front. In the example of Part 1 of this series, we calculated a 20% weight transfer during one g braking because the height of the CG was 20% of the wheelbase. This weight transfer will be the same 20% in a 3500 pound, stock Corvette as in a 2200 pound, tube-frame, Trans-Am Corvette so long as the geometry (wheelbase, CG height, etc.) of the two cars is the same. Although the actual weight, in pounds, will be different in the two cases, the fractions of the cars’ total weight will be equal. Separating kinematics from mass, then, we have for the weight W = f(a)mg where f(a) is the fraction of the car’s mass the tyre must support and also accounts for weight transfer, m is the car’s mass, and g is the acceleration of gravity. Finally, by Newton’s second law again, the acceleration of the tyre due to the force F applied to it is a = F / f(a)m We can now combine the expressions above to discover a fascinating fact: a = F / f(a)m  amax The maximum acceleration a tyre can take is µg, a constant, independent of the mass of the car! While the maximum force a tyre can take depends very much on the current vertical load or weight on the tyre, the acceleration of that tyre does not depend on the current weight. If a tyre can take one g before sliding, it can take it on a lightweight car as well as on a heavy car, and it can take it under load as well as when lightly loaded. We hinted at this fact in Part 2, but the analysis above hopefully gives some deeper insight into it. We note that amax being constant is only approximately true, because µ changes slightly as tyre load varies, but this is a second-order effect (covered in a later article). So, in an approximate way, we can consider the available acceleration from a tyre independently of details of weight transfer. The tyre will give you so many gees and that’s that. This is the essential idea of the traction budget. What you do with your budget is your affair. If you have a tyre that will give you one g, you can use it for accelerating, braking, cornering, or some combination, but you cannot use more than your budget or you will slide. The front-back component of the budget measures accelerating and braking, and the right-left component measures cornering acceleration. The front-back component, call it ay, combines with the left-right component, ax, not by adding, but by the Pythagorean formula: Rather than trying to deal with this formula, there is a convenient, visual representation of the traction budget in the circle of traction. Figure 2 shows the circle. It is oriented in the same way as the X-ray view of the contact patch, Figure 1, so that up is forward and right is rightward. The circular boundary represents the limits of the traction budget, and every point inside the circle represents a particular choice of how you spend your budget. A point near the top of the circle represents pure, forward acceleration, a point near the bottom represents pure braking. A point near the right boundary, with no up or down component, represents pure rightward cornering acceleration. Other points represent Pythagorean combinations of cornering and forward or backward acceleration. The beauty of this representation is that the effects of weight transfer are factored out. So the circle remains approximately the same no matter what the load on a tyre. In racing, of course, we try to spend our budget so as to stay as close to the limit, i.e. the circular boundary, as possible. In street driving, we try to stay well inside the limit so that we have lots of traction available to react to unforeseen circumstances. I have emphasized that the circle is only an approximate representation of the truth. It is probably close enough to make a computer driving simulation that feels right (I’m pretty sure that “Hard Drivin'” and other such games use it). As mentioned, tyre loads do cause slight, dynamic variations. Car characteristics also give rise to variations. Imagine a car with slippery tyres in the back and sticky tyres in the front. Such a car will tend to oversteer by sliding. Its traction budget will not look like a circle. Figure 3 gives an indication of what the traction budget for the whole car might look like (we have been discussing the budget of a single tyre up to this point, but the same notions apply to the whole car). In Figure 3, there is a large traction circle for the sticky front tyres and a small circle for the slippery rear tyres. Under acceleration, the slippery rears dominate the combined traction budget because of weight transfer. Under braking, the sticky fronts dominate. The combined traction budget looks something like an egg, flattened at top and wide in the middle. Under braking, the traction available for cornering is considerably greater than the traction available during acceleration because the sticky fronts are working. So, although this poorly handling car tends to oversteer by sliding the rear, it also tends to understeer during acceleration because the slippery rears will not follow the steering front tyres very effectively. The traction budget is a versatile and simple technique for analysing and visualizing car handling. The same technique can be applied to developing driver’s skills, planning the line around a course, and diagnosing handling problems.
2,017
9,698
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.734375
4
CC-MAIN-2019-26
latest
en
0.923149
http://thestargarden.co.uk/Quantum-entanglement.html
1,701,326,518,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100172.28/warc/CC-MAIN-20231130062948-20231130092948-00783.warc.gz
42,662,302
11,086
How We Came to Know the Cosmos: Light & Matter # Chapter 18. Quantum Entanglement ## 18.1 The collapse approach The Copenhagen interpretation or collapse approach to quantum mechanics was devised by Werner Heisenberg in the 1920s[1] and modified by the Italian physicists Giancarlo Ghirardi, Alberto Rimini, and Tullio Weber,[2,3] and the British physicist Roger Penrose.[4,5] The collapse approach states that the measurement of a quantum system invokes a ‘collapse’ of the quantum wave function from a superpositional state into a state that can be described classically, in accordance with Born’s rule[6] (discussed in Chapter 17). At first glance, the collapse approach appears to contradict Albert Einstein’s theory of special relativity (discussed in Book I), which states that nothing can travel faster than the speed of light.[7] This is because of an effect known as entanglement, a term coined by Erwin Schrödinger in 1935.[8] ## 18.2 Quantum entanglement Schrödinger stated, If two separated bodies, about which, individually, we have maximal knowledge, come into a situation in which they influence one another and then again separate themselves, then there regularly arises that which I just called entanglement [Verschränkung] of our knowledge of the two bodies...Our knowledge remains maximal, but at the end, if the bodies have again separated themselves, that knowledge does not again decompose into a logical sum of knowledge of the individual bodies.[9] Entanglement can be illustrated with examples of any observable property, such as position, momentum, or spin. Two entangled electrons, for example, must possess spins of opposite signs. Spin can be measured at any angle, but is usually described as being ‘up’ or ‘down’, or ‘left’ or ‘right’ when measured in horizontal or vertical planes. This means that measuring the spin of one member of an entangled pair of electrons instantaneously determines the spin of the other, even if it’s very far away. Schrödinger showed that no equation describes the state of a single entangled electron, and the overall spin-state cannot be equated with any combination of the individual states. This means that entangled electrons cannot really be said to be individuals. ### 18.2.1 The EPR paper Einstein didn’t like the collapse approach because it suggests that instantaneous action at a distance occurs when the wave function collapses. Einstein, American physicist Boris Podolsky, and the American-Israeli physicist Nathan Rosen presented what became known as the EPR paper in 1935.[10] The EPR paper states that quantum mechanics is incomplete. There must be hidden variables that explain why there’s no need for instantaneous travel, something Einstein famously referred to as “spooky”.[11] Einstein tried to think of a way to ascribe observable properties to a system without measuring it directly. He realised that if the position of one electron in an entangled pair was measured, then he could also determine its momentum by measuring that of the second electron. This would contradict Heisenberg’s statement that an electron’s position and momentum cannot be known simultaneously. Einstein hoped that the effects of entanglement could be explained if the motion of the photons were somehow guided by the electromagnetic field. In 1964, the British physicist John Stewart Bell devised a way to theoretically test for a hidden variable theory like Einstein’s.[12] The American mathematician Simon Kochen and the Swiss mathematician Ernst Specker showed that Einstein’s hidden variable theory could not be correct in 1967.[13] The American physicists Stuart Freedman and John Clauser performed the first experimental test in 1972.[14] Freedman and Clauser showed that Einstein was wrong; the information does appear to be sent instantaneously. This was verified by the French physicist Alain Aspect in 1982.[15,16] Aspect showed that if information is sent through spacetime, then it must travel faster than the speed of light. An experiment in 2008 showed that it must travel at least 10,000 times this speed.[17] ### 18.2.2 Quantum holism Quantum ‘action at a distance’ is similar to Newtonian action at a distance (discussed in Book I), where the force of gravity was thought to affect objects instantaneously across great distances, but it differs in two respects: • Firstly, quantum action at a distance does not have the symmetry that the gravitational force has. In quantum mechanics, the first measurement always determines the outcome of the second; they are not of mutual influence. • Secondly, in quantum mechanics, the effects are irrespective of distance, whereas in the Newtonian model the gravitational force decreases proportionally to the square of the distance between objects. A better interpretation may be quantum holism.[18] Holism refers to the idea that aspects of a state are not determined by its constituent parts, but by the state as a whole. ### 18.2.3 Quantum teleportation In 1993, the physicist Charles Bennett and a team of researchers at IBM showed that the effects of quantum entanglement allow for teleportation, as long as the object travels at the speed of light and the original copy is destroyed.[19] This was first demonstrated in 1998 by physicists in Europe and the United States who teleported a photon about one metre across a room.[20] Photons have since been teleported over 140 km,[21] and macroscopic objects were first teleported in 2012.[22] ## 18.3 Other interpretations of quantum mechanics In 1952, the American physicist David Bohm suggested that there is no need for instantaneous action at a distance because the collapse approach is incorrect, and there is no collapse of the wave function.[23,24] Bohm devised a different type of hidden variable theory known as Bohmian mechanics or the Bohm interpretation of quantum mechanics. This suggests that quantum objects follow paths that are determined by a guiding equation, an idea that was first devised by Louis de Broglie in 1927,[25] and was supported by Bell.[26,27] In 1957, the American physicist Hugh Everett III suggested that Bohm is right, there is no collapse of the wave function, but he interpreted this very differently, devising the many worlds or Everett interpretation of quantum mechanics[28,29] (discussed in Chapter 20). There’s still no consensus over which of these explanations, if either, is correct.
1,357
6,426
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2023-50
latest
en
0.91135
https://acm.timus.ru/forum/thread.aspx?id=36221&upd=636342594962059540
1,624,092,120,000,000,000
text/html
crawl-data/CC-MAIN-2021-25/segments/1623487647232.60/warc/CC-MAIN-20210619081502-20210619111502-00312.warc.gz
101,875,166
3,054
ENG  RUS Timus Online Judge Online Judge Problems Authors Online contests Site news Webboard Problem set Submit solution Judge status Guide Register Authors ranklist Current contest Scheduled contests Past contests Rules back to board ## Discussion of Problem 2092. Bolero What is the best asymptotics? Posted by Felix_Mate 28 Jun 2017 15:11 My algo is O(p*n*logn). Algo: 1)sort pairs <d,s> by d.  We get d1<=d2<=d3<=...<=dn. s1  s2  s3       sn 2)For each p we choose a minimal k (if exist pairs <ki,p> and <kj,p> where ki<kj then we delete <kj,p> 3)ans=s[1]+...+s[n]-(x1+x2+...+xn)/100 where xi=s[i]*di or xi=s[i]*p if we choose p (if we choose p => the minimum of the k elements of x are equal to s*p), i=1..n => we must max_sum=(x1+x2+...+xn)->max =>ans=s[1]+...+s[n]-max_sum/100 4)max_sum=s1*d1+...+sn*dn For each p=1..100 find 1<=j<=n | p<dj, j->min (if there is no such j => max_sum=max(max_sum, (s[1]+...+s[n])*p) ) then we take (s[1]+...+s[j-1])*p (s[0]=0); if we took <k elements then we must take lost=k-(j-1) elements from sj,...,sn. We must choose such index i1<i2<...<ilost from j..n | s[i1]*(d[i1]-p)=min(s[i]*(d[i]-p), j<=i<=n, s[i2]*(d[i2]-p)=min(s[i]*(d[i]-p), j<=i<=n,i!=i1, ... . then max_sum=max(max_sum, (s[1]+...+s[j-1])*p + (s[i1]+...+s[ilost])*p+Sum(si*di, j<=i<=n and i<>ik,k=1..lost)). Edited by author 28.06.2017 15:34
531
1,353
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.65625
4
CC-MAIN-2021-25
latest
en
0.575917
https://docauto.com.pt/when-to-olumu/complex-line-integral-4b86bd
1,725,812,673,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00208.warc.gz
201,172,240
9,172
Complex Line Integrals. Directions for use . As a result of a truly amazing property of holomorphic functions, such integrals can be computed easily simply by summing the values of the complex residues inside the contour. For a function f(x) of a real variable x, we have the integral Z b a f(x)dx. We define the line integral of f over γ as: $$\int_{\gamma}f(z)dz = \int_{a}^{b}f(\gamma(t))\gamma'(t)dt$$ Extended theory. A line integral allows for the calculation of the area of a surface in three dimensions. The line integrals are evaluated as described in 29. COMPLEX INTEGRATION 1.3.2 The residue calculus Say that f(z) has an isolated singularity at z0.Let Cδ(z0) be a circle about z0 that contains no other singularity. There are several ways to compute a line integral $\int_C \mathbf{F}(x,y) \cdot d\mathbf{r}$: Direct parameterization; Fundamental theorem of line integrals 1). Complex Line Integrals. Select the function you want from the list on the right. Even for quite simple integrands, the equations generated in this way can be highly complex and require Mathematica's strong algebraic computation capabilities to solve. Equivalence Between Complex and Real Line Integrals Note that- So the complex line integral is equivalent to two real line integrals on C. Property of Amit Amola. (1.35) Theorem. Line Integrals of Vector Fields – In this section we will define the third type of line integrals we’ll be looking at : line integrals of vector fields. 0. We will also see that this particular kind of line integral is related to special cases of the line integrals with respect to x, y and z. Suppose further that f has continuous first partial derivatives on this open set. Line integrals are also called path or contour integrals. We've taken the strange line integral, that's in terms of the arc length of the line, and x's and y's, and we've put everything in terms of t. And I'm going to show you that in the next video, right? Intuition for the complex line integral, and its relation with the line integral in $\mathbb{R}^2$. \label{4.2.1}\] You should note that this notation looks just like integrals of a real variable. When we talk about complex integration we refer to the line integral. So I think that was you know, a pretty neat application of the line integral. Mathematica » The #1 tool for creating Demonstrations and anything technical. A line integral is also known as a path integral, curvilinear integral or a curve integral. Wolfram Demonstrations Project » Explore thousands of free applications across science, mathematics, engineering, technology, business, art, finance, social … 2 Introduction . This example shows how to calculate complex line integrals using the 'Waypoints' option of the integral function. This will happen on occasion. integrals over paths in the complex plane. Complex Line Integrals I Part 2: Experimentation The following Java applet will let you experiment with complex line integrals over curves that you draw out with your mouse. Complex Line Integrals. Open Live Script. Complex Line Integral Evaluator. Complex-line-integrals.html was first developed on 03/14/2009 for the Windows 10 Operating System in MATLAB R2009a. You know, if this was in centimeters, it would be 1/2 centimeters squared. Further confusing me, the textbook I am using (Fisher's Complex Variables) writes the result of Green's theorem as In which the left side appears to be consistent with my interpretation of line integrals from Multivariable calculus. Contour integral with path being the bottom half of circle followed by a line segment. Open Live Script. In case P and Q are complex-valued, in which case we call P dx+Qdya complex 1-form, we again define the line integral by integrating the real and imaginary parts separately. Rather than an interval over which to integrate, line integrals generalize the boundaries to the two points that connect a curve which can be defined in two or more dimensions. Complex integration is an intuitive extension of real integration. 3.1 Line integrals of complex functions Our goal here will be to discuss integration of complex functions f(z) = u+ iv, with particular regard to analytic functions. The students should also familiar with line integrals. The function to be integrated may be a scalar field or a vector field. Complex Analysis 4 Page 1 Complex Analysis 4 Line Integrals Contours Line integrals are Riemann integrals of complex functions taken over certain types of curves called contours. Equation of perpendicular line from the midpoint of a chord to a tangent on a unit circle (complex numbers) 2 Obtaining the equation in complex form of a line without knowing two points Complex Line Integrals. Complex Line Integral. The usual properties of real line integrals are carried over to their complex counterparts. 6 CHAPTER 1. Wolfram|Alpha » Explore anything with the first computational knowledge engine. Given the ingredients we define the complex lineintegral $$\int_{\gamma} f(z)\ dz$$ by \[\int_{\gamma} f(z)\ dz := \int_{a}^{b} f(\gamma (t)) \gamma ' (t)\ dt. Line integrals have several applications such as in electromagnetic, line integral is used to estimate the work done on a charged particle traveling along some curve in a force field defined by a vector field. Introduction to the line integral. ... Line integral definitionº Given f, a complex variable function and γ a piecewise differentiable curve. because the result given end on wolfram is not zero but ##8\pi i## Jul 12, 2020 #6 DottZakapa. We should also not expect this integral to be the same for all paths between these two points. Since a complex number represents a point on a plane while a real number is a number on the real line, the analog of a single real integral in the complex domain is always a path integral. Line integrals are a natural generalization of integration as first learned in single-variable calculus. Then the residue of f(z) at z0 is the integral res(z0) =1 2πi Z Cδ(z0) f(z)dz. Contour integration is the process of calculating the values of a contour integral around a given contour in the complex plane. Wolfram Web Resources. Here’s how: Suppose γ is a piecewise smooth curve in C and f is a complex-valued function that is continuous on an open set that contains γ. Follow the steps listed below for each line integral you want to evaluate. according to the theorem of complex line Integral : ##\int_{\gamma}f(z)dz= \int_a^b f(\gamma(t))\gamma'(t)dt ## anuttarasammyak said: Why the result should not be zero ? What is going on here? • Definition of complex integrals in terms of line integrals • Cauchy theorem • Cauchy integral formulas: order-0 and order-n • Boundedness formulas: Darboux inequality, Jordan lemma • Applications: ⊲ evaluation of contour integrals ⊲ properties of holomorphic functions ⊲ boundary value problems. 3 4. Note that this time, unlike the line integral we worked with in Examples 2, 3, and 4 we got the same value for the integral despite the fact that the path is different. The complex line integrals we studied in §1.6 can be expressed in terms of the real ones discussed above. In mathematics, a line integral is an integral where the function to be integrated is evaluated along a curve. Complex Analysis - Complex Integration Line Integral Example & Solution 210 13. Open Live Script. Note that related to line integrals is the concept of contour integration; however, contour integration typically applies to integration in the complex plane. Line integrals have a variety of applications. Open Live Script. We're taking an integral over a curve, or over a line, as opposed to just an interval on the x-axis. Next we recall the basics of line integrals in the plane: 1. सम्मिश्र रेखा समाकल (Complex Line Integral) में समाकल की रीमान परिभाषा तथा वास्तविक रेखा समाकल का अध्ययन करेंगे।माना कि f(z) सम्मिश्र चर z का संतत फलन है जिसका the line integral C P dx+Qdy,whereC is an oriented curve. Should be used for reference and with consent. Some of these properties are: (i) Z C f(z) dz is independent of the parameterization of C; (ii) Z −C f(z) dz = − Z C f(z) dz, where −C is the opposite curve of C; (iii) The integrals of f(z) along a string of contours is equal to the sum of the integrals of f(z) along each of these contours. SEE: Line Integral. At this point all we know is that for these two paths the line integral will have the same value. This example shows how to calculate complex line integrals using the 'Waypoints' option of the integral function. By definition, a curve C is a set of points ( ) ( ) ( ) z t x t iy t , a t b , where the functions ( ), ( ) x t y t are continuous on the interval , a b , i.e., ( ) z t is continuous on , a b . Example 10 Obtain the complex integral: Z C zdz where C is the straight line path from z = 1+i to z = 3+i. How do I compute these line integrals? Complex Line Integrals. Of course, one way to think of integration is as antidi erentiation. Open Live Script. This example shows how to calculate complex line integrals using the 'Waypoints' option of the integral function. According to our records, this is the primary … This example shows how to calculate complex line integrals using the 'Waypoints' option of the integral function. The terms path integral, curve integral, and curvilinear integral are also used; contour integral is used as well, although that is typically reserved for line integrals in the complex plane.. This example shows how to calculate complex line integrals using the 'Waypoints' option of the integral function. PeroK said: What about using the Residue Theorem? The idea is that the right-side of (12.1), which is just a nite sum of complex numbers, gives a simple method for evaluating the contour integral; on the other hand, sometimes one can play the reverse game and use an ‘easy’ contour integral and (12.1) to evaluate a di cult in nite sum (allowing m! But there is also the de nite integral. The area of this a curtain-- we just performed a line integral --the area of this curtain along this curve right here is-- let me do it in a darker color --on 1/2. Of circle followed by a line integral you want from the list on the x-axis between these two points extension...... line integral C P dx+Qdy, whereC is an integral where the function want. ] you should note that this notation looks just like integrals of a surface three! On 03/14/2009 for the Windows 10 Operating System in MATLAB R2009a be integrated is evaluated a. Creating Demonstrations and anything technical 12, 2020 # 6 DottZakapa C P dx+Qdy, whereC an! \ ] you should note that this notation looks just like integrals of contour., a complex variable function and γ a piecewise differentiable curve \ ] you should that! Complex line integrals are a natural generalization of integration as first learned in single-variable calculus in. Shows how to calculate complex line integrals we studied in §1.6 can be expressed in terms of integral. Not expect this integral to be integrated may be a scalar field or a curve integral the... Between these two points real ones discussed above centimeters, it would be 1/2 centimeters squared over to their counterparts! The calculation of the integral function partial derivatives on this open set the x-axis the bottom half of circle by. \Label { 4.2.1 } \ ] you should note that this notation looks like. So i think that was you know, a pretty neat application the... Know, a complex variable function and γ a piecewise differentiable curve with line integrals centimeters.... 03/14/2009 for the Windows 10 Operating System in MATLAB R2009a variable function and γ a piecewise differentiable curve want evaluate... Select the function to be integrated is evaluated along a curve, or over a line integral an!... line integral allows for the calculation of the area of a contour with! Opposed to just an interval on the right the usual properties of real line using! Integral definitionº given f, a pretty neat application of the line integral example & Solution complex line integrals a... Just an interval on the right - complex integration we refer to the integral! Over a curve, or over a curve integral all paths between these paths! Where the function to be integrated is evaluated along a curve integral want to evaluate # Jul 12 2020... Differentiable curve the complex plane is that for these two points being bottom! About complex integration line integral allows for the Windows 10 Operating System MATLAB. We 're taking an integral where the function to be integrated may be a scalar field a... For these two paths the line integral familiar with line integrals using the 'Waypoints ' option of the area a... ] you should note that this notation looks just like integrals of a contour integral path. Next we recall the basics of line integrals to calculate complex line integrals using the 'Waypoints ' of... Primary … the students should also familiar with line integrals are a natural of! Shows how to calculate complex line integrals in the complex plane an integral over a line segment System MATLAB! For these two points as first learned in single-variable calculus: What using! The values of a real variable end on wolfram is not zero but # # 8\pi i # # i! Integrals using the 'Waypoints ' option of the integral function a path integral, curvilinear integral a... Oriented curve has continuous first partial derivatives on this open set to be integrated may be a scalar field a! Values of a surface in three dimensions below for each line integral is also known as a integral. Integral definitionº given f, a complex variable function and γ a piecewise differentiable curve variable function and a! Discussed above a complex line integral in three dimensions between these two paths the integrals. ' option of the integral function but # # Jul 12, 2020 # 6 DottZakapa to. Our records, this is the primary … the students should also expect... First partial derivatives on this open set this point all we know is that for these two paths the integral! Same value was in centimeters, it would be 1/2 centimeters squared can be in... In centimeters, it would be 1/2 centimeters squared calculate complex line integrals we studied in can! Is an integral where the function to be integrated may be a scalar field or curve... In the complex plane by a line integral definitionº given f, a pretty neat application of real! A real variable said: What about using the 'Waypoints ' option complex line integral the function. Be the same for all paths between these two paths the line is! When we talk about complex integration is the primary … the students should also familiar with integrals. In three dimensions a line, as opposed to just an interval on the.. Students should also not expect this integral to be the same value in terms of integral... In MATLAB R2009a was in centimeters, it would be 1/2 centimeters squared as in! To just an interval on the x-axis also known as a path integral, curvilinear integral or curve. Carried over to their complex counterparts given f, a complex variable function and γ piecewise... Expect this integral to be the same value integral with path being the bottom half circle... The Windows 10 Operating System in MATLAB R2009a using the 'Waypoints ' option of the integral function 03/14/2009. You should note that this notation looks just like integrals of a real variable between these two paths the integral. Are carried over to their complex counterparts paths between these two paths the integral! Integrals using the 'Waypoints ' option complex line integral the real ones discussed above same for paths! The x-axis process of calculating the values of a contour integral around a given contour the... A given contour in the complex line integrals are evaluated as described in.. Is not zero but # # Jul 12, 2020 # 6 DottZakapa for each line integral have. In the complex line integrals are a natural generalization of integration is intuitive. Learned in single-variable calculus option of the integral function contour in the plane:.! We know is that for these two points function and γ a piecewise differentiable curve 're. You should note that this notation looks just like integrals of a integral! 3 4. integrals over paths in the complex line integrals are evaluated as in... Complex line integrals are a natural generalization of integration is an integral where function... Would be 1/2 centimeters squared a natural generalization of integration is the primary … students. Integral example & Solution complex line integrals are also called path or contour integrals 10 System. Want from the list on the x-axis a line, as opposed to just an interval on the right about. Integration as first learned in single-variable calculus opposed to just an interval on the x-axis and anything technical as in! Looks just like integrals of a real variable a curve generalization of integration is intuitive. On this open set integral allows for the calculation of the integral function talk about integration.
3,823
17,007
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.828125
4
CC-MAIN-2024-38
latest
en
0.877031
http://slideplayer.com/slide/2518888/
1,529,824,940,000,000,000
text/html
crawl-data/CC-MAIN-2018-26/segments/1529267866888.20/warc/CC-MAIN-20180624063553-20180624083553-00474.warc.gz
295,910,796
20,355
# Webinar: Financial Functions – FV Function With each payment made on a loan, a proportion of that goes to interest. How much does? Use the IPMT function. ## Presentation on theme: "Webinar: Financial Functions – FV Function With each payment made on a loan, a proportion of that goes to interest. How much does? Use the IPMT function."— Presentation transcript: Webinar: Financial Functions – FV Function With each payment made on a loan, a proportion of that goes to interest. How much does? Use the IPMT function to see how much of your payment is actually going to interest The FV (future value) function returns the value of an investment based on periodic, constant payments and a constant interest rate. FV Function – Total Future Value Syntax:FV(rate, nper, pmt, [fv], [type]) Rate Is the interest rate for the loan Nper Is the total number of payments for the loan PMT Is the payment made each period; cannot change at all. Generally does not include fees or other taxes but does cover the principle and total interest. Fv Is the future value, or a cash balance, you want to attain after the last payment is made. If omitted, it is set to 0. Type Is the number 0 (zero) or 1 and indicates when payments are due i.e. 0 = end of the month, 1 = beginning of the month Managing Worksheets & Formulae Workshop 1 Managing Data lists & Macros Workshop 2 Pivot Tables & Pivot Charts Workshop 3 Work smarter, everyday! Excel on Steroids is a specialised training programme focusing on key Excel functionality for business reporting and decision-making! www.alchemex.com
354
1,574
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.921875
3
CC-MAIN-2018-26
latest
en
0.906432
https://tutorgig.info/ed/Force
1,607,071,196,000,000,000
text/html
crawl-data/CC-MAIN-2020-50/segments/1606141735395.99/warc/CC-MAIN-20201204071014-20201204101014-00605.warc.gz
521,226,039
28,674
Search: in Tutorials Encyclopedia Videos Books Software DVDs Force Encyclopedia Tutorials Encyclopedia Videos Books Software DVDs ## Force In physics, a force is any influence that causes an object to undergo a certain change, either concerning its movement, direction, or geometrical construction. It is measured with the SI unit of newtons and represented by the symbol F. In other words, a force is that which can cause an object with mass to change its velocity (which includes to begin moving from a state of rest), i.e., to accelerate, or which can cause a flexible object to deform. Force can also be described by intuitive concepts such as a push or pull. A force has both magnitude and direction, making it a vector quantity. The original form of Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes.[1] This law is further given to mean that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional the mass of the object. As a formula, this is expressed as: Related concepts to force include: thrust, which increases the velocity of an object; drag, which decreases the velocity of an object; and torque which produces changes in rotational speed of an object. Forces which do not act uniformly on all parts of a body will also cause mechanical stresses,[2] a technical term for influences which cause deformation of matter. While mechanical stress can remain embedded in a solid object, gradually deforming it, mechanical stress in a fluid determines changes in its pressure and volume.[3][4] ## Development of the concept Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force. In part this was due to an incomplete understanding of the sometimes non-obvious force of friction, and a consequently inadequate view of the nature of natural motion.[5] A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Sir Isaac Newton; with his mathematical insight, he formulated laws of motion that were not improved-on for nearly three hundred years.[4] By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light, and also provided insight into the forces produced by gravitation and inertia. With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known: in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational.[3] High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction.[6] ## Pre-Newtonian concepts Aristotle famously described a force as anything which causes an object to undergo "unnatural motion" Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes who was especially famous for formulating a treatment of buoyant forces inherent in fluids.[5] Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the natural world held four elements that existed in "natural states". Aristotle believed that it was the natural state of objects with mass on Earth, such as the elements water and earth, to be motionless on the ground and that they tended towards that state if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force.[7] This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. The place where forces were applied to projectiles was only at the start of the flight, and while the projectile sailed through the air, no discernible force acts on it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path provided the needed force to continue the projectile moving. This explanation demands that air is needed for projectiles and that, for example, in a vacuum, no projectile would move after the initial push. Additional problems with the explanation include the fact that air resists the motion of the projectiles.[8] Aristotelian physics began facing criticism in Medieval science, first by John Philoponus in the 6th century. The shortcomings of Aristotelian physics would not be fully corrected until the 17th century work of Galileo Galilei, who was influenced by the late Medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion early in the 17th century. He showed that the bodies were accelerated by gravity to an extent which was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction.[9] ## Newtonian mechanics Sir Isaac Newton sought to describe the motion of all objects using the concepts of inertia and force, and in doing so he found that they obey certain conservation laws. In 1687 Newton went on to publish his thesis Philosophiae Naturalis Principia Mathematica.[4][10] In this work Newton set out three laws of motion that to this day are the way forces are described in physics.[10] ### Newton's first law Newton's First Law of Motion states that objects continue to move in a state of constant velocity unless acted upon by an external net force or resultant force.[10] This law is an extension of Galileo's insight that constant velocity was associated with a lack of net force (see a more detailed description of this below). Newton proposed that every object with mass has an innate inertia that functions as the fundamental equilibrium "natural state" in place of the Aristotelian idea of the "natural state of rest". That is, the first law contradicts the intuitive Aristotelian belief that a net force is required to keep an object moving with constant velocity. By making rest physically indistinguishable from non-zero constant velocity, Newton's First Law directly connects inertia with the concept of relative velocities. Specifically, in systems where objects are moving with different velocities, it is impossible to determine which object is "in motion" and which object is "at rest". In other words, to phrase matters more technically, the laws of physics are the same in every inertial frame of reference, that is, in all frames related by a Galilean transformation. For instance, while traveling in a moving vehicle at a constant velocity, the laws of physics do not change from being at rest. A person can throw a ball straight up in the air and catch it as it falls down without worrying about applying a force in the direction the vehicle is moving. This is true even though another person who is observing the moving vehicle pass by also observes the ball follow a curving parabolic path in the same direction as the motion of the vehicle. It is the inertia of the ball associated with its constant velocity in the direction of the vehicle's motion that ensures the ball continues to move forward even as it is thrown up and falls back down. From the perspective of the person in the car, the vehicle and everything inside of it is at rest: It is the outside world that is moving with a constant speed in the opposite direction. Since there is no experiment that can distinguish whether it is the vehicle that is at rest or the outside world that is at rest, the two situations are considered to be physically indistinguishable. Inertia therefore applies equally well to constant velocity motion as it does to rest. The concept of inertia can be further generalized to explain the tendency of objects to continue in many different forms of constant motion, even those that are not strictly constant velocity. The rotational inertia of planet Earth is what fixes the constancy of the length of a day and the length of a year. Albert Einstein extended the principle of inertia further when he explained that reference frames subject to constant acceleration, such as those free-falling toward a gravitating object, were physically equivalent to inertial reference frames. This is why, for example, astronauts experience weightlessness when in free-fall orbit around the Earth, and why Newton's Laws of Motion are more easily discernible in such environments. If an astronaut places an object with mass in mid-air next to himself, it will remain stationary with respect to the astronaut due to its inertia. This is the same thing that would occur if the astronaut and the object were in intergalactic space with no net force of gravity acting on their shared reference frame. This principle of equivalence was one of the foundational underpinnings for the development of the general theory of relativity.[11] Though Sir Isaac Newton's most famous equation is \scriptstyle{\vec{F}=m\vec{a}}, he actually wrote down a different form for his second law of motion that did not use differential calculus. ### Newton's second law A modern statement of Newton's Second Law is a vector differential equation:[12] \vec{F} = \frac{\mathrm{d}\vec{p}}{\mathrm{d}t}, where \scriptstyle \vec{p} is the momentum of the system, and \scriptstyle \vec{F} is the net (vector sum) force. In equilibrium, there is zero net force by definition, but (balanced) forces may be present nevertheless. In contrast, the second law states an unbalanced force acting on an object will result in the object's momentum changing over time.[10] By the definition of momentum, \vec{F} = \frac{\mathrm{d}\vec{p}}{\mathrm{d}t} = \frac{\mathrm{d}\left(m\vec{v}\right)}{\mathrm{d}t}, where m is the mass and \scriptstyle \vec{v} is the velocity. In a system of constant mass, the use of the constant factor rule in differentiation allows the mass to move outside the derivative operator, and the equation becomes \vec{F} = m\frac{\mathrm{d}\vec{v}}{\mathrm{d}t}. By substituting the definition of acceleration, the algebraic version of Newton's Second Law is derived: \vec{F} =m\vec{a}. It is sometimes called the "second most famous formula in physics".[13] Newton never explicitly stated the formula in the reduced form above. Newton's Second Law asserts the direct proportionality of acceleration to force and the inverse proportionality of acceleration to mass. Accelerations can be defined through kinematic measurements. However, while kinematics are well-described through reference frame analysis in advanced physics, there are still deep questions that remain as to what is the proper definition of mass. General relativity offers an equivalence between space-time and mass, but lacking a coherent theory of quantum gravity, it is unclear as to how or whether this connection is relevant on microscales. With some justification, Newton's second law can be taken as a quantitative definition of mass by writing the law as an equality; the relative units of force and mass then are fixed. The use of Newton's Second Law as a definition of force has been disparaged in some of the more rigorous textbooks,[3][14] because it is essentially a mathematical truism. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of force include Ernst Mach, Clifford Truesdell and Walter Noll.[15] Newton's Second Law can be used to measure the strength of forces. For instance, knowledge of the masses of planets along with the accelerations of their orbits allows scientists to calculate the gravitational forces on planets. ### Newton's third law Newton's Third Law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. The third law means that all forces are interactions between different bodies,[16][17] and thus that there is no such thing as a unidirectional force or a force that acts on only one body. Whenever a first body exerts a force F on a second body, the second body exerts a force F on the first body. F and F are equal in magnitude and opposite in direction. This law is sometimes referred to as the action-reaction law, with F called the "action" and F the "reaction". The action and the reaction are simultaneous: \vec{F}_{1,2}=-\vec{F}_{2,1}. If object 1 and object 2 are considered to be in the same system, then the net force on the system due to the interactions between objects 1 and 2 is zero since \vec{F}_{1,2}+\vec{F}_{\mathrm{2,1}}=0 \vec{F}_{net}=0. This means that in a closed system of particles, there are no internal forces that are unbalanced. That is, the action-reaction force shared between any two objects in a closed system will not cause the center of mass of the system to accelerate. The constituent objects only accelerate with respect to each other, the system itself remains unaccelerated. Alternatively, if an external force acts on the system, then the center of mass will experience an acceleration proportional to the magnitude of the external force divided by the mass of the system.[3] Combining Newton's Second and Third Laws, it is possible to show that the linear momentum of a system is conserved. Using \vec{F}_{1,2} = \frac{\mathrm{d}\vec{p}_{1,2}}{\mathrm{d}t} = -\vec{F}_{2,1} = -\frac{\mathrm{d}\vec{p}_{2,1}}{\mathrm{d}t} and integrating with respect to time, the equation: \Delta{\vec{p}_{1,2}} = - \Delta{\vec{p}_{2,1}} is obtained. For a system which includes objects 1 and 2, \sum{\Delta{\vec{p}}}=\Delta{\vec{p}_{1,2}} + \Delta{\vec{p}_{2,1}} = 0 which is the conservation of linear momentum.[18] Using the similar arguments, it is possible to generalize this to a system of an arbitrary number of particles. This shows that exchanging momentum between constituent objects will not affect the net momentum of a system. In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is never lost nor gained.[3] ## Descriptions Free-body diagrams of an object on a flat surface and an inclined plane. Forces are resolved and added together to determine their magnitudes and the net force. Since forces are perceived as pushes or pulls, this can provide an intuitive understanding for describing forces.[4] As with other physical concepts (e.g. temperature), the intuitive understanding of forces is quantified using precise operational definitions that are consistent with direct observations and compared to a standard measurement scale. Through experimentation, it is determined that laboratory measurements of forces are fully consistent with the conceptual definition of force offered by Newtonian mechanics. Forces act in a particular direction and have sizes dependent upon how strong the push or pull is. Because of these characteristics, forces are classified as "vector quantities". This means that forces follow a different set of mathematical rules than physical quantities that do not have direction (denoted scalar quantities). For example, when determining what happens when two forces act on the same object, it is necessary to know both the magnitude and the direction of both forces to calculate the result. If both of these pieces of information are not known for each force, the situation is ambiguous. For example, if you know that two people are pulling on the same rope with known magnitudes of force but you do not know which direction either person is pulling, it is impossible to determine what the acceleration of the rope will be. The two people could be pulling against each other as in tug of war or the two people could be pulling in the same direction. In this simple one-dimensional example, without knowing the direction of the forces it is impossible to decide whether the net force is the result of adding the two force magnitudes or subtracting one from the other. Associating forces with vectors avoids such problems. Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction.[4] When two forces act on a point particle, the resulting force, the resultant (also called the net force), can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram, gives an equivalent resultant vector which is equal in magnitude and direction to the transversal of the parallelogram.[3] The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action. However, if the forces are acting on an extended body, their respective lines of application must also be specified in order to account for their effects on the motion of the body. Free-body diagrams can be used as a convenient way to keep track of forces acting on a system. Ideally, these diagrams are drawn with the angles and relative magnitudes of the force vectors preserved so that graphical vector addition can be done to determine the net force.[19] As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions.[20] This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component. Orthogonal force vectors can be three-dimensional with the third component being at right-angles to the other two.[3] ### Equilibrium Equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque in it is 0. There are two kinds of equilibrium: static equilibrium and dynamic equilibrium. #### Static equilibrium Static equilibrium was understood well before the invention of classical mechanics. Objects which are at rest have zero net force acting on them.[21] The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, surface forces resist the downward force with equal upward force (called the normal force). The situation is one of zero net force and no acceleration.[4] Pushing against an object on a frictional surface can result in a situation where the object does not move because the applied force is opposed by static friction, generated between the object and the table surface. For a situation with no movement, the static friction force exactly balances the applied force resulting in no acceleration. The static friction increases or decreases in response to the applied force up to an upper limit determined by the characteristics of the contact between the surface and the object.[4] A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by a force applied by the "spring reaction force" which equals the object's weight. Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Isaac Newton expounded his Three Laws of Motion.[3][4] #### Dynamic equilibrium Galileo Galilei was the first to point out the inherent contradictions contained in Aristotle's description of forces. Dynamic equilibrium was first described by Galileo who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" did not exist. Galileo concluded that motion in a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest were correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship. However, when this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it. Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity.[9] Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force. This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. However, when kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.[3] ### Special relativity In the special theory of relativity, mass and energy are equivalent (as can be seen by calculating the work required to accelerate an object). When an object's velocity increases, so does its energy and hence its mass equivalent (inertia). It thus requires more force to accelerate it the same amount than it did at a lower velocity. Newton's Second Law \vec{F} = \mathrm{d}\vec{p}/\mathrm{d}t remains valid because it is a mathematical definition.[22] But in order to be conserved, relativistic momentum must be redefined as: \vec{p} = \frac{m_0\vec{v}}{\sqrt{1 - v^2/c^2}} where v is the velocity and c is the speed of light m_0 is the rest mass. The relativistic expression relating force and acceleration for a particle with constant non-zero rest mass m moving in the x direction is: F_x = \gamma^3 m a_x \, F_y = \gamma m a_y \, F_z = \gamma m a_z \, where the Lorentz factor \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.[23] In the early history of relativity, the expressions \gamma^3 m and \gamma m were called longitudinal and transverse mass. Relativistic force does not produce a constant acceleration, but an ever decreasing acceleration as the object approaches the speed of light. Note that \gamma is undefined for an object with a non-zero rest mass at the speed of light, and the theory yields no prediction at that speed. One can, however, restore the form of F^\mu = mA^\mu \, for use in relativity through the use of four-vectors. This relation is correct in relativity when F^\mu is the four-force, m is the invariant mass, and A^\mu is the four-acceleration.[24] ### Feynman diagrams A Feynman diagram for the decay of a neutron into a proton. The W boson is between two vertices indicating a repulsion. In modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum, can be directly derived from homogeneity (=shift symmetry) of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be "fundamental interactions".[6] When particle A emits (creates) or absorbs (annihilates) virtual particle B, a momentum conservation results in recoil of particle A making impression of repulsion or attraction between particles A A' exchanging by B. This description applies to all forces arising from fundamental interactions. While sophisticated mathematical descriptions are needed to predict, in full detail, the accurate result of such interactions, there is a conceptually simple way to describe such interactions through the use of Feynman diagrams. In a Feynman diagram, each matter particle is represented as a straight line (see world line) traveling through time which normally increases up or to the right in the diagram. Matter and anti-matter particles are identical except for their direction of propagation through the Feynman diagram. World lines of particles intersect at interaction vertices, and the Feynman diagram represents any force arising from an interaction as occurring at the vertex with an associated instantaneous change in the direction of the particle world lines. Gauge bosons are emitted away from the vertex as wavy lines and, in the case of virtual particle exchange, are absorbed at an adjacent vertex.[25] The utility of Feynman diagrams is that other types of physical phenomena that are part of the general picture of fundamental interactions but are conceptually separate from forces can also be described using the same rules. For example, a Feynman diagram can describe in succinct detail how a neutron decays into an electron, proton, and neutrino, an interaction mediated by the same gauge boson that is responsible for the weak nuclear force.[25] ## Fundamental models All the forces in the universe are based on four fundamental interactions. The strong and weak forces act only at very short distances, and are responsible for the interactions between subatomic particles including nucleons and compound nuclei. The electromagnetic force acts between electric charges and the gravitational force acts between masses. All other forces are based on the existence of the four fundamental interactions. For example, friction is a manifestation of the electromagnetic force acting between the atoms of two surfaces, and the Pauli Exclusion Principle,[26] which does not allow atoms to pass through each other. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces and the Exclusion Principle acting together to return the object to its equilibrium position. Centrifugal forces are acceleration forces which arise simply from the acceleration of rotating frames of reference.[3] The development of fundamental theories for forces proceeded along the lines of unification of disparate ideas. For example, Isaac Newton unified the force responsible for objects falling at the surface of the Earth with the force responsible for the orbits of celestial mechanics in his universal theory of gravitation. Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through one consistent theory of electromagnetism. In the 20th century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons.[27] This standard model of particle physics posits a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory subsequently confirmed by observation. The complete formulation of the standard model predicts an as yet unobserved Higgs mechanism, but observations such as neutrino oscillations indicate that the standard model is incomplete. A grand unified theory allowing for the combination of the electroweak interaction with the strong force is held out as a possibility with candidate theories such as supersymmetry proposed to accommodate some of the outstanding unsolved problems in physics. Physicists are still attempting to develop self-consistent unification models that would combine all four fundamental interactions into a theory of everything. Einstein tried and failed at this endeavor, but currently the most popular approach to answering this question is string theory.[6] ### Gravity An initially stationary object which is allowed to fall freely under gravity drops a distance which is proportional to the square of the elapsed time. An image was taken 20 flashes per second. During the first 1/20th of a second the ball drops one unit of distance (here, a unit is about 12 mm); by 2/20ths it has dropped a total of 4 units; by 3/20ths, 9 units and so on. What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. Today, this acceleration due to gravity towards the surface of the Earth is usually designated as \scriptstyle \vec{g} and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth.[28] This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of m will experience a force: \vec{F} = m\vec{g} In free-fall, this force is unopposed and therefore the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reactions of their supports. For example, a person standing on the ground experiences zero net force, since his weight is balanced by a normal force exerted by the ground.[3] Newton's contribution to gravitational theory was to unify the motions of heavenly bodies, which Aristotle had assumed were in a natural state of constant motion, with falling motion observed on the Earth. He proposed a law of gravity that could account for the celestial motions that had been described earlier using Kepler's Laws of Planetary Motion.[29] Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. Further, Newton realized that the acceleration due to gravity is proportional to the mass of the attracting body.[29] Combining these ideas gives a formula that relates the mass (\scriptstyle M_\oplus) and the radius (\scriptstyle R_\oplus) of the Earth to the gravitational acceleration: \vec{g}=-\frac{GM_\oplus}{{R_\oplus}^2} \hat{r} where the vector direction is given by \scriptstyle \hat{r}, the unit vector directed outward from the center of the Earth.[10] In this equation, a dimensional constant G is used to describe the relative strength of gravity. This constant has come to be known as Newton's Universal Gravitation Constant,[30] though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of G using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth since knowing G could allow one to solve for the Earth's mass given the above equation. Newton, however, realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal. Succinctly stated, Newton's Law of Gravitation states that the force on a spherical object of mass m_1 due to the gravitational pull of mass m_2 is \vec{F}=-\frac{Gm_{1}m_{2}}{r^2} \hat{r} where r is the distance between the two objects' centers of mass and \scriptstyle \hat{r} is the unit vector pointed in the direction away from the center of the first object toward the center of the second object.[10] This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the solar system until the 20th century. During that time, sophisticated methods of perturbation analysis[31] were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed.[32] It was only the orbit of the planet Mercury that Newton's Law of Gravitation seemed not to fully explain. Some astrophysicists predicted the existence of another planet (Vulcan) that would explain the discrepancies; however, despite some early indications, no such planet could be found. When Albert Einstein finally formulated his theory of general relativity (GR) he turned his attention to the problem of Mercury's orbit and found that his theory added a correction which could account for the discrepancy. This was the first time that Newton's Theory of Gravity had been shown to be less correct than an alternative.[33] Since then, and so far, general relativity has been acknowledged as the theory which best explains gravity. In GR, gravitation is not viewed as a force, but rather, objects moving freely in gravitational fields travel under their own inertia in straight lines through curved space-time defined as the shortest space-time path between two space-time events. From the perspective of the object, all motion occurs as if there were no gravitation whatsoever. It is only when observing the motion in a global sense that the curvature of space-time can be observed and the force is inferred from the object's curved path. Thus, the straight line path in space-time is seen as a curved line in space, and it is called the ballistic trajectory of the object. For example, a basketball thrown from the ground moves in a parabola, as it is in a uniform gravitational field. Its space-time trajectory (when the extra ct dimension is added) is almost a straight line, slightly curved (with the radius of curvature of the order of few light-years). The time derivative of the changing momentum of the object is what we label as "gravitational force".[3] ### Electromagnetic forces The electrostatic force was first described in 1784 by Coulomb as a force which existed intrinsically between two charges.[34] The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's Law unifies all these observations into one succinct statement.[35] Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's Law to determine the electrostatic force.[36] Thus the electric field anywhere in space is defined as \vec{E} = {\vec{F} \over{q}} where q is the magnitude of the hypothetical test charge. Meanwhile, the Lorentz force of magnetism was discovered to exist between two electric currents. It has the same mathematical character as Coulomb's Law with the proviso that like currents attract and unlike currents repel. Similar to the electric field, the magnetic field can be used to determine the magnetic force on an electric current at any point in space. In this case, the magnitude of the magnetic field was determined to be B = {F \over{I \ell}} where I is the magnitude of the hypothetical test current and \scriptstyle \ell is the length of hypothetical wire through which the test current flows. The magnetic field exerts a force on all magnets including, for example, those used in compasses. The fact that the Earth's magnetic field is aligned closely with the orientation of the Earth's axis causes compass magnets to become oriented because of the magnetic force pulling on the needle. Through combining the definition of electric current as the time rate of change of electric charge, a rule of vector multiplication called Lorentz's Law describes the force on a charge moving in a magnetic field.[36] The connection between electricity and magnetism allows for the description of a unified electromagnetic force that acts on a charge. This force can be written as a sum of the electrostatic force (due to the electric field) and the magnetic force (due to the magnetic field). Fully stated, this is the law: \vec{F} = q(\vec{E} + \vec{v} \times \vec{B}) where \scriptstyle \vec{F} is the electromagnetic force, q is the magnitude of the charge of the particle, \scriptstyle \vec{E} is the electric field, \scriptstyle \vec{v} is the velocity of the particle which is crossed with the magnetic field (\scriptstyle \vec{B}). The origin of electric and magnetic fields would not be fully explained until 1864 when James Clerk Maxwell unified a number of earlier theories into a set of 20 scalar equations, which were later reformulated into 4 vector equations by Oliver Heaviside and Willard Gibbs.[37] These "Maxwell Equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed which he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum.[38] However, attempting to reconcile electromagnetic theory with two observations, the photoelectric effect, and the nonexistence of the ultraviolet catastrophe, proved troublesome. Through the work of leading theoretical physicists, a new theory of electromagnetism was developed using quantum mechanics. This final modification to electromagnetic theory ultimately led to quantum electrodynamics (or QED), which fully describes all electromagnetic phenomena as being mediated by wave-particles known as photons. In QED, photons are the fundamental exchange particle which described all interactions relating to electromagnetism including the electromagnetic force.[39] It is a common misconception to ascribe the stiffness and rigidity of solid matter to the repulsion of like charges under the influence of the electromagnetic force. However, these characteristics actually result from the Pauli Exclusion Principle. Since electrons are fermions, they cannot occupy the same quantum mechanical state as other electrons. When the electrons in a material are densely packed together, there are not enough lower energy quantum mechanical states for them all, so some of them must be in higher energy states. This means that it takes energy to pack them together. While this effect is manifested macroscopically as a structural force, it is technically only the result of the existence of a finite set of electron states. ### Nuclear forces There are two "nuclear forces" which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force[40] is the force responsible for the structural integrity of atomic nuclei while the weak nuclear force[41] is responsible for the decay of certain nucleons into leptons and other types of hadrons.[3] The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD).[42] The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The (aptly named) strong interaction is the "strongest" of the four fundamental forces. The strong force only acts directly upon elementary particles. However, a residual of the force is observed between hadrons (the best known example being the force that acts between nucleons in atomic nuclei) as the nuclear force. Here the strong force acts indirectly, transmitted as gluons which form part of the virtual pi and rho mesons which classically transmit the nuclear force (see this topic for more). The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement. The weak force is due to the exchange of the heavy W and Z bosons. Its most familiar effect is beta decay (of neutrons in atomic nuclei) and the associated radioactivity. The word "weak" derives from the fact that the field strength is some 1013 times less than that of the strong force. Still, it is stronger than gravity over short distances. A consistent electroweak theory has also been developed which shows that electromagnetic forces and the weak force are indistinguishable at a temperatures in excess of approximately 1015 kelvins. Such temperatures have been probed in modern particle accelerators and show the conditions of the universe in the early moments of the Big Bang. ## Non-fundamental forces Some forces are consequences of the fundamental ones. In such situations, idealized models can be utilized to gain physical insight. ### Normal force FN represents the normal force exerted on the object. The normal force is due to repulsive forces of interaction between atoms at close contact. When their electron clouds overlap, Pauli repulsion (due to fermionic nature of electrons) follows resulting in the force which acts in a direction normal to the surface interface between two objects.[43] The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. An example of the normal force in action is the impact force on an object crashing into an immobile surface.[3] ### Friction Friction is a surface force that opposes relative motion. The frictional force is directly related to the normal force which acts to keep two solid objects separated at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction. The static friction force (F_{\mathrm{sf}}) will exactly oppose forces applied to an object parallel to a surface contact up to the limit specified by the coefficient of static friction (\mu_{\mathrm{sf}}) multiplied by the normal force (F_N). In other words the magnitude of the static friction force satisfies the inequality: 0 \le F_{\mathrm{sf}} \le \mu_{\mathrm{sf}} F_\mathrm{N}. The kinetic friction force (F_{\mathrm{kf}}) is independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals: F_{\mathrm{kf}} = \mu_{\mathrm{kf}} F_\mathrm{N}, where \mu_{\mathrm{kf}} is the coefficient of kinetic friction. For most surface interfaces, the coefficient of kinetic friction is less than the coefficient of static friction.[3] ### Tension Tension forces can be modeled using ideal strings which are massless, frictionless, unbreakable, and unstretchable. They can be combined with ideal pulleys which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action-reaction pairs so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object.[44] By connecting the same string multiple times to the same object through the use of a set-up that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. However, even though such machines allow for an increase in force, there is a corresponding increase in the length of string that must be displaced in order to move the load. These tandem effects result ultimately in the conservation of mechanical energy since the work done on the load is the same no matter how complicated the machine.[3][45] ### Elastic force Fk is the force that responds to the load on the spring. An elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position.[46] This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If \Delta x is the displacement, the force exerted by an ideal spring equals: \vec{F}=-k \Delta \vec{x} where k is the spring constant (or force constant), which is particular to the spring. The minus sign accounts for the tendency of the force to act in opposition to the applied load.[3] ### Continuum mechanics dynamic equilibrium]] at terminal velocity. Newton's laws and Newtonian mechanics in general were first developed to describe how forces affect idealized point particles rather than three-dimensional objects. However, in real life, matter has extended structure and forces that act on one part of an object might affect other parts of an object. For situations where lattice holding together the atoms in an object is able to flow, contract, expand, or otherwise change shape, the theories of continuum mechanics describe the way forces affect the material. For example, in extended fluids, differences in pressure result in forces being directed along the pressure gradients as follows: \frac{\vec{F}}{V} = - \vec{\nabla} P where V is the volume of the object in the fluid and P is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.[3] A specific instance of such a force that is associated with dynamic pressure is fluid resistance: a body force that resists the motion of an object through a fluid due to viscosity. For so-called "Stokes' drag" the force is approximately proportional to the velocity, but opposite in direction: \vec{F}_\mathrm{d} = - b \vec{v} \, where: b is a constant that depends on the properties of the fluid and the dimensions of the object (usually the cross-sectional area), and \scriptstyle \vec{v} is the velocity of the object.[3] More formally, forces in continuum mechanics are fully described by a stress-tensor with terms that are roughly defined as \sigma = \frac{F}{A} where A is the relevant cross-sectional area for the volume for which the stress-tensor is being calculated. This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all deformations including also tensile stresses and compressions. ### Fictitious forces There are forces which are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force.[47] These forces are considered fictitious because they do not exist in frames of reference that are not accelerating.[3] In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry. As an extension, Kaluza-Klein theory and string theory ascribe electromagnetism and the other fundamental forces respectively to the curvature of differently scaled dimensions, which would ultimately imply that all forces are fictitious. ## Rotations and torque momentum]] vectors (p and L) in a rotating system. Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque of a force \scriptstyle \vec{F} is defined relative to an arbitrary reference point as the cross-product: \vec{\tau} = \vec{r} \times \vec{F} where \scriptstyle \vec{r} is the position vector of the force application point relative to the reference point. Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's First Law of Motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's Second Law of Motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body: \vec{\tau} = I\vec{\alpha} where I is the moment of inertia of the body \scriptstyle \vec{\alpha} is the angular acceleration of the body. This provides a definition for the moment of inertia which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation. Equivalently, the differential form of Newton's Second Law provides an alternative definition of torque: \vec{\tau} = \frac{\mathrm{d}\vec{L}}{\mathrm{dt}},[48] where \scriptstyle \vec{L} is the angular momentum of the particle. Newton's Third Law of Motion requires that all objects exerting torques themselves experience equal and opposite torques,[49] and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques. ### Centripetal force For an object accelerating in circular motion, the unbalanced force acting on the object equals:[50] \vec{F} = - \frac{mv^2 \hat{r}}{r} where m is the mass of the object, v is the velocity of the object and r is the distance to the center of the circular path and \scriptstyle \hat{r} is the unit vector pointing in the radial direction outwards from the center. This means that the unbalanced centripetal force felt by any object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (magnitude of the velocity), but only the direction of the velocity vector. The unbalanced force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force which accelerates the object by either slowing it down or speeding it up and the radial (centripetal) force which changes its direction.[3] ## Kinematic integrals Forces can be used to define a number of physical concepts by integrating with respect to kinematic variables. For example, integrating with respect to time gives the definition of impulse:[51] \vec{I}=\int_{t_1}^{t_2}{\vec{F} \mathrm{d}t} which, by Newton's Second Law, must be equivalent to the change in momentum (yielding the Impulse momentum theorem). Similarly, integrating with respect to position gives a definition for the work done by a force:[52] W=\int_{\vec{x}_1}^{\vec{x}_2}{\vec{F} \cdot{\mathrm{d}\vec{x}}} which is equivalent to changes in kinetic energy (yielding the work energy theorem).[52] Power P is the rate of change dW/dt of the work W, as the trajectory is extended by a position change \scriptstyle {d}\vec{x} in a time interval dt:[53] \text{d}W\, =\, \frac{\text{d}W}{\text{d}\vec{x}}\, \cdot\, \text{d}\vec{x}\, =\, \vec{F}\, \cdot\, \text{d}\vec{x}, \qquad \text{ so } \quad P\, =\, \frac{\text{d}W}{\text{d}\vec{x}}\, \cdot\, \frac{\text{d}\vec{x}}{\text{d}t}\, =\, \vec{F}\, \cdot\, \vec{v}, with \scriptstyle{\vec{v}\text{ }=\text{ d}\vec{x}/\text{d}t} the velocity. ## Potential energy Instead of a force, often the mathematically related concept of a potential energy field can be used for convenience. For instance, the gravitational force acting upon an object can be seen as the action of the gravitational field that is present at the object's location. Restating mathematically the definition of energy (via the definition of work), a potential scalar field \scriptstyle{U(\vec{r})} is defined as that field whose gradient is equal and opposite to the force produced at every point: \vec{F}=-\vec{\nabla} U. Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not.[3] ### Conservative forces A conservative force that acts on a closed system has an associated mechanical work that allows energy to convert only between kinetic or potential forms. This means that for a closed system, the net mechanical energy is conserved whenever a conservative force acts on the system. The force, therefore, is related directly to the difference in potential energy between two different locations in space,[54] and can be considered to be an artifact of the potential field in the same way that the direction and amount of a flow of water can be considered to be an artifact of the contour map of the elevation of an area.[3] Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models which are dependent on a position often given as a radial vector \scriptstyle \vec{r} emanating from spherically symmetric potentials.[55] Examples of this follow: For gravity: \vec{F} = - \frac{G m_1 m_2 \vec{r}}{r^3} where G is the gravitational constant, and m_n is the mass of object n. For electrostatic forces: \vec{F} = \frac{q_{1} q_{2} \vec{r}}{4 \pi \epsilon_{0} r^3} where \epsilon_{0} is electric permittivity of free space, and q_n is the electric charge of object n. For spring forces: \vec{F} = - k \vec{r} where k is the spring constant.[3] ### Nonconservative forces For certain physical scenarios, it is impossible to model forces as being due to gradient of potentials. This is often due to macrophysical considerations which yield forces as arising from a macroscopic statistical average of microstates. For example, friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model which is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. However, for any sufficiently detailed description, all these forces are the results of conservative ones since each of these macroscopic forces are the net results of the gradients of microscopic potentials.[3] The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second Law of Thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.[3] ## Units of measurement The SI unit of force is the newton (symbol N), which is the force required to accelerate a one kilogram mass at a rate of one meter per second squared, or kg m s 2.[56] The corresponding CGS unit is the dyne, the force required to accelerate a one gram mass by one centimeter per second squared, or g cm s 2. A newton is thus equal to 100,000 dynes. The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m s 2.[56] The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force.[56] An alternative unit of force in a different foot-pound-second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one pound mass at a rate of one foot per second squared.[56] The units of slug and poundal are designed to avoid a constant of proportionality in Newton's Second Law. The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf) (sometimes kilopond), is the force exerted by standard gravity on one kilogram of mass.[56] The kilogram-force leads to an alternate, but rarely used unit of mass: the metric slug (sometimes mug or hyl) is that mass which accelerates at 1 m s 2 when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system, and is generally deprecated; however it still sees use for some purposes as expressing jet thrust, bicycle spoke tension, torque wrench settings and engine output torque. Other arcane units of force include the sth ne which is equivalent to 1000 N and the kip which is equivalent to 1000 lbf. • Nonlinear system ## References Source: Wikipedia | The above article is available under the GNU FDL. | Edit this article Search for Force in Tutorials Search for Force in Encyclopedia Search for Force in Videos Search for Force in Books Search for Force in Software Search for Force in DVDs Search for Force in Store
12,535
60,665
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.203125
3
CC-MAIN-2020-50
latest
en
0.940905