https://www.deportesaloloco.com/honey-bee-ecmbq/hybridization-of-butane-cb25f5
# hybridization of butane
Butane (C4H10) is an alkane: a hydrocarbon in which all carbon-carbon bonds are single bonds, with no double or triple bonds. Whenever a compound's name ends in "-ane", it contains only carbons and hydrogens joined by single bonds, which makes its Lewis structure straightforward to draw. Butane has 4 × 4 + 10 × 1 = 26 valence electrons, and the hydrogen atoms always go on the outside of the structure. For many purposes, butane can be treated as an ideal gas at temperatures above its boiling point of about -1 °C.

Hybridization is the combination of two or more atomic orbitals to form the same number of hybrid orbitals, each having the same shape and energy. The new orbitals have, in varying proportions, the properties of the original orbitals taken separately. We can use Lewis dot structures to determine bonding patterns in molecules, then apply VSEPR to predict molecular shapes from the valence electron pairs. Once we know a molecular shape, we can start to look at the physical properties of a compound; for example, we can predict whether the molecule will be polar.

In alkanes, every carbon atom is sp3 hybridized, all bonds are single bonds, and every carbon is tetrahedral. This holds regardless of the carbon's degree (primary, secondary, tertiary, or quaternary), so the terminal carbons of butane are sp3, and so is the tertiary carbon of isobutane. Methane is the simplest alkane, followed by ethane, propane, and butane; the carbon chain constitutes the basic skeleton of the molecule. Chains of four or more carbons can be linear or branched: n-butane (CH3-CH2-CH2-CH3) is unbranched, meaning no carbon atom is bonded to more than two other carbon atoms, while 2-methylpropane (isobutane) is branched. Both have the molecular formula C4H10, so they are structural isomers.

Carbon shows three common types of hybridization:

- sp3: one s orbital mixes with three p orbitals, giving four equivalent hybrid orbitals arranged tetrahedrally (109.5° bond angles). All alkane carbons are sp3.
- sp2: one s orbital mixes with two p orbitals, giving three hybrid orbitals (each with 33% s character and 67% p character) in a trigonal planar arrangement with 120° bond angles. In ethene, each carbon uses sp2 orbitals for its three sigma bonds, which lie in one plane, and the remaining unhybridized p orbital on each carbon overlaps laterally to form the pi bond. Each carbon in benzene likewise bonds to three groups and is sp2.
- sp: one s orbital mixes with one p orbital, giving two hybrid orbitals in a linear arrangement with a 180° bond angle. The most common alkyne is ethyne (acetylene), whose carbons use sp hybrid orbitals for their sigma bonds; the two remaining p orbitals on each carbon overlap to form two pi bonds.

The more s character an orbital has, the lower in energy (more stable) its electrons are; this is why the acidity of C-H bonds follows the order sp > sp2 > sp3.

But-2-ene (CH3-CH=CH-CH3) shows how hybridization can differ within a single molecule. Counting the carbons from left to right, the first and fourth carbons each bond to three hydrogens and one carbon; they are sp3, with four sigma bonds apiece. The second and third carbons each form three sigma bonds (to one hydrogen and two carbons) and share one pi bond, so they are sp2; the unhybridized p orbital on each central carbon overlaps laterally to create the pi bond between carbons 2 and 3.
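The valence-electron bookkeeping used when drawing a Lewis structure is simple enough to express in code. This is a minimal illustrative sketch (the function name is mine, not from any chemistry library): each carbon contributes 4 valence electrons and each hydrogen contributes 1.

```python
# Minimal sketch: valence-electron count for a hydrocarbon CxHy,
# as used when drawing Lewis structures. Each C contributes 4
# valence electrons, each H contributes 1.
def valence_electrons(n_carbon: int, n_hydrogen: int) -> int:
    return 4 * n_carbon + 1 * n_hydrogen

print(valence_electrons(4, 10))  # butane C4H10 -> 26
print(valence_electrons(2, 6))   # ethane C2H6 -> 14
```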
2021-03-08 11:46:24
https://community.dataquest.io/t/information-gain-confusion-in-calculation/521231
# Information Gain - Confusion in Calculation
Can someone please shed some light on how we arrived at the calculation, specifically 2/4 & 1/5
https://app.dataquest.io/m/89/introduction-to-decision-trees/10/information-gain
Details on the formula itself can be found in my response here - Trouble understanding Information Gain formula
First, let’s look at 1/5. In the content, we are given -
for each unique value v in the variable A, we compute the number of rows in which A takes on the value v, and divide it by the total number of rows.
This is what $\frac{|T_v|}{|T|}$ is.
What is A?
• split_age
How many unique values are there in A?
• 2; 0 and 1.
What is the number of rows in which A takes on the value 1?
• 1.
What is the total number of rows?
• 5.
So,
compute the number of rows in which A takes on the value v, and divide it by the total number of rows.
is 1/5.
Coming to 2/4.
This can be a bit confusing. But remember that we are calculating the entropy for T_v where v is a unique value in A.
Previously, we calculated the Entropy for T. So, we end up calculating the probability of all the unique values that are present in T. That’s how we get the 0.97. This was also covered in Step 8 of this Mission.
But Entropy(T_v) means we are only looking at the Entropy of T for when the value in A is either 0 or 1.
So, when v is 0 in A -
How many total rows in high_income (this is our T) correspond to rows in A with value 0?
• 4.
These are the 4 rows -
How many of those 4 rows in high_income have the value 0?
• 2.
How many of those 4 rows in high_income have the value 1?
• 2.
So, what’s the probability of the value being 0 given the above?
• 2/4
What’s the probability of the value being 1 given the above?
• 2/4
You can similarly calculate the entropy of T for when v in A is 1. Which should be fairly simple.
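Putting the whole calculation together, here is a small self-contained Python sketch of the information-gain formula discussed above. The five-row `high_income` / `split_age` data is hypothetical, chosen only so the numbers match the thread: Entropy(T) comes out near 0.97, the v = 1 branch carries weight 1/5, and the v = 0 subset splits 2/4 versus 2/4.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(target, split):
    """IG(T, A) = Entropy(T) - sum over v of (|T_v| / |T|) * Entropy(T_v)."""
    n = len(target)
    remainder = 0.0
    for v in set(split):
        subset = [t for t, s in zip(target, split) if s == v]
        remainder += (len(subset) / n) * entropy(subset)
    return entropy(target) - remainder

# Hypothetical 5-row data chosen to reproduce the numbers in the thread:
# split_age (A) is 1 in exactly one row (weight 1/5) and 0 in four rows,
# and within the A == 0 rows, high_income splits 2/4 vs 2/4.
high_income = [0, 1, 0, 1, 0]
split_age   = [0, 0, 0, 0, 1]

print(round(entropy(high_income), 2))                      # 0.97
print(round(information_gain(high_income, split_age), 3))  # 0.171
```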
2022-07-04 02:51:13
https://reviews.llvm.org/D70701?id=
# Fix more VFS tests on Windows (Closed, Public)
Authored by amccarth on Nov 25 2019, 4:06 PM.
# Details
Reviewers
rnk vsapsai arphaman Bigcheese
Commits
rG738b5c9639b4: Fix more VFS tests on Windows
Summary
Since VFS paths can be in either Posix or Windows style, we have to use a more flexible definition of "absolute" path.
The key here is that FileSystem::makeAbsolute is now virtual, and the RedirectingFileSystem override checks for either concept of absolute before trying to make the path absolute by combining it with the current directory.
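The "absolute in either style" check at the heart of the change can be sketched in a few lines. This is an illustrative Python mock-up, not the actual C++ patch; the real change makes `FileSystem::makeAbsolute` virtual and has the `RedirectingFileSystem` override accept a path that is already absolute in either Posix or Windows style before combining it with the current directory.

```python
import re

def is_posix_absolute(path: str) -> bool:
    # Posix-style absolute paths start with "/".
    return path.startswith("/")

def is_windows_absolute(path: str) -> bool:
    # Windows-style absolute paths start with a drive letter, a colon,
    # and a separator, e.g. "C:/..." or "C:\\...".
    return re.match(r"[A-Za-z]:[/\\]", path) is not None

def is_absolute_either_style(path: str) -> bool:
    # Only if a path is absolute in neither style would makeAbsolute
    # need to prepend the current working directory.
    return is_posix_absolute(path) or is_windows_absolute(path)

# Entries like those found in the failing tests' YAML files:
print(is_absolute_either_style("/tests"))                              # True
print(is_absolute_either_style("C:/src/llvm-project/clang/test/VFS"))  # True
print(is_absolute_either_style("relative/path"))                       # False
```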
# Diff Detail
### Event Timeline
amccarth created this revision. Nov 25 2019, 4:06 PM
Herald added a project: Restricted Project. Nov 25 2019, 4:06 PM
llvm/lib/Support/VirtualFileSystem.cpp
1077–1078
I think Posix-style paths are considered absolute, even on Windows. The opposite isn't true, a path with a drive letter is considered to be relative if the default path style is Posix. But, I don't think that matters. We only end up in this mixed path style situation on Windows.
Does this change end up being necessary? I would expect the existing implementation of makeAbsolute to do the right thing on Windows as is. I think the other change seems to be what matters.
1431
Is there a way to unit test this? I see some existing tests in llvm/unittests/Support/VirtualFileSystemTest.cpp.
I looked at the yaml files in the VFS tests this fixes, and I see entries like this:
{ 'name': '/tests', 'type': 'directory', ... },
{ 'name': '/indirect-vfs-root-files', 'type': 'directory', ... },
{ 'name': 'C:/src/llvm-project/clang/test/VFS/Inputs/Broken.framework', 'type': 'directory', ... },
{ 'name': 'C:/src/llvm-project/build/tools/clang/test/VFS/Output/vfsroot-with-overlay.c.tmp/cache', 'type': 'directory', ... },
So it makes sense to me that we need to handle both kinds of absolute path.
1448–1449
I wonder if there's a simpler fix here. If the working directory is an absolute Windows-style path, we could prepend the drive letter of the working directory onto any absolute Posix-style paths read from YAML files. That's somewhat consistent with what native Windows tools do. In cmd, you can run cd \Windows, and that will mean C:\Windows if the active drive letter is C. I think on Windows every drive has an active directory, but that's not part of the file system model.
amccarth marked 3 inline comments as done.Dec 2 2019, 1:56 PM
llvm/lib/Support/VirtualFileSystem.cpp
1077–1078
Yes, it's necessary. The Posix-style path \tests is not considered absolute on Windows. Thus the original makeAbsolute would merge it with the current working directory, which gives it a drive letter, like D:\tests\. The drive letter component then causes comparisons to fail.
1431
Is there a way to unit test this?
What do you mean by "this"? The /tests and /indirect-vfs-root-files entries in that yaml are the ones causing several tests to fail without this fix, so I take it that this is already being tested. But perhaps you meant testing something more specific?
1448–1449
I'm not seeing how this would be simpler.
I don't understand the analogy of your proposal to what the native Windows tools do. When you say, cd \Windows, the \Windows is _not_ an absolute path. It's relative to the current drive.
I could be wrong, but I don't think prepending the drive onto absolute Posix-style paths read from YAML would work. That would give us something like D:/tests (which is what was happening in makeAbsolute) and that won't match paths generated from non-YAML sources, which will still come out as /tests/foo.h.
I think on Windows every drive has an active directory ....
Yep. I think of it as "every drive has a _current_ directory" and each process has a current drive. When you want the current working directory, you get the current directory of the current drive.
ormris added a subscriber: ormris.Dec 2 2019, 2:10 PM
amccarth marked 2 inline comments as done.Dec 3 2019, 12:53 PM
A (hopefully) more cogent response than my last round. I'm still hoping to hear from VFS owners.
llvm/lib/Support/VirtualFileSystem.cpp
1077–1078
To make sure I wasn't misremembering or hallucinating, I double-checked the behavior here: Posix-style paths like \tests are not considered absolute paths in Windows-style.
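For what it's worth, Python's pathlib draws exactly the same distinction, so the semantics are easy to reproduce outside LLVM (this is an illustrative analogy, not the code under review):

```python
from pathlib import PurePosixPath, PureWindowsPath

# A rooted Posix-style path is absolute under Posix rules...
assert PurePosixPath("/tests").is_absolute()

# ...but under Windows rules "/tests" has a root directory and no
# drive (root name), so it is NOT considered absolute.
assert not PureWindowsPath("/tests").is_absolute()

# Conversely, a drive-lettered path is absolute under Windows rules
# but is an ordinary relative path under Posix rules.
assert PureWindowsPath("C:/src/llvm-project").is_absolute()
assert not PurePosixPath("C:/src/llvm-project").is_absolute()
```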
1448–1449
... I don't think prepending the drive onto absolute Posix-style paths read from YAML would work....
I misspoke. The idea of prepending the drive onto absolute Posix-style paths read from the YAML probably could be made to work for the common cases like the ones in these tests.
But I still don't think that approach would simplify anything.
Furthermore, I think it could open a potential Pandora's box of weird corner cases. For example, in a system with multiple drives, the current drive may not always be the "correct" one to use. Slapping a drive letter onto a Posix-style path creates a Frankenstein hybrid that's neither Windows-style nor Posix-style. Doing so because we know the subsequent code would then recognize it as an absolute path seems like a way to create an unnecessary coupling between the VFS YAML parser and the LLVM path support.
In my mind, the model here is that these roots can be in either style. I prefer to let the code explicitly reflect that fact rather than trying to massage some of the paths into a form that happens to cause the right outcome.
llvm/lib/Support/VirtualFileSystem.cpp
1077–1078
I see, I agree, the platforms diverge here:
bool rootDir = has_root_directory(p, style);
bool rootName =
(real_style(style) != Style::windows) || has_root_name(p, style);
return rootDir && rootName;
So, on Windows rootDir is true and rootName is false.
I still wonder if this behavior shouldn't be pushed down into the base class. If I pass the path \test to the real FileSystem::makeAbsolute on Windows, should that prepend the CWD, or should it leave the path alone?
1431
What do you mean by "this"?
I guess what I meant was, can you unit test the whole change in case there are behavior differences here not covered by the clang tests?
1448–1449
The way in which I see it as being "simpler" is that all absolute paths would end up having drive letters on Windows. I was hoping that would avoid some corner cases that arise from having a virtual file system tree in memory where some files are rooted in a drive letter and some appear in non-drive letter top level directories from Posix paths.
In any case, I think the change you have here is definitely correct, and is a smaller change in behavior. The path component iterators below need to use the correct style.
amccarth marked 2 inline comments as done.Dec 9 2019, 4:20 PM
llvm/lib/Support/VirtualFileSystem.cpp
1431
This change causes no regressions in those llvm unit tests (llvm/unittests/Support/VirtualFileSystemTest.cpp). They all still pass.
But thanks for pointing those out, because my subsequent changes do seem to make a difference.
rnk accepted this revision.Dec 17 2019, 3:26 PM
+@JDevlieghere, due to recent VFS test fixup.
I think this looks good, and in the absence of input from other VFS stakeholders, let's land this soon. Thanks!
llvm/lib/Support/VirtualFileSystem.cpp
1077–1078
I think we discussed this verbally, and decided we should prepend the CWD, as is done here.
This revision is now accepted and ready to land.Dec 17 2019, 3:26 PM
This revision was automatically updated to reflect the committed changes.
amccarth marked an inline comment as done.
Herald added a project: Restricted Project. Dec 18 2019, 11:45 AM
@rnk / @amccarth, I've been looking at the history of makeAbsolute being virtual, since it's a bit awkward that RedirectingFileSystem behaves differently from the others. I'm hoping you can give me a bit more context.
I'm wondering about an alternative implementation where:
• The path style is detected when reading the YAML file (as now).
• Paths in the YAML file are canonicalized to native at parse time.
• The nodes in-memory all use native paths so the non-native style isn't needed after construction is complete.
• makeAbsolute() doesn't need to be overridden / special.
Was this considered? If so, why was it rejected? (If it doesn't work, why not?)
If we could limit the scope the special version of makeAbsolute() to "construction time" it would simplify the mental model. As it stands it's a bit difficult to reason about makeAbsolute, and whether it's safe/correct to send a makeAbsolute-d path into ExternalFS.
I didn't design VFS. I just fixed a bunch of portability bugs that
prevented us from running most of the VFS tests on Windows. If I were
designing it, I hope I wouldn't have done it this way. But the horse is out of the barn.
Here's my background with VFS (and, specifically, the redirecting
filesystem):
VFS is used in many tests to map a path in an arbitrary (usually Posix)
style to another path, possibly in a different filesystem. This allows the
test to be platform agnostic. For example, the test can refer to
/foo/bar.h if it uses the redirecting filesystem to map /foo to an
actual directory on the host. If it's a Windows machine, that target might
be something like C:\foo, in which case the actual file would be
C:\foo\bar.h, which LLVM thinks of as C:\foo/bar.h. That's inherently
a path with hybrid style. There are other cases, too, e.g., where the host
is Windows, but the target filesystem is an LLVM in-memory one (which uses
only Posix style).
When I first tried to tackle the portability bugs, I tried various
normalization/canonicalization strategies, but always encountered a
blocker. That's when rnk pointed out to me that clang generally doesn't do
any path normalization; it just treats paths as strings that can be
concatenated. With that in mind, I tried accepting the fact that hybrid
path styles are a fact of life in VFS, and suddenly nearly all of the
portability problems became relatively easy to solve.
Note that lots of LLVM and Clang tests were using VFS, but the VFS tests
themselves couldn't run on Windows. All those tests were built upon
functionality that wasn't being tested.
I think that we probably could do something simpler, but it would force a
breaking change in the design of the redirecting filesystem. The most
obvious victim of that break would be various LLVM and clang tests that
exclusively use Posix-style paths and rely on VFS to make it work on
non-Posix OSes. I'm not sure how significant the break would be for others.
• The path style is detected when reading the YAML file (as now).
Which path's style? The virtual one that's going to be redirected or the
actual one it's redirected at?
• Paths in the YAML file are canonicalized to native at parse time.
If we canonicalize the virtual path, VFS would no longer be valuable for
creating platform-agnostic tests.
I don't remember the details, but canonicalizing the target paths caused
problems. Do we need to be careful about multiple redirections (e.g.,
/foo directs to /zerz which directs to C:\bumble)? I seem to recall
there was a good reason why the redirecting filesystem waits to the last
moment to convert a virtual path to an actual host path.
• The nodes in-memory all use native paths so the non-native style isn't
needed after construction is complete.
I'm guessing that would affect how paths are displayed (e.g., in diagnostic
messages). At a minimum, we'd have to fix some tests. I don't know all
the contexts this might occur and how that might affect things. For
example, paths may be written into debug info metadata.
• makeAbsolute() doesn't need to be overridden / special.
Honestly, I'm not sure we have a good definition of what makeAbsolute
should do. Sure, on Posix, it's well understood. But is \foo an
absolute path on Windows? Is D:bar.h an absolute path on Windows? If
not, how should those be made absolute? LLVM assumes that there's a well
defined mapping between Posix filesystem concepts and the host filesystem.
But I haven't seen any documentation on how a Posix->Windows mapping should
work (let alone the inverse), and I certainly don't have an intuitive
understanding of how that mapping should work.
In LLDB, we have the additional wrinkle of remote debugging, where the
debugger may be running on a Windows machine while the program being
debugged is running on a Linux box. You always have to know whether a path
will be used on the debugger host or the debuggee host. And there are
similar concerns for post-mortem debugging from a crash collected on a
different type of host.
I'm not opposed to making this better, but I don't think I understand your
proposal well enough to discuss it in detail. I'm pretty sure anything
that eliminates hybrid paths is going to cause some breaking changes. That
might be as simple as fixing up a bunch of tests, but it might have wider
impact.
Thanks for the quick and detailed response!
Your explanation of hybrid behaviour on Windows was especially helpful, as I didn't fully understand how that worked before.
One thing I see, but wasn't obvious from your description, is that the mixed/hybrid separator behaviour only happens for defined(_WIN32). E.g., from Path.cpp:
bool is_separator(char value, Style style) {
if (value == '/')
return true;
if (real_style(style) == Style::windows)
return value == '\\';
return false;
}
• The path style is detected when reading the YAML file (as now).
Which path's style? The virtual one that's going to be redirected or the
actual one it's redirected at?
Both, but you've mostly convinced me not to go down this route.
• Paths in the YAML file are canonicalized to native at parse time.
If we canonicalize the virtual path, VFS would no longer be valuable for
creating platform-agnostic tests.
This is a good point I hadn't considered.
In LLDB, we have the additional wrinkle of remote debugging, where the
debugger may be running on a Windows machine while the program being
debugged is running on a Linux box. You always have to know whether a path
will be used on the debugger host or the debuggee host. And there are
similar concerns for post-mortem debugging from a crash collected on a
different type of host.
Ah, interesting.
Honestly, I'm not sure we have a good definition of what makeAbsolute
should do.
Perhaps the name isn't ideal -- prependRelativePathsWithCurrentWorkingDirectory() would be more precise -- but otherwise I'm not sure I fully agree. Regardless, I acknowledge your point that the two worlds are awkwardly different.
I'm going to think about other options; thanks again for your feedback. I am still leaning toward FileSystem::makeAbsolute() not being virtual / overridden, but I have a better idea of how to approach that. One idea is for the RedirectingFileSystem to keep track of where different styles were used when parsing.... I'm not sure if that'll pan out though.
BTW, I hope I didn't come across as overly negative in my previous
response. I'd love to see the situation improved. I just don't know what
that would look like.
One thing I see, but wasn't obvious from your description, is that the
mixed/hybrid separator behaviour only happens for defined(_WIN32).
[copy of is_separator elided]
It's a runtime decision based on the specified style, not a compile-time
one based on _WIN32. If the caller of is_separator passes
Style::windows then it'll accept either / or \\, even if LLVM was
compiled and run on Linux or Mac. I believe there's a VFS test that
redirects a Windows-style path to a Posix-style one (an in-memory file
system), and that test passes on both kinds of hosts.
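As a quick cross-check of that runtime-per-style behavior, the same semantics can be demonstrated with Python's pathlib on any host (an analogy, not LLVM code):

```python
from pathlib import PurePosixPath, PureWindowsPath

# Windows-style parsing accepts both '/' and '\' as separators,
# no matter which host this runs on.
assert PureWindowsPath("C:/a\\b").parts == ("C:\\", "a", "b")

# Posix-style parsing treats '\' as an ordinary filename character.
assert PurePosixPath("a\\b").parts == ("a\\b",)
```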
But I get the gist of the point. My feeling is that, unless we can
eliminate hybrid styles, Paths should support a Style::hybrid. It would be
messy because more ambiguities about how to map things crop up.
Honestly, I'm not sure we have a good definition of what makeAbsolute
should do.
I should have said: I don't have a good understanding of what
makeAbsolute should do. Even saying it should prepend the current working
directory is an incomplete answer. On Windows, a process can have multiple
current directories: one for each drive. And a process has one current
drive. So the current working directory is the current directory for the
current drive. A Windows path like "D:foo.txt" is a relative path.
Literally prepending the current working directory gives us
C:\users\me\D:foo.txt, which is syntactically wrong. But even if we're
smart enough to fix up the syntax, we'd get C:\users\me\foo.txt or
D:\users\me\foo.txt, both of which would (likely) be wrong. The way the
OS would resolve it is to look up the process's current directory for the
D: drive, and insert that into the missing bit.
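The drive-relative case is easy to see with Python's pathlib (again only an illustration of the path semantics, not LLVM code):

```python
from pathlib import PureWindowsPath

p = PureWindowsPath("D:foo.txt")

# Drive-relative: the path has a drive but no root directory,
# so it is not absolute, and literally prepending a CWD such as
# C:\users\me would produce a syntactically invalid path.
assert p.drive == "D:"
assert p.root == ""
assert not p.is_absolute()
```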
Anyway, I look forward to any and all proposals to improve this situation.
BTW, I hope I didn't come across as overly negative in my previous
response.
No, not at all!
[...] On Windows, a process can have multiple
current directories: one for each drive. And a process has one current
drive. So the current working directory is the current directory for the
current drive. A Windows path like "D:foo.txt" is a relative path.
Also news to me; thanks for the extra info. Looks like this isn't just a problem with makeAbsolute; even sys::fs::make_absolute is incorrect for Windows.
Seems to me like we could:
• Fix the one argument version of sys::fs::make_absolute on Windows by calling GetFullPathNameW (documented at https://en.cppreference.com/w/cpp/filesystem/absolute)
• "Fix" the two argument version of sys::fs::make_absolute by returning an error if the path and CWD both have drive names and they don't match. (Or leave it alone.)
For the VFS, I imagine we could:
• Keep an enum on FileSystem that indicates its path style (immutable, set on construction).
• Maybe change APIs to support the Windows idea of a different CWD per drive (getCurrentRootDirectoryFor(...)), in order to have a correct implementation of the one-argument makeAbsolute (I assume there's a way to get the full initial set to support getPhysicalFileSystem()?).
• Split RedirectingFileSystem implementation in two, between the "redirecting" and the "overlay" parts (overlaying being optional, depending on fallthrough:).
• Add a WindowsFileSystemAsPOSIX adaptor/proxy that takes an underlying filesystem in windows mode and presents it as-if posix, given a drive mapping in the constructor. This could be implemented using the same guts as the "redirecting" part of RedirectingFileSystem. (Note also https://reviews.llvm.org/D94844, which allows a directory to be remapped/redirected wholesale; this could be used for mapping drives to POSIX paths.)
https://ocelot-collab-docu.readthedocs.io/en/latest/autoapi/ocelot/cpbd/optics/index.html
# ocelot.cpbd.optics¶
## Module Contents¶
### Classes¶
SecondOrderMult – The class includes three different methods for transforming the particle coordinates.
TransferMap – TransferMap is a basic linear transfer map for all elements.
SecondTM, CorrectorTM, PulseTM, MultipoleTM, CavityTM, CouplerKickTM, KickTM, UndulatorTestTM, RungeKuttaTM, TWCavityTM – subclasses of TransferMap (they carry the same inherited summary: “TransferMap is a basic linear transfer map for all elements.”)
RungeKuttaTrTM – The same method as RungeKuttaTM but only transverse dynamics is included; longitudinal dynamics is skipped.
MethodTM – The class creates a transfer map for elements that depend on user-defined parameters (“parameters”).
ProcessTable
Navigator – Navigator defines step (dz) of tracking and which physical process will be applied during each step.
### Functions¶
transform_vec_ent(X, dx, dy, tilt)
transform_vec_ext(X, dx, dy, tilt)
transfer_maps_mult_py(Ra, Ta, Rb, Tb) – cell = [A, B]
transfer_map_rotation(R, T, tilt)
lattice_transfer_map(lattice, energy) – Function calculates transfer maps, the first and second orders (R, T), for the whole lattice.
trace_z(lattice, obj0, z_array) – Z-dependent tracer (twiss(z) and particle(z))
trace_obj(lattice, obj, nPoints=None) – track object through the lattice
periodic_twiss(tws, R) – initial conditions for a periodic Twiss solution
twiss(lattice, tws0=None, nPoints=None) – twiss parameters calculation
twiss_fast(lattice, tws0=None) – twiss parameters calculation
get_map(lattice, dz, navi)
merge_maps(t_maps)
fodo_parameters(betaXmean=36.0, L=10.0, verbose=False)
ocelot.cpbd.optics.__author__ = Sergey
ocelot.cpbd.optics._logger
ocelot.cpbd.optics._logger_navi
ocelot.cpbd.optics.nb_flag = True
class ocelot.cpbd.optics.SecondOrderMult
The class includes three different methods for transforming the particle coordinates:
1. NUMBA module - DEACTIVATED, because new numpy implementation shows higher performance.
Slightly faster than NUMPY for simulations with a large number of time steps. Uses full matrix multiplication.
2. NUMPY module
Base method to be used. Uses full matrix multiplication.
numba_apply(self, X, R, T)
numpy_apply(self, X, R, T)
ocelot.cpbd.optics.transform_vec_ent(X, dx, dy, tilt)
ocelot.cpbd.optics.transform_vec_ext(X, dx, dy, tilt)
ocelot.cpbd.optics.transfer_maps_mult_py(Ra, Ta, Rb, Tb)
Multiplies the transfer maps of cell = [A, B], giving Rc = Rb * Ra.
Parameters: Ra, Ta, Rb, Tb, sym_flag.
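As a one-dimensional caricature of how such a composition works (assuming each map acts as x -> R*x + T*x**2 and truncating above second order; the real routine operates on full 6-D R matrices and T tensors):

```python
def compose_second_order(Ra, Ta, Rb, Tb):
    """Compose b(a(x)) for scalar maps x -> R*x + T*x**2,
    keeping terms up to second order:
    Rc = Rb*Ra,  Tc = Rb*Ta + Tb*Ra**2."""
    return Rb * Ra, Rb * Ta + Tb * Ra ** 2

Ra, Ta, Rb, Tb = 2.0, 0.1, 3.0, 0.2
Rc, Tc = compose_second_order(Ra, Ta, Rb, Tb)

# cross-check against direct substitution at a small amplitude,
# where the dropped third- and fourth-order terms are negligible
x = 1e-4
direct = Rb * (Ra * x + Ta * x * x) + Tb * (Ra * x + Ta * x * x) ** 2
truncated = Rc * x + Tc * x * x
assert abs(direct - truncated) < 1e-10
```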
ocelot.cpbd.optics.transfer_maps_mult
ocelot.cpbd.optics.transfer_map_rotation(R, T, tilt)
class ocelot.cpbd.optics.TransferMap
TransferMap is a basic linear transfer map for all elements.
map_x_twiss(self, tws0)
mul_p_array(self, rparticles, energy=0.0)
__mul__(self, m)
Parameters
m – TransferMap, Particle or Twiss
Returns
TransferMap, Particle or Twiss
Ma = {Ba, Ra, Ta}, Mb = {Bb, Rb, Tb}
X1 = R*(X0 - dX) + dX = R*X0 + B, where B = (E - R)*dX
apply(self, prcl_series)
Parameters
prcl_series – can be list of Particles [Particle_1, Particle_2, … ] or ParticleArray
Returns
None
__call__(self, s)
class ocelot.cpbd.optics.SecondTM(r_z_no_tilt, t_mat_z_e)
TransferMap is a basic linear transfer map for all elements.
t_apply(self, R, T, X, dx, dy, tilt, U5666=0.0)
__call__(self, s)
class ocelot.cpbd.optics.CorrectorTM(angle_x=0.0, angle_y=0.0, r_z_no_tilt=None, t_mat_z_e=None)
TransferMap is a basic linear transfer map for all elements.
kick_b(self, z, l, angle_x, angle_y)
kick(self, X, z, l, angle_x, angle_y, energy)
__call__(self, s)
class ocelot.cpbd.optics.PulseTM(kn)
TransferMap is a basic linear transfer map for all elements.
mul_parray(self, rparticles, energy=0.0)
class ocelot.cpbd.optics.MultipoleTM(kn)
TransferMap is a basic linear transfer map for all elements.
kick(self, X, kn)
__call__(self, s)
class ocelot.cpbd.optics.CavityTM(v=0, freq=0.0, phi=0.0)
TransferMap is a basic linear transfer map for all elements.
map4cav(self, X, E, V, freq, phi, z=0)
__call__(self, s)
class ocelot.cpbd.optics.CouplerKickTM(v=0, freq=0.0, phi=0.0, vx=0.0, vy=0.0)
TransferMap is a basic linear transfer map for all elements.
kick_b(self, v, phi, energy)
kick(self, X, v, phi, energy)
__call__(self, s)
class ocelot.cpbd.optics.KickTM(angle=0.0, k1=0.0, k2=0.0, k3=0.0, nkick=1)
TransferMap is a basic linear transfer map for all elements.
kick(self, X, l, angle, k1, k2, k3, energy, nkick=1)
does not work for dipole
kick_apply(self, X, l, angle, k1, k2, k3, energy, nkick, dx, dy, tilt)
__call__(self, s)
class ocelot.cpbd.optics.UndulatorTestTM(lperiod, Kx, ax=0, ndiv=10)
TransferMap is a basic linear transfer map for all elements.
map4undulator(self, u, z, lperiod, Kx, ax, energy, ndiv)
__call__(self, s)
class ocelot.cpbd.optics.RungeKuttaTM(s_start=0, npoints=200)
TransferMap is a basic linear transfer map for all elements.
__call__(self, s)
class ocelot.cpbd.optics.RungeKuttaTrTM(s_start=0, npoints=200)
The same method as RungeKuttaTM but only transverse dynamics is included; longitudinal dynamics is skipped
class ocelot.cpbd.optics.TWCavityTM(l=0, v=0, phi=0, freq=0)
TransferMap is a basic linear transfer map for all elements.
tw_cavity_R_z(self, z, V, E, freq, phi=0.0)
Parameters
• z – length
• de – delta E
• f – frequency
• E – initial energy
Returns
matrix
f_entrance(self, z, V, E, phi=0.0)
f_exit(self, z, V, E, phi=0.0)
__call__(self, s)
class ocelot.cpbd.optics.MethodTM(params=None)
The class creates a transfer map for elements that depend on user-defined parameters (“parameters”). By default, the parameters = {“global”: TransferMap}, which means that all elements will have linear transfer maps. You can also specify different transfer maps for any type of element.
# use linear matrices for all elements except Sextupole,
# which will have a nonlinear kick map (KickTM)
method = MethodTM()
method.global_method = TransferMap
method.params[Sextupole] = KickTM
# All elements are assigned matrices of the second order.
# Elements for which there are no second-order matrices
# are assigned default, i.e. linear, matrices.
method2 = MethodTM()
method2.global_method = SecondTM
create_tm(self, element)
set_tm(self, element, method)
ocelot.cpbd.optics.sym_matrix(T)
ocelot.cpbd.optics.unsym_matrix(T)
ocelot.cpbd.optics.lattice_transfer_map(lattice, energy)
Function calculates transfer maps, the first and second orders (R, T), for the whole lattice. Second order matrices are attached to the lattice object:
lattice.T_sym – symmetric second order matrix
lattice.T – second order matrix
lattice.R – linear R matrix
Parameters
• lattice – MagneticLattice
• energy – the initial electron beam energy [GeV]
Returns
R - matrix
ocelot.cpbd.optics.trace_z(lattice, obj0, z_array)
Z-dependent tracer (twiss(z) and particle(z)). Usage: twiss = trace_z(lattice, twiss_0, [1.23, 2.56, …]) to calculate Twiss params at 1.23 m, 2.56 m, etc.
ocelot.cpbd.optics.trace_obj(lattice, obj, nPoints=None)
Track an object through the lattice; obj must be Twiss or Particle.
ocelot.cpbd.optics.periodic_twiss(tws, R)
initial conditions for a periodic Twiss solution
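For intuition, here is a minimal one-plane sketch of the matched (periodic) Twiss computation from a 2x2 one-turn matrix, using the textbook relations cos(mu) = (R11 + R22)/2, beta = R12/sin(mu), alpha = (R11 - R22)/(2 sin(mu)). The actual ocelot routine works with the full transfer matrix; this standalone helper is only an illustration:

```python
import math

def periodic_twiss_2x2(R):
    """Matched Twiss parameters for a 2x2 one-turn matrix
    R = [[r11, r12], [r21, r22]]; returns None if unstable."""
    (r11, r12), (r21, r22) = R
    cos_mu = 0.5 * (r11 + r22)
    if abs(cos_mu) >= 1.0:
        return None  # |trace| >= 2: no stable periodic solution
    sin_mu = math.copysign(math.sqrt(1.0 - cos_mu ** 2), r12)
    beta = r12 / sin_mu
    alpha = 0.5 * (r11 - r22) / sin_mu
    gamma = -r21 / sin_mu  # equals (1 + alpha**2) / beta
    return beta, alpha, gamma

# round-trip check: build R from known beta = 2 m, alpha = 0, mu = 60 deg
mu, beta0 = math.pi / 3, 2.0
R = [[math.cos(mu), beta0 * math.sin(mu)],
     [-math.sin(mu) / beta0, math.cos(mu)]]
beta, alpha, gamma = periodic_twiss_2x2(R)
assert abs(beta - beta0) < 1e-12 and abs(alpha) < 1e-12
```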
ocelot.cpbd.optics.twiss(lattice, tws0=None, nPoints=None)
twiss parameters calculation
Parameters
• lattice – lattice, MagneticLattice() object
• tws0 – initial twiss parameters, Twiss() object. If None, try to find periodic solution.
• nPoints – number of points per cell. If None, then twiss parameters are calculated at the end of each element.
Returns
list of Twiss() objects
ocelot.cpbd.optics.twiss_fast(lattice, tws0=None)
twiss parameters calculation
Parameters
• lattice – lattice, MagneticLattice() object
• tws0 – initial twiss parameters, Twiss() object. If None, try to find periodic solution.
Returns
list of Twiss() objects
class ocelot.cpbd.optics.ProcessTable(lattice)
searching_kick_proc(self, physics_proc, elem1)
Function finds a kick physics process. A kick physics process applies its kick only once: between two elements with zero length (e.g. Marker), or at the beginning of the element if elem1 and elem2 are the same element. Other physics processes are applied over finite lengths.
add_physics_proc(self, physics_proc, elem1, elem2)
class ocelot.cpbd.optics.Navigator(lattice)
Navigator defines step (dz) of tracking and which physical process will be applied during each step.
lattice – MagneticLattice
Attributes:
unit_step = 1 [m] - unit step for all physics processes
Methods:
add_physics_proc(physics_proc, elem1, elem2) – physics_proc is a physics process, e.g. CSR, SpaceCharge or Wake; elem1 and elem2 are the first and last elements between which the physics process will be applied.
reset_position(self)
Method to reset the Navigator position.
go_to_start(self)
get_phys_procs(self)
Method returns a list of all physics processes that were added.
Returns
list, list of PhysProc(s)
add_physics_proc(self, physics_proc, elem1, elem2)
Parameters
• physics_proc – PhysicsProc, e.g. SpaceCharge, CSR, Wake …
• elem1 – the element in the lattice where to start applying the physical process.
• elem2 – the element in the lattice where to stop applying the physical process, can be the same as starting element.
Returns
activate_apertures(self, start=None, stop=None)
Activate apertures if they exist in the lattice.
Parameters
• start – element; activate apertures starting from the ‘start’ element
• stop – element, activate apertures up to ‘stop’ element
Returns
check_overjump(self, dz, processes, phys_steps)
get_proc_list(self)
hard_edge_step(self, dz)
check_proc_bounds(self, dz, proc_list, phys_steps, active_process)
remove_used_processes(self, processes)
Once physics processes have been applied and are no longer needed, they are removed from the table.
Parameters
processes – list of processes that are about to be applied
Returns
None
get_next(self)
ocelot.cpbd.optics.get_map(lattice, dz, navi)
ocelot.cpbd.optics.merge_maps(t_maps)
ocelot.cpbd.optics.fodo_parameters(betaXmean=36.0, L=10.0, verbose=False)
https://math.stackexchange.com/questions/1621697/given-two-exponentially-distributed-random-variables-show-their-sum-is-also-exp
# Given two exponentially distributed random variables, show their sum is also exponentially distributed
Given two independent exponentially distributed random variables, I want to show their sum is also exponentially distributed. This is my try, I used convolution. It didn't get me anywhere...
• Why do you want to show this? It's not true: en.wikipedia.org/wiki/Erlang_distribution – Brian Tung Jan 21 '16 at 23:20
• The physically natural thing that is exponentially distributed is the minimum of the two. The sum can't be exponentially distributed, basically because the memoryless property $P(X+Y>t+s \mid X+Y>t)=P(X+Y>s)$ does not hold. Intuitively this is because the stochastic process which corresponds to $X+Y$ is the one where we wait for the event associated to $X$ to occur, then wait for the event associated to $Y$ to occur, then add up the times we spent waiting. So it "knows" whether we are currently waiting for $X$ vs. currently waiting for $Y$. – Ian Jan 21 '16 at 23:23
• Wierd, I think my teacher gave this as an exercise... Thank you! – Whyka Jan 21 '16 at 23:36
• Your dark blue expression for the density of the sum looks correct when $\lambda_1 \not = \lambda_2$. As @Ian says, your target expression in light blue is the density of the minimum. – Henry Jan 22 '16 at 7:36
If $\Pr(X>t) = e^{-t}$ for all $t\ge0$ and $\Pr(Y=2X)=1$, then $X$ and $Y$ are exponentially distributed and so is their sum.
At the opposite extreme, you'd have two independent exponentially distributed random variables. Their sum will never be exponentially distributed. The convolution you compute gives the density function of the sum only if they are independent.
The simplest case would be $\Pr(X>t) =\Pr(Y>t) = e^{-\lambda t}$ for all $t\ge0$ and $X,Y$ are independent. Then they both have the same density, $t\mapsto \lambda e^{-\lambda t}$ for $t\ge0$. The convolution of densities is \begin{align} t\mapsto \int_0^t \lambda e^{-\lambda u} \lambda e^{-\lambda(t-u)}\, du = \lambda^2 t e^{-\lambda t}. \end{align} The distribution with this density is not an exponential distribution.
Remember that the exponential distribution is memoryless. You're standing by the road measuring the time between when one car passes and when the next one passes. The probability that you need to wait another minute does not depend on how long you've already waited. But now suppose you're measuring the time until the $20$th car passes, and there's an average of one per minute. After $25$ minutes the $20$th car hasn't passed yet, and you don't know how many of the first $19$ cars have passed. Now the probability that the $20$th car comes in the next minute is higher than if you had waited only two minutes so far, because the probability that the first $19$ cars have already passed is much higher than it would be after just two minutes. So this is not a memoryless distribution. If the sum of independent exponentially distributed random variables were exponentially distributed, then that distribution would be memoryless.
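The failure of memorylessness for the sum can be checked directly from the survival functions (a quick numeric sanity check added here, not part of the original answer):

```python
import math

lam, t, s = 1.0, 2.0, 1.0

def exp_surv(u):
    # single exponential: P(X > u) = exp(-lam*u), which is memoryless
    return math.exp(-lam * u)

def sum_surv(u):
    # sum of two iid exponentials: integrating the density
    # lam**2 * v * exp(-lam*v) gives P(X+Y > u) = exp(-lam*u)*(1 + lam*u)
    return math.exp(-lam * u) * (1.0 + lam * u)

# memorylessness means P(> t+s) == P(> t) * P(> s)
assert abs(exp_surv(t + s) - exp_surv(t) * exp_surv(s)) < 1e-12  # holds
assert abs(sum_surv(t + s) - sum_surv(t) * sum_surv(s)) > 0.05   # fails
```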
• Great answer, thanks a lot! So, in general- the property of the sum having the same distribution as the independent components is equivalent to that distribution being memoryless? Iff? – Whyka Jan 22 '16 at 0:47
• I also find it useful to think of sum of two i.i.d. exponentials as a waiting time until the first count of a Poisson process with intensity $\lambda -\frac {\lambda}{1+\lambda t}=\lambda\frac {\lambda t}{1+\lambda t}$ which shows transition to a homogeneous Poisson for large $t$. – A.S. Jan 22 '16 at 0:49
• @Whyka : No: The distribution of either $X$ or $Y$ is memoryless in the example worked above, but the distribution of $X+Y$ is not memoryless. Nor is the distribution of the sum of $20$ of these, as the last paragraph explains. $\qquad$ – Michael Hardy Jan 22 '16 at 1:43
• So, just to make sure I understood you: "if the sum were exponentially distributed, this distribution would be memoryless" is true only because we are dealing with this specific distribution? – Whyka Jan 22 '16 at 1:50
• It's true because the exponential distribution is memoryless. $\qquad$ – Michael Hardy Jan 22 '16 at 1:57
You have proved that $f_{X+Y}(t)=\dfrac{\lambda_1\lambda_2}{\lambda_2-\lambda_1}[e^{-\lambda_2t}-e^{-\lambda_1t}]$ in the case $\lambda_2>\lambda_1$.
If $\lambda_2=\lambda_1$, you will obtain a $\Gamma(2,\lambda_1)$ distribution, which follows easily using moment generating functions.
Question:
ACTIVITY 1 Learning Skeletal Muscle Microstructure. Materials for This Activity: compound light microscope; prepared slide of skeletal muscle (teased); model of muscle cell showing parts of a myofibril. View a slide of skeletal muscle, and carefully focus and adjust the light so that you can see the striations, or alternating light and dark areas. The light areas are where we mainly have the thin actin proteins. The darker areas are where we mainly have the thick myosin proteins. Now view a model of a muscle cell showing the details of a myofibril. Identify the thick and thin myofilaments, Z-lines, and sarcomeres. Sketch a sarcomere in the space provided below, labeling the previously mentioned structures.
Open access peer-reviewed chapter
Issues in Solid-State Physics
Written By
Roberto Raúl Deza
Submitted: September 21st, 2018 Reviewed: January 14th, 2019 Published: August 23rd, 2019
DOI: 10.5772/intechopen.84367
From the Edited Volume
Metastable, Spintronics Materials and Mechanics of Deformable Bodies
Edited by Subbarayan Sivasankaran, Pramoda Kumar Nayak and Ezgi Günay
Abstract
In the first sections, we bring into the present context some of our past contributions on the influence of quantum correlations on the formation of tightly bound solids. We discuss the effects of the overlap between neighbor orbitals in diverse situations of interest—involving both bulk and surface states—and call the reader’s attention to an exact tight-binding calculation which allows gauging the errors introduced by the underlying hypotheses of the usual tight-binding approximation. We round up this part by reviewing a quantum Monte Carlo method specific for strongly correlated fermion systems. In the last section, we explore some non-equilibrium routes to (not necessarily tightly bound) solid state: we discuss spatiotemporal pattern formation in arrays of FitzHugh-Nagumo (FHN) neurons, akin to resonant crystal structures.
Keywords
• quantum correlations
• band structure
• tight-binding approach
• neighbor orbital overlap
• fermion Monte Carlo
• non-equilibrium pattern formation
• spatiotemporal synchronization
1. Introduction
Since childhood, we all have an intuition of what a solid is. However, most properties we intuitively assign to solids come in a vast range. Diamonds—and some metals—are hard, and ordinary glasses are brittle; but vulcanized rubber is neither, and it is a solid too. Perhaps the best characterization is this: at our human timescales, a solid does not flow. That is why this category includes glasses and ice (which do flow, but only on geological timescales).
Regarding their structure, a huge class of solids are crystalline. This is so to such an extent that solid state came to be synonymous with crystalline structure, and the more comprehensive category of condensed matter (which admittedly includes condensed fluids, or liquids) came into fashion. The name crystal was assigned in late antiquity to precious and semiprecious stones that stood out for their transparency and diaphaneity. In fact, the modern meaning of the term as "an almost perfectly ordered structure" easily explains those properties.1
Many solids we interact with—metals, stones, etc.—are random assemblies of grains, held together by strong adhesion forces. Like those of sand, quartz, or salt, those grains are very likely to be themselves crystals (which, as said, does not imply they are perfect: they may contain lots of impurities and defects). But there are two particular aspects of crystals we are concerned with here. The first is that unlike complex systems, which may display emergent structures at each scale (think, e.g., of mitochondria, cells, tissues, organs, etc.), crystals are very simple: they are huge assemblies of elementary building blocks (be they atoms, molecules, nanoclusters, or whatever). The second is that since the building blocks obey quantum mechanics, crystals inherit their quantum character (despite being themselves macroscopic).
As recent experiments have shown, whereas most interactions (but gravity) are effectively short-ranged, there is no limit for quantum correlations; and this fact makes them the most important fact to account for in modeling. Quantum correlations manifest themselves in many ways, but the by far dominant one comes from the indistinguishability of identical particles. Unless the crystal is a monolayer, the state vector of a system of many indistinguishable particles must be either totally symmetric or totally antisymmetric (a determinant) under exchange. In the first case, the particles obey Bose-Einstein statistics and are called bosons. In the second, the particles obey Fermi-Dirac statistics and are called fermions. The requirement that the state vector of a system with many fermions be totally antisymmetric is the celebrated exclusion principle, postulated by Pauli.
At present, there is no question that atoms are distinguishable. They can even be individually manipulated.2 Since in modeling crystals it suffices to take atoms as building blocks (we resolve up to the nanoscale), it does not matter that they are themselves composed of other indistinguishable particles (i.e., protons and neutrons, confined to $\sim 10^{-6}$ nm) besides electrons. Instead, considering the typical effective masses of electrons in metals and semiconductors, their thermal lengths at room temperature can reach the μm, so they are highly delocalized. The following two sections illustrate two different ways of dealing with Pauli's exclusion principle when modeling crystalline solids, corresponding to two radically different ways of doing quantum mechanics.
Section 2 keeps within the framework of first quantization. It is assumed that neither electrons (we mean crystal electrons, with effective masses) nor holes can be either created or destroyed. There is only one electron in the whole crystal, submitted to a potential which is mainly the juxtaposition of shielded Coulomb terms, due to atomic orbitals located at the crystal’s lattice sites. The way Pauli’s principle is dealt with is by comparing the one-electron band spectrum with the Fermi level of an ideal free-electron gas (see Nomenclature). The Fermi level is the chemical potential of such a gas. The exclusion principle can make it so high that for white dwarfs and neutron stars, the pressure it generates prevents the system from becoming a black hole. But the quantum correlation we are concerned with in this section is not Pauli’s principle but the overlap between atomic orbitals, usually neglected in simple tight-binding calculations of band structure. The main assumption of the tight-binding approach to band spectra is that atoms in a crystal interact only very weakly. As a consequence, the electron’s state vector should not differ very much from that of the plain juxtaposition of atomic orbitals located at the crystal’s lattice sites. However, neglecting almost all interaction terms and overlap integrals (atomic states at different lattice sites need not be orthogonal to each other) may be too drastic an approximation. Thus Section 2 is devoted to a thorough discussion of the issue.
Instead, the framework of Section 3 is that of second quantization. Again, our view of the crystal is that of tight-binding (atoms do not lose their identities). Here we are indeed concerned with Pauli’s principle. But we deal with it in the style of quantum field theory, by allowing at most one electron of each spin projection per atom. For an electron to move (“hop”) one lattice site, it must be annihilated at its former host atom and created in its nearest neighbor one. The purpose of this section is to illustrate an efficient Monte Carlo scheme that implements this strategy to find the ground state of many-electron systems. Recognizing that electrostatic (Coulomb) interaction between electrons is not a weak effect but is simply overwhelmed by Pauli’s principle, a popular model of itinerant magnetism (the Hubbard model) adds to its Hamiltonian a repulsion term whenever an atom hosts two (opposite spin projection) electrons.
Section 4 explores the boundaries of the concept of solid. Perhaps, it should be regarded as a metaphor of this concept. We illustrate a non-equilibrium spatiotemporal pattern formation process, akin to resonant crystal structures, in arrays of FitzHugh-Nagumo cells.
2. Band spectra in the tight-binding approach: effects of the overlaps between neighboring orbitals
2.1 Quantum mechanics in a nutshell
For the benefit of those readers who are unfamiliar with the standard formalism of quantum mechanics, we review its main facts:
• Dynamical states are vectors: one can account for the wavelike behavior of quantum objects (e.g., diffraction of single electrons by two slits) by letting their dynamical state $|\psi\rangle$ belong to a vector space over the complex numbers. In a few problems (e.g., addition of angular momenta), this vector space is finite-dimensional. But most problems entail infinite sequences (e.g., the energy spectrum of the hydrogen atom) or even a continuum of values (e.g., in the measurement of positions and momenta), so the notion of dimension is replaced by that of completeness (any state can be expanded in suitable "bases"). By assigning a complex number $\langle\varphi|\psi\rangle$ (their "inner product") to every pair of dynamical states $|\psi\rangle, |\varphi\rangle$, the complete vector space is made into a Hilbert space.
• Probabilistic interpretation: if $|\psi\rangle = \sum_I \alpha_I |\psi_I\rangle$ (be aware that the index set $I$ may be infinite or may even be a patch of $\mathbb{R}^d$), then $|\alpha_I|^2$ yields the probability to find an outcome represented by $|\psi_I\rangle$ when the system is in state $|\psi\rangle$. This obviously requires normalization: $\langle\psi|\psi\rangle = 1$.
• Dynamical magnitudes are linear operators $L$, which take a vector into another vector. For instance, the operator $P_\varphi \equiv |\varphi\rangle\langle\psi|$ projects state $|\psi\rangle$ onto $|\varphi\rangle$. Measuring a dynamical magnitude thus means finding one of its eigenvalues, $L|l_I\rangle = l_I |l_I\rangle$. Also of interest is the mean (or expectation) value $\langle\psi|L|\psi\rangle$ of $L$ in a generic state $|\psi\rangle$. Correspondence with classical physics imposes that those eigenvalues be real, and thus dynamical magnitudes must be self-adjoint (Hermitian) operators ($P_\varphi$ is thus not a dynamical magnitude).
• Unitary evolution: in order to conserve the probabilistic interpretation, the dynamic evolution of the state is accomplished by a unitary operator. Again, correspondence with classical physics (already implicit in Schrödinger's equation) forces this operator to be $\exp(-iHt/\hbar)$.
• Wave function: a possible "basis" set is that of eigenstates ($X|x\rangle = x|x\rangle$) of the position operator, namely, $|\psi\rangle = \int dx\,\psi(x)\,|x\rangle$. The wave function $\psi(x)$ plays here the role of the coefficients $\alpha_I$. In modern notation, $\psi(x)$ is written as $\langle x|\psi\rangle$, so one writes $|\psi\rangle = \int dx\,|x\rangle\langle x|\psi\rangle$.
• Orthogonality ($\langle\varphi|\psi\rangle = 0$): a given eigenvalue $l_1$ of a Hermitian operator $L$ may have a single eigenstate $|l_1\rangle$ (the normalized one out of a dimension-1 subspace over the complex numbers) or more (in this case, it is said to be degenerate). Eigenstates corresponding to different eigenvalues are automatically orthogonal.
2.2 Naive tight-binding approach to band theory
As argued in Section 1, the starting point of this approach is to express the electron's state vector as a linear combination of atomic orbitals (LCAO) located at the crystal's lattice sites (we illustrate the procedure in 1D, but clearly, it can be extended to any dimension and lattice symmetry). The eigenvalue problem of the isolated atom centered at $x_c$ is $H_{\rm atom}|\psi_{\rm atom}\rangle = E_{\rm atom}|\psi_{\rm atom}\rangle$, with $H_{\rm atom} = T + V(x - x_c)$. We then place a copy $|i\rangle$ of $|\psi_{\rm atom}\rangle$ centered at each lattice site $i$ ($x_i = ia$) and write the electron's state in the crystal as the LCAO
$|\psi_{\rm crystal}\rangle = \sum_i c_i\,|i\rangle$ (E1)
(clearly, $c_i = \langle i|\psi_{\rm crystal}\rangle$). Now, even though the interatomic distance in the crystal (the "lattice spacing" $a$) is usually larger than the range $x_0$ of the atomic orbitals, the atomic cores do interact, and one should include at least two effects:
• A correction to the isolated atomic level $E_{\rm atom}$ (we shall call $\alpha$ the corrected level)
• Electron tunneling between neighboring orbitals (let $\gamma$ be a gauge of the energy involved in such a "hopping" process)
It thus makes sense to write up the lattice Hamiltonian in terms of projection operators as
$H_{\rm crystal} = \alpha \sum_i |i\rangle\langle i| - \gamma \sum_i \left(|i+1\rangle\langle i| + |i\rangle\langle i+1|\right).$ (E2)
1. The presence of $|i\rangle\langle i+1|$, the adjoint of $|i+1\rangle\langle i|$, ensures that $H_{\rm crystal}$ is Hermitian.
2. The minus sign in the second term ensures crystal stability (energy is released by forming a crystal).
Using Eqs. (1) and (2), the eigenvalue problem $H_{\rm crystal}|\psi_{\rm crystal}\rangle = E_{\rm crystal}|\psi_{\rm crystal}\rangle$ for the electron in the crystal reads
$\left[\alpha \sum_i |i\rangle\langle i| - \gamma \sum_i \left(|i+1\rangle\langle i| + |i\rangle\langle i+1|\right)\right]\sum_j c_j |j\rangle = E_{\rm crystal}\sum_j c_j |j\rangle.$ (E3)
Assuming the states $|i\rangle$ to be orthogonal to each other, the left-hand side of Eq. (3) reads $\alpha\sum_i c_i|i\rangle - \gamma\sum_i \left(c_i|i+1\rangle + c_{i+1}|i\rangle\right)$. If the number of sites in the crystal is large enough (usually it is $\sim 10^6$), one can greatly simplify the problem by assuming periodic boundary conditions (PBC). This allows one to rearrange the sums (their indices become dummy), and Eq. (3) reads $\sum_i\left[(E_{\rm crystal}-\alpha)\,c_i + \gamma\,(c_{i-1}+c_{i+1})\right]|i\rangle = 0$. Clearly, the LCAO assumes that the $|i\rangle$ are linearly independent (be they orthogonal or not), so we are left with the system of difference equations:
$(E_{\rm crystal}-\alpha)\,c_i + \gamma\,(c_{i-1}+c_{i+1}) = 0,\qquad i = 1,\dots,N_0.$ (E4)
Again invoking PBC, one tries the form $c_j = \exp(ijka)$ with $-\pi < ka \le \pi$ (Bloch phase factors) and obtains the known cosine spectrum
$E_{\rm crystal} = \alpha - 2\gamma\cos ka,\qquad -\pi < ka \le \pi.$ (E5)
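The cosine spectrum can be checked directly: in the orthogonal-orbital approximation, the Hamiltonian of Eq. (2) is an $N\times N$ matrix with $\alpha$ on the diagonal and $-\gamma$ on the off-diagonals (plus corner elements for PBC). A numpy sketch, with arbitrary values for $\alpha$, $\gamma$, and $N$:

```python
import numpy as np

N, alpha, gamma = 64, -5.0, 1.0   # arbitrary illustrative values

# H_crystal of Eq. (2) in the site basis, assuming orthogonal orbitals and PBC
H = alpha * np.eye(N)
for i in range(N):
    H[i, (i + 1) % N] = -gamma
    H[(i + 1) % N, i] = -gamma

numeric = np.sort(np.linalg.eigvalsh(H))

# Eq. (5), at the ka values allowed by PBC: ka = 2*pi*m/N
ka = 2 * np.pi * np.arange(N) / N
analytic = np.sort(alpha - 2 * gamma * np.cos(ka))

print(np.max(np.abs(numeric - analytic)))   # zero up to round-off
```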
What has been left behind? Much indeed:
• We know that $\alpha$ equals $E_{\rm atom}$ plus some correction, but we do not know what the correction is.
• Similarly, we know that $\gamma$ is a matrix element of the effective potential $W_i \equiv \sum_{j\neq i} V(x-x_j)$ felt by an electron at $x \approx ia$, due to the presence of the other atoms. We have kept just the terms connecting $j = i \pm 1$, but even in this approximation, we do not know its value.
• To what extent can one assume the states $|i\rangle$ to be orthogonal to each other? This assumption is correct in the absence of interatomic interaction, but not necessarily when atoms interact.
2.3 Tight-binding band calculation: properly done
Recognizing that $H_{\rm crystal} = H_i^{\rm atom} + W_i$ (for any $i$) and using Eq. (1), $E_{\rm crystal}$ turns out to be [1, 2]
$E_{\rm crystal} = E_{\rm atom} + \dfrac{\sum_i \alpha_i\,|c_i|^2 + \sum_{i\neq j}\gamma_{ij}\, c_i^* c_j}{\sum_i |c_i|^2 + \sum_{i\neq j} S_{ij}\, c_i^* c_j},$ (E6)
where
$\alpha_i \equiv H_{ii} = \langle i|W_i|i\rangle,\qquad \gamma_{ij} \equiv H_{ij} = \langle i|W_i|j\rangle,\qquad S_{ij} \equiv \langle i|j\rangle,\quad j \neq i.$ (E7)
The contribution of the $S_{ij}$ (known as overlap integrals) to the band spectrum is our main concern in this section. But no less interesting are that of the $\alpha_i$ terms—which, as argued, shift the electronic energy in an atom from its isolated value $E_{\rm atom}$, as a collective effect of the other atoms—and that of the $\gamma_{ij}$. The latter can be regarded as the sum of two contributions, as $V_j \equiv V(x-x_j)$ can be singled out from $W_i$. Then whereas the two-center integrals $\gamma_{ij}^{(2)} \equiv \langle i|V_j|j\rangle$ involve only sites $i$ and $j$, the three-center integrals $\gamma_{ij}^{(3)}$ also involve the sum $\sum_{l\neq i,j} V(x-x_l)$ of the potentials of the remaining atoms in the solid. Hence, the $\gamma_{ij}^{(3)}$ can be interpreted as the collective effect on the overlap between orbitals $|i\rangle$ and $|j\rangle$.
Variation of Eq. (6) with respect to the LCAO coefficients of Eq. (1)—namely, $\partial E_{\rm crystal}/\partial c_j^* = 0$—yields $\sum_j \left(H_{ij} - E_{\rm crystal}\,S_{ij}\right) c_j = 0$, $\forall i$. Assuming PBC, $H_{ij}$ and $S_{ij}$ are functions only of the interatomic distance $na$, with $n = |i-j|$. Again using $c_n = \exp(inka)$ with $-\pi < ka \le \pi$, Eq. (6) yields
$E_{\rm crystal} = E_{\rm atom} + \dfrac{\alpha + 2\sum_{n\ge1} H_n \cos nka}{1 + 2\sum_{n\ge1} S_n \cos nka}.$ (E8)
Note however that the number of multicenter integrals to be computed is immense! Because of that, most tight-binding calculations plainly ignore almost all the multicenter integrals (keeping only those involving nearest neighbors) and neglect orbital non-orthogonality. This way, the familiar cosine spectrum is obtained. Often, multicenter integrals are just regarded as parameters to fit the results of more sophisticated calculations made by other methods at the highest symmetry points of the Brillouin zone.
In the following, we compute all the multicenter integrals exactly in the framework of a simple model for the atomic potential. The results help get an intuition on the effect on band spectrum of neglecting overlap integrals and distant-neighbor interactions.
2.4 A simple model that yields an exact tight-binding band spectrum
We restrict ourselves to a 1D monoatomic crystal and assume the interatomic distance $a$ to be larger than the effective range of the screened Coulomb potential representing the atomic core. In such a situation, we can approximate the latter by a Dirac $\delta$-function (complete screening up to the scale of the nucleus):
$V_{\rm crystal}(x) = -V_0 \sum_n \delta(x - na).$ (E9)
The solution to $H_{\rm atom}\psi_{\rm atom} = E_{\rm atom}\psi_{\rm atom}$, with $H_{\rm atom} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} - V_0\,\delta(x)$, is an exponential function of the form $\psi_{\rm atom}(x) = \langle x|\psi_{\rm atom}\rangle = x_0^{-1/2}\exp(-|x|/x_0)$. Its range $x_0$ is related to $E_{\rm atom}$ by $E_{\rm atom} = -\hbar^2/(2mx_0^2) = -mV_0^2/(2\hbar^2)$.
The only two spatial scales involved in this problem are $x_0$ and the lattice spacing $a$. The parameter $t \equiv a/x_0$ will thus allow us to follow the formation of energy bands ($k$-space picture) as atoms get close together (real-space picture). All the multicenter integrals can be computed analytically in terms of $t$. The results are $S_n = (1+nt)\,e^{-nt}$, $\alpha = 2E_{\rm atom}\,e^{-t}/\sinh t$, and $\gamma_n = 2E_{\rm atom}\left(n + e^{-t}/\sinh t\right)e^{-nt}$ [3]. We thus get the following closed expression for $\lambda \equiv (E_{\rm crystal} - E_{\rm atom})/|E_{\rm atom}|$:
$\lambda(k,t) = \dfrac{A_0(t) + A_1(t)\cos ka}{1 + S(t)\cos ka},\qquad -\pi < ka \le \pi,$ (E10)
where $A_0 = e^{-t}\sinh t/(\sinh t\cosh t - t)$, $A_1 = \sinh t/(\sinh t\cosh t - t)$, and $S = (t\cosh t - \sinh t)/(\sinh t\cosh t - t)$ [3].
Explicit evaluation of Eq. (10) at the bottom ($ka = 0$) and top ($ka = \pi$) of the band shows that for $t < 4$, the cosine spectrum of Eq. (5) underestimates both. Moreover, the multicenter integrals neglected in the cosine spectrum shift the top and bottom of the exact spectrum unevenly. Hence, the approximation performs worse for the top than for the bottom of the band.
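The overlap integrals $S_n$ quoted above can be verified by direct quadrature of the exponential orbitals (a numpy sketch; we use units with $x_0 = 1$, so the lattice spacing is $a = t$, and the value of $t$ is an arbitrary choice):

```python
import numpy as np

def overlap(n, t):
    """S_n = ∫ psi(x) psi(x - n a) dx for psi(x) = exp(-|x|) (x0 = 1, a = t)."""
    x = np.linspace(-40.0, 40.0 + n * t, 800_001)
    y = np.exp(-np.abs(x)) * np.exp(-np.abs(x - n * t))
    # trapezoid rule on a uniform grid
    return (x[1] - x[0]) * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

t = 1.7   # arbitrary value of a / x0
for n in (1, 2, 3):
    print(overlap(n, t), (1 + n * t) * np.exp(-n * t))   # numeric vs closed form
```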
3. Quantum Monte Carlo method for systems with strongly correlated fermions
3.1 Quantum statistical mechanics in a nutshell
The state vectors dealt with in Section 1 represent pure states. They are the ones which display the spectacular effects seen in recent experiments. Since in this section we will allow creation/annihilation of electron states, we must work in the framework of the grand canonical ensemble.3 When one deals with statistical ensembles of quantum states, the object of interest is the Hermitian operator $\exp(-\beta H)$, called the density matrix operator (here $\beta \equiv (k_B T)^{-1}$, and $k_B = R/N_A$ is Boltzmann's constant).
What drives our interest in the density matrix—namely, the matrix elements between pure states of $\exp(-\beta H)$—is the fact that it can be used to find the ground state of many-body systems by stochastic methods. For $\beta$ large enough, $\exp(-\beta H)$ acts effectively as a projector over the lowest-lying energy eigenstate to which the initial (trial) state $|\phi\rangle$ is not definitely orthogonal. Let $E$ be the corresponding eigenvalue, and consider another trial state $|\chi\rangle$ over which we will project the result. Then we may numerically compute $E$ from
$\exp(-\beta' E) = \lim_{\beta\to\infty} \dfrac{\langle\chi|\exp[-(\beta+\beta')H]|\phi\rangle}{\langle\chi|\exp(-\beta H)|\phi\rangle}.$ (E11)
But what is yet more interesting is that in the process, we find a good estimate of the eigenstate itself, namely, its composition in terms of a known basis.
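Eq. (11) can be illustrated on a toy $3\times3$ "Hamiltonian" (a sketch; the matrix, the trial states, and the $\beta$ values are arbitrary choices, and a spectral decomposition stands in for $e^{-\beta H}$):

```python
import numpy as np

H = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])   # toy "Hamiltonian": eigenvalues 2-sqrt(2), 2, 2+sqrt(2)

evals, evecs = np.linalg.eigh(H)

def expmH(beta):
    """e^{-beta H}, built from the spectral decomposition (the density-matrix operator)."""
    return evecs @ np.diag(np.exp(-beta * evals)) @ evecs.T

phi = np.array([1., 1., 1.])   # trial state (not orthogonal to the ground state)
chi = np.array([1., 0., 1.])   # state we project onto

beta, dbeta = 30.0, 1.0
# Eq. (11): exp(-dbeta*E) = <chi|e^{-(beta+dbeta)H}|phi> / <chi|e^{-beta H}|phi>
E0_est = -np.log((chi @ expmH(beta + dbeta) @ phi) /
                 (chi @ expmH(beta) @ phi)) / dbeta
print(E0_est, 2 - np.sqrt(2))
```

For this matrix the estimate already agrees with the exact ground-state energy $2-\sqrt{2}$ at $\beta = 30$, because the contributions of the excited states are suppressed exponentially in $\beta$ times the gap.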
3.2 Monte Carlo pursuit of the ground state
The first step in this computation is to divide the interval $[0,\beta]$ into $L$ "time" slices of width $\tau = \beta/L$. Some comments are in order:
1. We take our language from the formal analogy between the density matrix and the evolution operators.
2. Note that in our case, $\exp(-\beta H)$ is not meant to be traced over, as it would be in a thermodynamic calculation: here it must rather be considered as a formal tool to make sense of the limit $\beta\to\infty$.
3. We may call $U = \exp(-\tau H)$ the transfer matrix operator.
If we can decompose $H$ into a sum of several terms $H_i$ which (although not commuting among them) are themselves sums of commuting terms, then for $L$ large enough, the error of approximating
$U = e^{-\tau(H_1+H_2)} = e^{-\tau H_1}\, e^{-\tau H_2}\, e^{-\frac{\tau^2}{2}[H_1,H_2]+\cdots} = U_1 U_2\left(1 - \tfrac{\tau^2}{2}[H_1,H_2] + \cdots\right) \approx U_1 U_2$
would be at most of order $\tau^2$. Hence
$\langle\chi|\exp(-\beta H)|\phi\rangle \approx \langle\chi|(U_1 U_2)^L|\phi\rangle.$ (E12)
In order to evaluate expression (12), we introduce complete sets of states at each time slice.
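The $O(\tau^2)$ accuracy of the slicing can be checked on a pair of non-commuting $2\times2$ matrices (a sketch; the Pauli-like $H_1$, $H_2$ are arbitrary stand-ins for the Hamiltonian pieces):

```python
import numpy as np

# Two non-commuting pieces of a toy Hamiltonian H = H1 + H2
H1 = np.array([[0., 1.], [1., 0.]])    # sigma_x
H2 = np.array([[1., 0.], [0., -1.]])   # sigma_z

def expm_sym(M, s):
    """e^{s M} for a symmetric matrix M, via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(s * w)) @ V.T

def slicing_error(tau):
    exact = expm_sym(H1 + H2, -tau)
    split = expm_sym(H1, -tau) @ expm_sym(H2, -tau)
    return np.max(np.abs(exact - split))

# Halving tau should roughly quarter the error: evidence that it is O(tau^2)
e1, e2 = slicing_error(0.1), slicing_error(0.05)
print(e1, e2, e1 / e2)
```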
The clue to quantum Monte Carlo simulation of Eq. (11) resides in evaluating the sums over complete states by importance sampling: in order to do that, observe first that we can rather arbitrarily decompose
$\langle\psi_j|U_1 U_2|\psi_i\rangle = S_{ij}\, P_{ij}$ (E13)
as the product of a probability times a (complex) number which we will call a "score." The probability distribution $P_{ij}$ is at our disposal in order to optimize numerical convergence, minimize statistical error, etc. It can be shown that the way to achieve the last goal is by assigning to every matrix element the same score: that is the basis for the so-called population method. Here the initial (trial) state is represented by a "population" in which there are $n_i$ copies of state $|\psi_i\rangle$. The latter corresponds to a definite assignment of occupation numbers both in coordinate and spin (always belonging to the Hilbert space of the problem, i.e., compatible with the conserved quantum numbers). To each individual in the population, we apply the evolution operator, thus obtaining a new state after one time slice. That particular matrix element can be decomposed as indicated in Eq. (13) (but now with $S_{ij} = S = \text{const}$). The way in which we implement the $P_{ij}$ is by making as many copies of that particular resulting state as indicated by $\langle\psi_j|U_1 U_2|\psi_i\rangle / S$. Proceeding this way, we will get a different population after each time slice, which we expect to approach successively one representing the lowest reachable energy eigenstate.
We have not said anything about the way in which we evaluate the alluded matrix elements, besides the fact that we resort to the decomposition (13): if, as we have already assumed, the term $H_i$ can itself be decomposed into mutually commuting terms, we need only focus on the Hilbert space of that (much smaller) system. We can compute exactly the matrix elements of the evolution operator for that cluster, write them as the product of a probability times a score (now we can choose the probability distribution to minimize total computing time), and make transitions among cluster states according to those probabilities, then assigning the corresponding score to the particular transition.
3.3 The case of fermions
Again within the tight-binding approach to crystalline solids, quantum creation ($c^\dagger_{i\sigma}$) and annihilation ($c_{i\sigma}$) operators determine the existence of electrons with spin projection $\sigma$ at site $i$. For the state vector of the whole set of electrons in the crystal to be totally antisymmetric under exchange, those operators must anticommute with each other, unless they refer to the same site and spin projection. In such a case, there can be at most one electron per site and spin projection, as required by Pauli's principle.
In the case of the 1D Hubbard model, we chose the following decomposition of the Hamiltonian, which allows us to consider clusters of only two sites:
$H_1 = -t\sum_{\text{odd }j}\sum_\sigma\left(c^\dagger_{j+1\,\sigma} c_{j\sigma} + \text{h.c.}\right) \equiv \sum_{\text{odd }j} h_{j,j+1},$
$H_2 = -t\sum_{\text{even }j}\sum_\sigma\left(c^\dagger_{j+1\,\sigma} c_{j\sigma} + \text{h.c.}\right) \equiv \sum_{\text{even }j} h_{j,j+1},$ (E14)
$H_3 = U\sum_{\text{all }j} n_{j\uparrow}\, n_{j\downarrow}.$
The corresponding matrix elements are then $\langle\psi_{i+1}|U_3 U_2 U_1|\psi_i\rangle$ with $U_1 = \prod_{\text{odd }j}\exp(-\tau h_{j,j+1})$, $U_2 = \prod_{\text{even }j}\exp(-\tau h_{j,j+1})$, $U_3 = \prod_j \exp(-\tau U\, n_{j\uparrow} n_{j\downarrow})$, and
$\langle01|e^{-\tau h_{j,j+1}}|01\rangle = \langle10|e^{-\tau h_{j,j+1}}|10\rangle = \cosh\tau t,$
$\langle10|e^{-\tau h_{j,j+1}}|01\rangle = \langle01|e^{-\tau h_{j,j+1}}|10\rangle = \sinh\tau t,$ (E15)
$\langle00|e^{-\tau h_{j,j+1}}|00\rangle = \langle11|e^{-\tau h_{j,j+1}}|11\rangle = 1,$
from which we write up the (a priori) transition probabilities. Then, in case there is only one occupied site in the block, we draw a random number $r$ and compare it with the a priori transition probability $p$ for the state to remain the same. If $r > p$, we make a hopping, i.e., exchange the empty and occupied states in the block.
The a priori probabilities can be better chosen if we take into account the occupation of those same two sites by electrons with the other spin projection, thus anticipating the fact that they will penalize doubly occupied sites [4, 5]. This will certainly improve convergence.
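The cluster matrix elements of Eq. (15) can be checked by exponentiating the two-site hopping block in the singly occupied sector (a numpy sketch; the basis ordering $\{|01\rangle, |10\rangle\}$ and the value $t=1$ are assumptions of the illustration):

```python
import numpy as np

tau, t_hop = 0.3, 1.0
# Two-site hopping block h = -t (|10><01| + |01><10|) in the basis {|01>, |10>}
h = -t_hop * np.array([[0., 1.],
                       [1., 0.]])

w, V = np.linalg.eigh(h)
U = V @ np.diag(np.exp(-tau * w)) @ V.T   # e^{-tau h} for the cluster

# Eq. (15): diagonal elements cosh(tau t), off-diagonal elements sinh(tau t)
expected = np.array([[np.cosh(tau * t_hop), np.sinh(tau * t_hop)],
                     [np.sinh(tau * t_hop), np.cosh(tau * t_hop)]])
print(np.allclose(U, expected))
```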
4. Non-equilibrium routes to soft solids
Up to now, we have dealt with crystalline solids. This means that, disregarding the topology 4 of the interaction network, we paid attention to the underlying geometry of the quantum problem. At present, a host of synthetic materials has outperformed metals at their initial tasks. Some of them still display a varying degree of crystalline character, but others are not crystalline at all. Vulcanized rubbers are an example: created by forcing random chemical bonds in a melt (a "spaghetti dish"), they are inhibited from flowing and are thus amorphous solids.5 But they exhibit a varying degree of viscoelastic behavior. In the last decades, the vast discipline of soft condensed matter has been incorporated into mainstream research in solid-state physics, on an equal footing with crystalline solids. The scope of soft condensed matter is very wide. In particular, it considers many non-equilibrium routes to self-assembled emergent structures. Of huge interest is the neocortex (not just because understanding the brain's behavior is one of the "Holy Grails" of science, but because in doing so we may master a computational strategy which is far more efficient than the present one).
We devote this section to non-equilibrium routes to spatiotemporal patterns in an assembly of model "neurons" which keep their essential trait, namely, excitability. Admittedly, the interaction network here has the topology of a lattice, but it is not the underlying geometry that is at stake. What does matter is that the boundary condition be compatible with the interaction, a fact that contributes to the network's topology.
4.1 The non-equilibrium potential (NEP)
It is often hard to tell to what extent an innovation embodies a paradigm shift, given the high diversity (both in scope and extent) of innovations. The formalism of quantum mechanics can be regarded as such—with respect to the Newtonian paradigm—despite the strict correspondence between the commutator and Poisson-bracket Lie algebras. Einstein’s three papers in his “annus mirabilis” can also be considered as such, for they demolished our former conceptions of time, of the nature of particles and waves, and of a clockwork universe. In 1908, Paul Langevin supplemented the Newtonian paradigm by letting the forces be of stochastic nature [6]. It is up to your taste whether to call this innovation a paradigm shift: it definitely abolished our clockwork-universe conception and opened up a new chapter in the theory of differential equations. The resulting paradigm is well suited to the current situation, urged by the challenges of nanoscience (where the “systems” are submitted to strong ambient fluctuations) and favored by the increasing parallelism of computational architectures (the simulation schemes are essentially local).
The modern approach to continuous-time dynamic flows is of first order.6 Given an initial state $x_i$ of a continuous-time, dissipative, autonomous dynamic flow $\dot{x} = f(x)$, its conditional probability density function (PDF) $P(x,t|x_i,0)$ when submitted to a (Gaussian, centered) white noise $\xi(t)$ with variance $\gamma$, namely,

$$\dot{x} = f(x) + \xi(t), \quad \text{with } \langle\xi(t)\rangle = 0 \text{ and } \langle\xi(t)\,\xi(t')\rangle = 2\gamma\,\delta(t-t'), \tag{E16}$$
obeys the Fokker-Planck equation (FPE):
$$\partial_t P(x,t|x_0) + \partial_x J(x,t|x_0) = 0, \quad \text{with } J(x,t|x_0) = D^{(1)}P(x,t|x_0) - \partial_x\!\left[D^{(2)}P(x,t|x_0)\right], \tag{E17}$$
in terms of the “drift” $D^{(1)} = f(x)$ and “diffusion” $D^{(2)} = \gamma$ Kramers–Moyal coefficients. Being the flow autonomous and dissipative, one can generically expect situations of statistical energy balance in which the PDF becomes stationary, $\partial_t P_{st}(x) = 0$, and thus independent of the initial state. Then, by defining the non-equilibrium potential $\Phi(x) \equiv -\int_{x_0}^{x} f(y)\,dy$, it is immediate to find

$$P_{st}(x) = N(x_0)\,\exp\!\left[-\Phi(x)/\gamma\right]. \tag{E18}$$
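The stationary-PDF statement is easy to check numerically. The sketch below (the linear drift f(x) = −x is our own illustrative choice, not taken from the text) integrates the Langevin equation by the standard Euler–Maruyama rule; for this drift, Φ(x) = x²/2, so the stationary PDF is Gaussian with variance γ.

```python
import math, random

def euler_maruyama(f, x0, gamma, dt, n_steps, rng):
    """Integrate dx/dt = f(x) + xi(t), with <xi(t) xi(t')> = 2*gamma*delta(t-t')."""
    x, path = x0, []
    sigma = math.sqrt(2.0 * gamma * dt)   # scale of one Gaussian noise increment
    for _ in range(n_steps):
        x += f(x) * dt + sigma * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

rng = random.Random(1)
gamma = 0.5
path = euler_maruyama(lambda x: -x, 0.0, gamma, 1e-2, 200_000, rng)
tail = path[20_000:]                      # discard the transient (burn-in)
var = sum(x * x for x in tail) / len(tail)
# For f(x) = -x, Phi(x) = x**2/2 and P_st ~ exp(-x**2 / (2*gamma)):
# the empirical stationary variance should be close to gamma = 0.5.
```

Up to discretization and sampling error, the histogram of the tail reproduces exp[−Φ(x)/γ], which is exactly the content of Eq. (E18).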
For $n$-component dynamic flows, $\Phi(\mathbf{x})$ is defined as $-\lim_{\gamma\to0}\gamma\ln P_{st}(\mathbf{x};\gamma)$ [7], but finding it ceases to be a straightforward matter.7 The purpose of this section is to illustrate its usefulness when known. It is a Lyapunov function for the deterministic dynamics, and the barriers for activated processes can be straightforwardly computed from it.
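The Lyapunov property is worth a one-minute numerical check. In one dimension, Φ(x) = −∫₀ˣ f(y) dy, so dΦ/dt = −f(x)² ≤ 0 along the noiseless flow. The sketch below uses the double-well drift f(x) = x − x³ (our own illustrative choice):

```python
def phi(x):
    """1-D NEP for f(x) = x - x**3: Phi(x) = -(x**2/2 - x**4/4)."""
    return -(x * x / 2.0 - x ** 4 / 4.0)

# Integrate the noiseless flow dx/dt = f(x) and record Phi along the way.
x, dt, history = 0.1, 0.01, []
for _ in range(5_000):
    history.append(phi(x))
    x += (x - x ** 3) * dt

# Phi is non-increasing along deterministic trajectories (Lyapunov property),
# and the flow settles at a minimum of Phi, here x = 1.
```

The recorded values of Φ decrease monotonically (up to rounding) and the trajectory converges to the attractor at the bottom of the well, exactly the behavior the NEP formalism predicts.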
4.2 The FitzHugh-Nagumo model and its NEP
Neurons communicate with each other through “action potentials,” which are pulsed variations in the polarization of their membranes. The celebrated Hodgkin-Huxley model of neural physiology was one of the great scientific achievements of the past century. When the goal is insight, however, it is too cumbersome a model to work with. A caricature that nonetheless retains its essence is thus far more desirable in many situations. The FitzHugh-Nagumo (FHN) model is the minimal model capable of producing action potentials, and the key to this behavior is excitability. In its minimal expression, the FHN model reads
$$\dot{u} = f(u) - v,$$
$$\dot{v} = \epsilon\,(\beta u - v). \tag{E19}$$
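Excitability is easy to exhibit by direct integration. The sketch below is an illustration with choices of our own (not from the text): a cubic nullcline f(u) = u(1 − u)(u − a) and a small ϵ so that v is much slower than u. A sub-threshold perturbation of the rest state decays, while a supra-threshold one triggers a full excursion — an action potential.

```python
def fhn_trajectory(u0, a=0.25, beta=1.0, eps=0.01, dt=0.01, n_steps=20_000):
    """Euler-integrate one excitable FHN cell:
        du/dt = f(u) - v,   dv/dt = eps*(beta*u - v),
    with the cubic f(u) = u*(1 - u)*(u - a) (an illustrative choice).
    Returns the list of u values along the trajectory."""
    u, v = u0, 0.0
    us = []
    for _ in range(n_steps):
        f = u * (1.0 - u) * (u - a)
        u, v = u + (f - v) * dt, v + eps * (beta * u - v) * dt
        us.append(u)
    return us

# Sub-threshold perturbation (u0 < a): the activator simply decays back.
small = max(fhn_trajectory(0.1))
# Supra-threshold perturbation (u0 > a): a full excursion toward u ~ 1.
spike = max(fhn_trajectory(0.5))
```

The threshold role of the middle root a is what makes the dynamics excitable rather than merely bistable or oscillatory.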
The activator field u relaxes very fast and displays autocatalytic dynamics (the more there is, the more it produces, but in a nonlinear fashion), as needed to produce an action potential. Its nullcline $v = f(u)$ (the locus of $\dot{u} = 0$) is a decreasing S-shaped (typically cubic) curve. On the other hand, the inhibitor or recovery field v relaxes very slowly (it mimics the time-dependent conductance of the K+ channels in the axon membrane), so in the end, it enslaves the dynamics. Parameter ϵ accounts for the large difference in relaxation rates between the two fields. Calling $\lambda_1$ and $\lambda_2$ the eigenvalues of the diffusion tensor, the NEP for the autonomous system described by Eq. (19) is [8]
$$\Phi(u,v) = \lambda_2^{-1}(\beta u - v)^2 + \lambda_1\left[\epsilon^{-1}\beta u^2 - 2\int_0^u f(x)\,dx\right]\!\Big/2. \tag{E20}$$
For nonautonomous cases, one can draw consequences from Eq. (20) as long as the driving is much slower than the relaxation times involved (adiabatic approximation). In the following, we exploit this advantage.
4.3 Arrays of excitable elements
The result (20) has been employed [9, 10, 11, 12, 13] to find the optimal noise variance γ for arrays of excitable elements to display stochastic-resonant synchronized behavior (see Nomenclature). Here, we briefly illustrate one such case, where the coupling is inhibitory (when neuron i fires, neurons i ± 1 are less likely to fire) [14]. Inhibitory coupling is central to the dynamics of neocortical pyramidal neurons and cortical networks and plays a major role in synchronous neural firing. On the other hand, inhibitory interneurons are more prone to couple through gap junctions (diffusive or “electric” coupling) than excitatory ones. In the transition from wakefulness to anesthetic coma, for instance, diffusive coupling of the inhibitor fields helps explain the spontaneous emergence of low-frequency oscillations with spatially and temporally chaotic dynamics.
We consider a ring of N identical excitable FHN cells, with their inhibitor fields electrically coupled to those of their nearest neighbors. The system is moreover submitted to a common subthreshold (see Nomenclature) harmonic signal S t and independent additive Gaussian white noises in each component and each site, all with the same variance γ .
Numerical simulation of this stochastic system with increasing γ—for appropriate values of the diffusive coupling E between neighboring inhibitor fields—reveals the noise-induced phenomena taking place: synchronization of the ring’s activity with the external signal, and (imperfect) spatiotemporal self-organization of the cells. For an optimal value of γ, a stochastic resonance phenomenon takes place, and the degree of spatiotemporal self-organization—alternation between two antiphase states (APS)—is maximum.
For very low γ, only small-amplitude and highly homogeneous [$u_i(t) \approx u_j(t)$] subthreshold oscillations (induced by the adiabatic signal) occur around the S = 0 rest state. As γ increases, so does the number of cells that become noise-activated during roughly half a cycle of the external signal. For even higher γ, the cells’ activity becomes more coherent with the external signal as a consequence of its coupling-mediated self-organization: as one neuron activates, it usually inhibits its nearest neighbors. The outcome of this phenomenon is the APS, which partially arises along the ring during the stage of activation by noise. In this scenario, noise (together with coupling and signal) plays a constructive role. Nonetheless, for γ too large, the synchronization eventually degrades.
4.4 Spatiotemporal pattern formation in arrays of FHN neurons
We exploit the knowledge of the NEP in Eq. (20) to attempt an analytical description of the problem in Section 4.3. The case of perfect spatiotemporal self-organization would be equivalent to a two-neuron system with variables $u_1$, $u_2$, $v_1$, and $v_2$, and periodic boundary conditions (PBC). This simple model allows the formation of an antiphase state. Since a NEP cannot be easily found for this system—and with the only purpose of calculating barrier heights—we further reduce the description by projecting the dynamics onto the corresponding slow manifolds:
$$\epsilon\,(\beta u_{1,2} - v_{1,2}) + 2E\,(v_{2,1} - v_{1,2}) = 0. \tag{E21}$$
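Taken together, the two conditions in Eq. (E21) form a 2×2 linear system for the inhibitor fields given the activators. A minimal sketch (the parameter values are chosen only for illustration) solves it by Cramer’s rule:

```python
def slow_manifold_v(u1, u2, eps, beta, E):
    """Solve eps*(beta*u_i - v_i) + 2*E*(v_j - v_i) = 0 for (v1, v2),
    i.e. the 2x2 linear system
        (eps + 2E) v1 - 2E v2 = eps*beta*u1
       -2E v1 + (eps + 2E) v2 = eps*beta*u2
    by Cramer's rule (the matrix is symmetric)."""
    a, b = eps + 2.0 * E, -2.0 * E
    det = a * a - b * b                   # determinant of [[a, b], [b, a]]
    r1, r2 = eps * beta * u1, eps * beta * u2
    v1 = (a * r1 - b * r2) / det
    v2 = (a * r2 - b * r1) / det
    return v1, v2

# Symmetric input: both cells see the same u, so v1 = v2 = beta*u,
# exactly as if the coupling were absent.
v1, v2 = slow_manifold_v(0.4, 0.4, eps=0.01, beta=1.0, E=0.5)
```

The symmetric limit (v₁ = v₂ = βu when u₁ = u₂) is the expected sanity check: the diffusive coupling drops out of a uniform state.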
The projected two-variable system turns out to be gradient, a situation in which a NEP can always be found. As a consequence of the PBC, the NEP landscape along the slow manifolds is symmetric with respect to the $u_1 = u_2$ line. For E = 0.5 and maximum signal amplitude, the system has two uniform attractors (both cells inhibited, both cells activated), two APS (with one cell activated and one inhibited) with the same value of $\Phi(u_1,u_2)$, four saddles, and one maximum. For S = 0 instead, the uniform attractor with both cells activated has collapsed with the maximum, and hence two saddles have disappeared.
When the value of $\Phi(u_1,u_2)$ at the uniform attractor, at either APS, and at either corresponding saddle is plotted as a function of S, one can see the following:
• Near maximum signal, the uniform attractor yields its stability to the APS. From this value of S on, the NEP barrier for the uniform attractor to decay into the APS (a noise-activated process) is small enough.
• Way before minimum signal, each APS collapses with its own saddle.
One then understands the picture: as S increases, one of the two APS is chosen. As S decreases past the collapse, only the uniform attractor survives. However, the neuron which was activated before has not yet recovered completely. Hence, in the next signal cycle, the other APS is more likely to appear.
5. Conclusions
In Sections 2 and 3, we have discussed the influence of quantum correlations on the formation of tightly bound solids. Section 2 is devoted to the effects of the overlaps and of the neglected multicenter integrals on tight-binding band spectra. An exact calculation in the framework of a simple atomic model has shown that they shift the top and bottom of the band spectrum unevenly (their effects are more pronounced at the top). Section 3 introduced a quantum Monte Carlo method specific to strongly correlated fermion systems. Section 4 addressed the stochastic dynamics of a ring of FHN cells—with nearest-neighbor electric (diffusive) coupling between their inhibitor fields—undergoing spatiotemporal pattern formation induced by noise and coupling. By means of a simple model for which a NEP can be found, the mechanism whereby the process takes place was investigated analytically.
Acknowledgments
The author is deeply indebted to his coauthors G.G. Izús, A.D. Sánchez, and M.G. dell’Erba from IFIMAR-CONICET (Faculty of Exact and Natural Sciences) and D.A. Mirabella and C.M. Aldao from INTEMA-CONICET (Faculty of Engineering) of the National University of Mar del Plata (UNMdP), Argentina, with whom he undertook part of the work referred to here. Support by UNMdP, through Grant EXA826–15/E779, is acknowledged.
Ideal gas: the (identical) particles composing such a gas do not interact among themselves.
Free electrons: electrons not submitted to any external (e.g., crystal) potential.
Chemical potential: the cost of adding a particle to the system. For two open systems (which can exchange matter and energy with their environments) to come to equilibrium, not only their temperatures but also their chemical potentials must be equal.
Subthreshold: unable by itself to drive a transition.
Stochastic resonance: the property of nonlinear systems of amplifying a subthreshold input signal in the presence of noise of the right intensity.
References
1. Mirabella DA, Aldao CM, Deza RR. Orbital nonorthogonality effects in band structure calculations within the tight-binding scheme. American Journal of Physics. 1994;62:162-166. DOI: 10.1119/1.17637
2. Mirabella DA, Aldao CM, Deza RR. Effects of orbital nonorthogonality on band structure within the tight-binding scheme. Physical Review B: Condensed Matter. 1994;50:12152-12155. DOI: 10.1103/PhysRevB.50.12152
3. Mirabella DA, Aldao CM, Deza RR. Exact one-band model calculation using the tight-binding method. International Journal of Quantum Chemistry. 1998;68:285-291. DOI: 10.1002/(SICI)1097-461X(1998)68:4<285::AID-QUA6>3.0.CO;2-R
4. Kung D, Dahl D, Blankenbecler R, Deza RR, Fulco JR. New stochastic treatment of fermions with application to a double-chain polymer. Physical Review B. 1985;32:2022-2029. DOI: 10.1103/PhysRevB.32.2022
5. Braunstein LA, Deza RR, Mijovilovich A. Exact versus quantum Monte Carlo analysis of the ground state of the one-dimensional Hubbard model for finite lattices. In: Cordero P, Nachtergaele B, editors. Nonlinear Phenomena in Fluids, Solids and Other Complex Systems. Amsterdam: North-Holland; 1991. pp. 313-327. DOI: 10.1016/B978-0-444-88791-7.50024-7
6. Lemons D. Paul Langevin’s 1908 paper “On the theory of Brownian motion” (“Sur la théorie du mouvement brownien,” C. R. Acad. Sci. (Paris) 146, 530-533 (1908)). American Journal of Physics. 1997;65:1079-1081. DOI: 10.1119/1.18725
7. Graham R. Weak noise limit and nonequilibrium potentials of dissipative dynamical systems. In: Tirapegui E, Villarroel D, editors. Instabilities and Nonequilibrium Structures. Dordrecht: D. Reidel; 1987. pp. 271-290. DOI: 10.1007/978-94-009-3783-3_12
8. Izús GG, Deza RR, Wio HS. Exact nonequilibrium potential for the FitzHugh–Nagumo model in the excitable and bistable regimes. Physical Review E. 1998;58:93-98. DOI: 10.1103/PhysRevE.58.93
9. Izús GG, Deza RR, Wio HS. Critical slowing-down in the FitzHugh–Nagumo model: A non-equilibrium potential approach. Computer Physics Communications. 1999;121-122:406-407. DOI: 10.1016/S0010-4655(99)00368-9
10. Wio HS, Deza RR. Aspects of stochastic resonance in reaction–diffusion systems: The nonequilibrium-potential approach. European Physical Journal: Special Topics. 2007;146:111. DOI: 10.1140/epjst/e2007-00173-0
11. Izús GG, Deza RR, Sánchez AD. Highly synchronized noise-driven oscillatory behavior of a FitzHugh–Nagumo ring with phase-repulsive coupling. AIP Conference Proceedings. 2007;887:89-95. DOI: 10.1063/1.2709590
12. Izús GG, Sánchez AD, Deza RR. Noise-driven synchronization of a FitzHugh–Nagumo ring with phase-repulsive coupling: A perspective from the system’s nonequilibrium potential. Physica A: Statistical Mechanics and its Applications. 2009;388:967-976. DOI: 10.1016/j.physa.2008.11.031
13. Sánchez AD, Izús GG. Nonequilibrium potential for arbitrary-connected networks of FitzHugh–Nagumo elements. Physica A: Statistical Mechanics and its Applications. 2010;389:1931-1944. DOI: 10.1016/j.physa.2010.01.013
14. Sánchez AD, Izús GG, Dell’Erba MG, Deza RR. A reduced gradient description of stochastic-resonant spatiotemporal patterns in a FitzHugh–Nagumo ring with electric inhibitory coupling. Physics Letters A. 2014;378:1579-1583. DOI: 10.1016/j.physleta.2014.03.048
Notes
• For isolators like these, the bandgap is too large for visible light to be absorbed by creating electron-hole pairs. Moreover, the absence of charge carriers rules out light scattering. Impurities provide localized midgap states, which favor two-step electron-hole pair creation by visible light.
• Sadly, the generalized disbelief in the mere existence of atoms just one century ago may have contributed to Ludwig Boltzmann’s suicide.
• We have already stated that the Fermi level is the chemical potential of an ideal free-electron gas. This concept is peculiar of the grand canonical ensemble.
• It will be a lattice only if all interactions but nearest neighbor ones are neglected. Note that crystals may even have a Cayley tree structure, like the so-called “Bethe lattices.”
• The electronic properties of amorphous solids are also of interest, e.g., in the photovoltaic (PV) industry.
• Recall it was Hamilton who first succeeded in casting conservative systems as first-order ones. In so doing, he put coordinates and momenta on the same footing. Systems are conservative if their phase space does not contract.
• A key is to ensure the multidimensional version of $D^{(2)}$ (a symmetric tensor) to be nonsingular.
Written By
Roberto Raúl Deza
Submitted: September 21st, 2018 Reviewed: January 14th, 2019 Published: August 23rd, 2019
https://stacks.math.columbia.edu/tag/087S
Lemma 29.43.16. Let $S$ be a scheme which admits an ample invertible sheaf. Then
1. any projective morphism $X \to S$ is H-projective, and
2. any quasi-projective morphism $X \to S$ is H-quasi-projective.
Proof. The assumptions on $S$ imply that $S$ is quasi-compact and separated, see Properties, Definition 28.26.1 and Lemma 28.26.11 and Constructions, Lemma 27.8.8. Hence Lemma 29.43.12 applies and we see that (1) implies (2). Let $\mathcal{E}$ be a finite type quasi-coherent $\mathcal{O}_ S$-module. By our definition of projective morphisms it suffices to show that $\mathbf{P}(\mathcal{E}) \to S$ is H-projective. If $\mathcal{E}$ is generated by finitely many global sections, then the corresponding surjection $\mathcal{O}_ S^{\oplus n} \to \mathcal{E}$ induces a closed immersion
$\mathbf{P}(\mathcal{E}) \longrightarrow \mathbf{P}(\mathcal{O}_ S^{\oplus n}) = \mathbf{P}^ n_ S$
as desired. In general, let $\mathcal{L}$ be an invertible sheaf on $S$. By Properties, Proposition 28.26.13 there exists an integer $n$ such that $\mathcal{E} \otimes _{\mathcal{O}_ S} \mathcal{L}^{\otimes n}$ is globally generated by finitely many sections. Since $\mathbf{P}(\mathcal{E}) = \mathbf{P}(\mathcal{E} \otimes _{\mathcal{O}_ S} \mathcal{L}^{\otimes n})$ by Constructions, Lemma 27.20.1 this finishes the proof. $\square$
Comment #6646 by Phoebe on
Why does $\textbf{P}(\mathcal{E})=\textbf{P}(\mathcal{E}\otimes_{\mathcal{O}_S}\mathcal{L}^{\otimes n})$ follow from Lemma 27.20.1?
Comment #6647 by Phoebe on
Oh I was being stupid. I get it now. You can delete my comments.
https://stats.stackexchange.com/posts/59677/revisions
You're right. The problem of multiple comparisons exists everywhere, but, because of the way it's typically taught, people only think it pertains to comparing many groups against each other via a whole bunch of $$t$$-tests. In reality, there are many examples where the problem of multiple comparisons exists, but where it doesn't look like lots of pairwise comparisons; for example, if you have a lot of continuous variables and you wonder if any are correlated, you will have a multiple comparisons problem (see here: Look and you shall find a correlation). Another example is the one you raise. If you were to run a multiple regression with 20 variables, and you used $$\alpha=.05$$ as your threshold, you would expect one of your variables to be 'significant' by chance alone, even if all nulls were true. The problem of multiple comparisons simply comes from the mathematics of running lots of analyses. If all null hypotheses were true and the variables were perfectly uncorrelated, the probability of falsely rejecting at least one true null would be $$1-(1-\alpha)^p$$ (e.g., with $$p=5$$, this is $$.23$$). The first strategy to mitigate this is to conduct a simultaneous test of your model. If you are fitting an OLS regression, most software will give you a global $$F$$-test as a default part of your output. If you are running a generalized linear model, most software will give you an analogous global likelihood ratio test. This test will give you some protection against type I error inflation due to the problem of multiple comparisons (cf. my answer here: Significance of coefficients in linear regression: significant t-test vs non-significant F-statistic). A similar case is when you have a categorical variable that is represented with several dummy codes; you wouldn't want to interpret those $$t$$-tests, but would drop all dummy codes and perform a nested model test instead. Another possible strategy is to use an alpha adjustment procedure, like the Bonferroni correction. You should realize that doing this will reduce your power as well as your familywise type I error rate. Whether this tradeoff is worthwhile is a judgment call for you to make. (FWIW, I don't typically use alpha corrections in multiple regression.) Regarding the issue of using $$p$$-values to do model selection, I think this is a really bad idea. I would not move from a model with 5 variables to one with only 2 because the others were 'non-significant'. When people do this, they bias their model. It may help you to read my answer here: algorithms for automatic model selection to understand this better. Regarding your update, I would not suggest you assess univariate correlations first so as to decide which variables to use in the final multiple regression model. Doing this will lead to problems with endogeneity unless the variables are perfectly uncorrelated with each other. I discussed this issue in my answer here: Estimating $$b_1x_1+b_2x_2$$ instead of $$b_1x_1+b_2x_2+b_3x_3$$. With regard to the question of how to handle analyses with different dependent variables, whether you'd want to use some sort of adjustment is based on how you see the analyses relative to each other. The traditional idea is to determine whether they are meaningfully considered to be a 'family'. This is discussed here: What might be a clear, practical definition for a "family of hypotheses"? You might also want to read this thread: Methods to predict multiple dependent variables.
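The familywise error-rate arithmetic in the answer is easy to verify numerically. A quick sketch: under the null, each p-value is uniform on [0, 1], and the five tests are assumed independent (an idealization, as the answer notes).

```python
import random

alpha, p = 0.05, 5
# Probability of at least one false rejection among p independent true nulls:
familywise = 1 - (1 - alpha) ** p        # = 1 - 0.95**5, roughly 0.226

# Monte Carlo check: under H0, each p-value is Uniform(0, 1); an experiment
# "falsely rejects" if any of its p p-values falls below alpha.
rng = random.Random(0)
n_experiments = 20_000
hits = sum(
    any(rng.random() < alpha for _ in range(p))
    for _ in range(n_experiments)
)
simulated = hits / n_experiments          # should be close to `familywise`
```

Both the closed-form value and the simulated fraction land near 0.23, matching the figure quoted in the answer; the global F-test and alpha-adjustment strategies discussed above are precisely ways of keeping this quantity under control.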
(FWIW, I don't typically use alpha corrections in multiple regression.) Regarding the issue of using $$p$$-values to do model selection, I think this is a really bad idea. I would not move from a model with 5 variables to one with only 2 because the others were 'non-significant'. When people do this, they bias their model. It may help you to read my answer here: algorithms for automatic model selection to understand this better. Regarding your update, I would not suggest you assess univariate correlations first so as to decide which variables to use in the final multiple regression model. Doing this will lead to problems with endogeneity unless the variables are perfectly uncorrelated with each other. I discussed this issue in my answer here: Estimating $$b_1x_1+b_2x_2$$ instead of $$b_1x_1+b_2x_2+b_3x_3$$. With regard to the question of how to handle analyses with different dependent variables, whether you'd want to use some sort of adjustment is based on how you see the analyses relative to each other. The traditional idea is to determine whether they are meaningfully considered to be a 'family'. This is discussed here: What might be a clear, practical definition for a "family of hypotheses"? You might also want to read this thread: Methods to predict multiple dependent variables. You're right. The problem of multiple comparisons exists everywhere, but, because of the way it's typically taught, people only think it pertains to comparing many groups against each other via a whole bunch of $$t$$-tests. In reality, there are many examples where the problem of multiple comparisons exists, but where it doesn't look like lots of pairwise comparisons; for example, if you have a lot of continuous variables and you wonder if any are correlated, you will have a multiple comparisons problem (see here: Look and you shall find a correlation). Another example is the one you raise. 
If you were to run a multiple regression with 20 variables, and you used $$\alpha=.05$$ as your threshold, you would expect one of your variables to be 'significant' by chance alone, even if all nulls were true. The problem of multiple comparisons simply comes from the mathematics of running lots of analyses. If all null hypotheses were true and the variables were perfectly uncorrelated, the probability of not falsely rejecting any true null would be $$1-(1-\alpha)^p$$ (e.g., with $$p=5$$, this is $$.23$$). The first strategy to mitigate against this is to conduct a simultaneous test of your model. If you are fitting an OLS regression, most software will give you a global $$F$$-test as a default part of your output. If you are running a generalized linear model, most software will give you an analogous global likelihood ratio test. This test will give you some protection against type I error inflation due to the problem of multiple comparisons (cf., my answer here: Significance of coefficients in linear regression: significant t-test vs non-significant F-statistic). A similar case is when you have a categorical variable that is represented with several dummy codes; you wouldn't want to interpret those $$t$$-tests, but would drop all dummy codes and perform a nested model test instead. Another possible strategy is to use an alpha adjustment procedure, like the Bonferroni correction. You should realize that doing this will reduce your power as well as reducing your familywise type I error rate. Whether this tradeoff is worthwhile is a judgment call for you to make. (FWIW, I don't typically use alpha corrections in multiple regression.) Regarding the issue of using $$p$$-values to do model selection, I think this is a really bad idea. I would not move from a model with 5 variables to one with only 2 because the others were 'non-significant'. When people do this, they bias their model. 
It may help you to read my answer here: algorithms for automatic model selection to understand this better. Regarding your update, I would not suggest you assess univariate correlations first so as to decide which variables to use in the final multiple regression model. Doing this will lead to problems with endogeneity unless the variables are perfectly uncorrelated with each other. I discussed this issue in my answer here: Estimating $$b_1x_1+b_2x_2$$ instead of $$b_1x_1+b_2x_2+b_3x_3$$. With regard to the question of how to handle analyses with different dependent variables, whether you'd want to use some sort of adjustment is based on how you see the analyses relative to each other. The traditional idea is to determine whether they are meaningfully considered to be a 'family'. This is discussed here: What might be a clear, practical definition for a "family of hypotheses"? You might also want to read this thread: Methods to predict multiple dependent variables. 4 added 5 characters in body edited Nov 4 '16 at 12:38 gung♦ 113k3434 gold badges282282 silver badges553553 bronze badges You're right. The problem of multiple comparisons exists everywhere, but, because of the way it's typically taught, people only think it pertains to comparing many groups against each other via a whole bunch of $$t$$-tests. In reality, there are many examples where the problem of multiple comparisons exists, but where it doesn't look like lots of pairwise comparisons; for example, if you have a lot of continuous variables and you wonder if any are correlated, you will have a multiple comparisons problem (see here: Look and you shall find a correlation). Another example is the one you raise. If you were to run a multiple regression with 20 variables, and you used $$\alpha=.05$$ as your threshold, you would expect one of your variables to be 'significant' by chance alone, even if all nulls were true. 
The problem of multiple comparisons simply comes from the mathematics of running lots of analyses. If all null hypotheses were true and the variables were perfectly uncorrelated, the probability of not falsely rejecting any true null would be $$1-(1-\alpha)^p$$ (e.g., with $$p=5$$, this is $$.23$$). The first strategy to mitigate against this is to conduct a simultaneous test of your model. If you are fitting an OLS regression, most software will give you a global $$F$$-test as a default part of your output. If you are running a generalized linear model, most software will give you an analogous global likelihood ratio test. This test will give you some protection against type I error inflation due to the problem of multiple comparisons (cf., my answer here: Significance of coefficients in linear regression: significant t-test vs non-significant F-statistic). A similar case is when you have a categorical variable that is represented with several dummy codes; you wouldn't want to interpret those $$t$$-tests, but would drop all dummy codes and perform a nested model test instead. Another possible strategy is to use an alpha adjustment procedure, like the Bonferroni correction. You should realize that doing this will reduce your power as well as reducing your familywise type I error rate. Whether this tradeoff is worthwhile is a judgment call for you to make. (FWIW, I don't typically use alpha corrections in multiple regression.) Regarding the issue of using $$p$$-values to do model selection. I, I think this is a really bad idea. I would not move from a model with 5 variables to one with only 2 because the others were 'non-significant'. When people do this, they bias their model. It may help you to read my answer here: algorithms for automatic model selection to understand this better. Regarding your update, I would not suggest you assess univariate correlations first so as to decide which variables to use in the final multiple regression model. 
Doing this will lead to problems with endogeneity unless the variables are perfectly uncorrelated with each other. I discussed this issue in my answer here: Estimating $$b_1x_1+b_2x_2$$ instead of $$b_1x_1+b_2x_2+b_3x_3$$. With regard to the question of how to handle analyses with different dependent variables, whether you'd want to use some sort of adjustment is based on how you see the analyses relative to each other. The traditional idea is to determine whether they are meaningfully considered to be a 'family'. This is discussed here: What might be a clear, practical definition for a "family of hypotheses"? You might also want to read this thread: Methods to predict multiple dependent variables. You're right. The problem of multiple comparisons exists everywhere, but, because of the way it's typically taught, people only think it pertains to comparing many groups against each other via a whole bunch of $$t$$-tests. In reality, there are many examples where the problem of multiple comparisons exists, but it doesn't look like lots of pairwise comparisons; for example, if you have a lot of continuous variables and you wonder if any are correlated, you will have a multiple comparisons problem (see here: Look and you shall find a correlation). Another example is the one you raise. If you were to run a multiple regression with 20 variables, and you used $$\alpha=.05$$ as your threshold, you would expect one of your variables to be 'significant' by chance alone, even if all nulls were true. The problem of multiple comparisons simply comes from the mathematics of running lots of analyses. If all null hypotheses were true and the variables were perfectly uncorrelated, the probability of not falsely rejecting any true null would be $$1-(1-\alpha)^p$$ (e.g., with $$p=5$$, this is $$.23$$). The first strategy to mitigate against this is to conduct a simultaneous test of your model. 
If you are fitting an OLS regression, most software will give you a global $$F$$-test as a default part of your output. If you are running a generalized linear model, most software will give you an analogous global likelihood ratio test. This test will give you some protection against type I error inflation due to the problem of multiple comparisons (cf., my answer here: Significance of coefficients in linear regression: significant t-test vs non-significant F-statistic). A similar case is when you have a categorical variable that is represented with several dummy codes; you wouldn't want to interpret those $$t$$-tests, but would drop all dummy codes and perform a nested model test instead. Another possible strategy is to use an alpha adjustment procedure, like the Bonferroni correction. You should realize that doing this will reduce your power as well as reducing your familywise type I error rate. Whether this tradeoff is worthwhile is a judgment call for you to make. (FWIW, I don't typically use alpha corrections in multiple regression.) Regarding the issue of using $$p$$-values to do model selection. I think this is a really bad idea. I would not move from a model with 5 variables to one with only 2 because the others were 'non-significant'. When people do this, they bias their model. It may help you to read my answer here: algorithms for automatic model selection to understand this better. Regarding your update, I would not suggest you assess univariate correlations first so as to decide which variables to use in the final multiple regression model. Doing this will lead to problems with endogeneity unless the variables are perfectly uncorrelated with each other. I discussed this issue in my answer here: Estimating $$b_1x_1+b_2x_2$$ instead of $$b_1x_1+b_2x_2+b_3x_3$$. 
With regard to the question of how to handle analyses with different dependent variables, whether you'd want to use some sort of adjustment is based on how you see the analyses relative to each other. The traditional idea is to determine whether they are meaningfully considered to be a 'family'. This is discussed here: What might be a clear, practical definition for a "family of hypotheses"? You might also want to read this thread: Methods to predict multiple dependent variables. You're right. The problem of multiple comparisons exists everywhere, but, because of the way it's typically taught, people only think it pertains to comparing many groups against each other via a whole bunch of $$t$$-tests. In reality, there are many examples where the problem of multiple comparisons exists, but where it doesn't look like lots of pairwise comparisons; for example, if you have a lot of continuous variables and you wonder if any are correlated, you will have a multiple comparisons problem (see here: Look and you shall find a correlation). Another example is the one you raise. If you were to run a multiple regression with 20 variables, and you used $$\alpha=.05$$ as your threshold, you would expect one of your variables to be 'significant' by chance alone, even if all nulls were true. The problem of multiple comparisons simply comes from the mathematics of running lots of analyses. If all null hypotheses were true and the variables were perfectly uncorrelated, the probability of not falsely rejecting any true null would be $$1-(1-\alpha)^p$$ (e.g., with $$p=5$$, this is $$.23$$). The first strategy to mitigate against this is to conduct a simultaneous test of your model. If you are fitting an OLS regression, most software will give you a global $$F$$-test as a default part of your output. If you are running a generalized linear model, most software will give you an analogous global likelihood ratio test. 
This test will give you some protection against type I error inflation due to the problem of multiple comparisons (cf., my answer here: Significance of coefficients in linear regression: significant t-test vs non-significant F-statistic). A similar case is when you have a categorical variable that is represented with several dummy codes; you wouldn't want to interpret those $$t$$-tests, but would drop all dummy codes and perform a nested model test instead. Another possible strategy is to use an alpha adjustment procedure, like the Bonferroni correction. You should realize that doing this will reduce your power as well as reducing your familywise type I error rate. Whether this tradeoff is worthwhile is a judgment call for you to make. (FWIW, I don't typically use alpha corrections in multiple regression.) Regarding the issue of using $$p$$-values to do model selection, I think this is a really bad idea. I would not move from a model with 5 variables to one with only 2 because the others were 'non-significant'. When people do this, they bias their model. It may help you to read my answer here: algorithms for automatic model selection to understand this better. Regarding your update, I would not suggest you assess univariate correlations first so as to decide which variables to use in the final multiple regression model. Doing this will lead to problems with endogeneity unless the variables are perfectly uncorrelated with each other. I discussed this issue in my answer here: Estimating $$b_1x_1+b_2x_2$$ instead of $$b_1x_1+b_2x_2+b_3x_3$$. With regard to the question of how to handle analyses with different dependent variables, whether you'd want to use some sort of adjustment is based on how you see the analyses relative to each other. The traditional idea is to determine whether they are meaningfully considered to be a 'family'. This is discussed here: What might be a clear, practical definition for a "family of hypotheses"? 
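To make the familywise error arithmetic concrete, here is a small plain-Python illustration of the $$1-(1-\alpha)^p$$ figure and of the effect of a Bonferroni adjustment:

```python
# Probability of at least one false rejection among p independent
# true-null tests, each run at level alpha
alpha, p = 0.05, 5
fwer = 1 - (1 - alpha) ** p
print(round(fwer, 2))  # 0.23, the figure quoted above

# Bonferroni adjustment: run each test at alpha / p instead
fwer_adjusted = 1 - (1 - alpha / p) ** p
print(round(fwer_adjusted, 3))  # back down to roughly alpha
```

The cost of the adjustment, as noted above, is reduced power for each individual test.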
You might also want to read this thread: Methods to predict multiple dependent variables.

answered May 21 '13 at 21:37 gung♦
https://me.gateoverflow.in/536/gate2016-1-15
A plastic sleeve of outer radius $r_0=1$ $mm$ covers a wire (radius $r=0.5\:mm$) carrying electric current. Thermal conductivity of the plastic is $0.15 W/m-K$. The heat transfer coefficient on the outer surface of the sleeve exposed to air is $25 W/m^2-K$. Due to the addition of the plastic cover, the heat transfer from the wire to the ambient will
1. increase
2. remain the same
3. decrease
4. be zero
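As a hint for checking this one (the answer is not given in the original post): for a cylindrical wire, adding insulation increases heat transfer as long as the outer radius stays below the critical radius of insulation, $r_c = k/h$ — a standard heat-transfer result. A quick numeric check:

```python
# Critical radius of insulation for a cylinder: r_c = k / h.
# Below r_c, adding insulation lowers the total thermal resistance,
# so heat loss from the wire rises.
k = 0.15    # thermal conductivity of the plastic, W/m-K
h = 25.0    # convective coefficient on the outer surface, W/m^2-K

r_c_mm = k / h * 1000.0   # critical radius in mm
r0_mm = 1.0               # outer radius of the sleeve, mm

print(round(r_c_mm, 3))   # 6.0 mm, well above the 1 mm sleeve radius
print("increase" if r0_mm < r_c_mm else "decrease")
```

Since $r_0 = 1\:mm < r_c = 6\:mm$, the heat transfer will increase (option 1).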
https://socratic.org/questions/how-do-you-write-the-partial-fraction-decomposition-of-the-rational-expression-1-13
# How do you write the partial fraction decomposition of the rational expression (-13x+11) / (2x^3 - 2x^2 + x - 1)?
$\frac{- 13 x + 11}{2 {x}^{3} - 2 {x}^{2} + x - 1} = \frac{- \frac{2}{3}}{x - 1} + \frac{\frac{4}{3} x - \frac{35}{3}}{2 {x}^{2} + 1}$
#### Explanation:
From the given $\frac{- 13 x + 11}{2 {x}^{3} - 2 {x}^{2} + x - 1}$, we start by getting all the factors of the denominator
I will assume that we already know factoring, ok?
$2 {x}^{3} - 2 {x}^{2} + x - 1 = \left(x - 1\right) \left(2 {x}^{2} + 1\right)$
This is our denominator for the right side of the equation.
Let us set up the equation with the unknowns A, B, and C:
$\frac{- 13 x + 11}{2 {x}^{3} - 2 {x}^{2} + x - 1} = \frac{A}{x - 1} + \frac{B x + C}{2 {x}^{2} + 1}$
Simplify using the LCD $\left(x - 1\right) \left(2 {x}^{2} + 1\right)$
$\frac{- 13 x + 11}{2 {x}^{3} - 2 {x}^{2} + x - 1} = \frac{A \left(2 {x}^{2} + 1\right) + \left(B x + C\right) \left(x - 1\right)}{\left(x - 1\right) \left(2 {x}^{2} + 1\right)}$
Expand then simplify
$\frac{- 13 x + 11}{2 {x}^{3} - 2 {x}^{2} + x - 1} = \frac{2 A {x}^{2} + A + B {x}^{2} - B x + C x - C}{\left(x - 1\right) \left(2 {x}^{2} + 1\right)}$
Rearrange from highest to lowest degree the terms in the numerator at the right side of the equation
$\frac{- 13 x + 11}{2 {x}^{3} - 2 {x}^{2} + x - 1} = \frac{2 A {x}^{2} + B {x}^{2} - B x + C x + A - C}{\left(x - 1\right) \left(2 {x}^{2} + 1\right)}$
Let us match the numerical coefficients of the terms of the numerators of the left and right side of the equation
$\frac{0 \cdot {x}^{2} + \left(- 13\right) {x}^{1} + 11 \cdot {x}^{0}}{2 {x}^{3} - 2 {x}^{2} + x - 1} = \frac{\left(2 A + B\right) {x}^{2} + \left(- B + C\right) {x}^{1} + \left(A - C\right) \cdot {x}^{0}}{\left(x - 1\right) \left(2 {x}^{2} + 1\right)}$
The equations are
$2 A + B = 0$
$- B + C = - 13$
$A - C = 11$
Solving the system simultaneously gives
$A = - \frac{2}{3}$
$B = \frac{4}{3}$
$C = - \frac{35}{3}$
$\frac{- 13 x + 11}{2 {x}^{3} - 2 {x}^{2} + x - 1} = \frac{- \frac{2}{3}}{x - 1} + \frac{\frac{4}{3} x - \frac{35}{3}}{2 {x}^{2} + 1}$
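As a sanity check, the decomposition can be verified with exact rational arithmetic; the sketch below uses Python's `fractions` module to confirm that both sides agree at several test points:

```python
from fractions import Fraction as F

# Coefficients found above: A = -2/3, B = 4/3, C = -35/3
A, B, C = F(-2, 3), F(4, 3), F(-35, 3)

def lhs(x):
    return (-13*x + 11) / (2*x**3 - 2*x**2 + x - 1)

def rhs(x):
    return A / (x - 1) + (B*x + C) / (2*x**2 + 1)

# exact agreement at several points (avoiding the pole at x = 1)
for x in [F(0), F(2), F(-3), F(1, 2)]:
    assert lhs(x) == rhs(x)
print("decomposition verified")
```

Because two rational functions of degree 3 that agree at four distinct points (and share the same denominator factorization) must be identical, this check is conclusive, not just suggestive.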
https://study.com/academy/answer/estimate-of-population-size-after-20-weeks.html
# Estimate of population size after 20 weeks.
## Question:
Estimate of population size after 20 weeks.
| Week | N1 | N2 | N3 | Lambda 1 | Lambda 2 | Lambda 3 |
|------|------|------|------|-------------|--------------|-------------|
| 8 | 392 | 449 | 476 | | | |
| 9 | 616 | 679 | 726 | 1.568924414 | 1.513031954 | 1.553016087 |
| 10 | 852 | 902 | 990 | 1.384103165 | 1.328708274 | 1.364209078 |
| 11 | 1066 | 1072 | 1201 | 1.250768017 | 1.188363381 | 1.213335863 |
| 12 | 1211 | 1179 | 1339 | 1.136776624 | 1.0999052555 | 1.114815032 |
| 13 | 1300 | 1239 | 1418 | 1.072808078 | 1.050613686 | 1.058680776 |
| 14 | 1348 | 1270 | 1459 | 1.03748048 | 1.025017023 | 1.0291547 |
| 15 | 1374 | 1285 | 1480 | 1.018949754 | 1.012210773 | 1.014275675 |
| 16 | 1387 | 1293 | 1490 | 1.009491876 | 1.005923038 | 1.006939563 |
| 17 | 1400 | 1295 | 1492 | 1.009436426 | 1.001679153 | 1.001151106 |
| 18 | 1398 | 1304 | 1501 | 0.998571429 | 1.006949807 | 1.006032172 |
| 19 | 1390 | 1312 | 1499 | 0.994277539 | 1.006134969 | 0.998667555 |
| Average lambda | | | | 1.1346898 | 1.112594301 | 1.1236616 |
| Overall lambda | | | | 1.11116565 | 1.09353344 | 1.10199982 |
| r (from average lambda) | | | | 0.12635931 | 0.106694497 | 0.116592639 |
| r (from overall lambda) | | | | 0.105409599 | 0.089414142 | 0.097126548 |
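The two r rows are consistent with the standard relation r = ln(lambda), applied to the average and to the overall growth ratios respectively; a quick check (assuming the ratio columns are indeed the week-to-week growth multipliers):

```python
import math

# Growth multipliers from the table above
avg_lambda     = [1.1346898, 1.112594301, 1.1236616]
overall_lambda = [1.11116565, 1.09353344, 1.10199982]

# r = ln(lambda) reproduces the tabulated r rows
print([round(math.log(x), 9) for x in avg_lambda])
print([round(math.log(x), 9) for x in overall_lambda])
```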
## Data fit
This problem admits a rather ambiguous approach. Therefore, the best way to obtain information from the data provided is to fit the data. This is a statistical process by which a specific mathematical behavior is assigned to a group of experimental points.
In our case, the experimental points are the number of lambs in each herd as a function of time. If there were much more information, a specific behavior could be assigned to the number of lambs; however, in this case that information is not available. Therefore, a stretched asymptotic exponential equation of the following type was chosen:
{eq}y=a(1-e^{-bx})^c {/eq}
For herd 1 we have:
{eq}N1(t)=1418\left ( 1-e^{-(0.52)t} \right )^{84}\\ N1(20\text{ weeks})=1418\left ( 1-e^{-(0.52)(20)} \right )^{84}\\ \therefore N1(20\text{ weeks})=1414\text{ lambs} {/eq}
For herd 2 we have:
{eq}N2(t)=1315\left ( 1-e^{-(0.56)t} \right )^{94}\\ N2(20\text{ weeks})=1315\left ( 1-e^{-(0.56)(20)} \right )^{94}\\ \therefore N2(20\text{ weeks})=1313\text{ lambs} {/eq}
For herd 3 we have:
{eq}N3(t)=1516\left ( 1-e^{-(0.54)t} \right )^{94}\\ N3(20\text{ weeks})=1516\left ( 1-e^{-(0.54)(20)} \right )^{94}\\ \therefore N3(20\text{ weeks})=1513\text{ lambs} {/eq}
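The three fitted curves above can be evaluated at t = 20 weeks with a few lines of Python, reproducing the stated estimates:

```python
import math

# Fitted model from above: N(t) = a * (1 - exp(-b*t))**c
def N(t, a, b, c):
    return a * (1.0 - math.exp(-b * t)) ** c

herds = {"N1": (1418, 0.52, 84),
         "N2": (1315, 0.56, 94),
         "N3": (1516, 0.54, 94)}

for name, (a, b, c) in herds.items():
    print(name, round(N(20, a, b, c)), "lambs")  # 1414, 1313, 1513
```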
http://www.r-bloggers.com/coloring-the-world-extracting-user-specific-color-palettes-from-tableau-workbooks/
# Coloring the world – Extracting user specific color palettes from Tableau Workbooks
June 22, 2014
By
(This article was first published on Data * Science + R, and kindly contributed to R-bloggers)
If you read some of my last blog posts you may have noticed that R got a new companion called Tableau. Tableau is an easy-to-use and mighty BI toolbox for visualizing all kinds of data, and I suggest everybody give it a try. One of the things that I like very much is that it gives you all the options to create simple graphics instantly and, on the other side, to assemble multifunctional and "customized" dashboards by using table calculations, R scripts and other more advanced features. But nobody is perfect. Especially one thing about Tableau's handling of user-specific color schemas/palettes bothers me a little bit. Right now, there is no easy way to save individual color palettes out of Tableau for reuse in other workbooks. The current blog post will show you how this can be achieved, if you don't fear executing a little Python code.
Imagine that you have a discrete attribute where you want to color the different labels according to some specific schema. This is easy to do in Tableau Desktop. Just right-click on the dimension, click "Default Properties" and "Color…" and assign the colors to the corresponding labels. Double-clicking on a label gives you the option to enter the RGB code.
If the same attribute is used in different workbooks, you don’t want to repeat the process every time - especially if there are a lot of labels or the colors are not part of one of the build-in color palettes. For this Tableau offers the possibility to save color schemas in a user specific preference file (Preferences.tps). There is a well written KB article about how to store your palette as xml in the preference file. If you bring the preference file in place, you can select your individual palette when editing the color of an attribute.
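For reference, a saved palette in the preference file follows the XML structure described in that KB article; the palette name and hex values below are placeholders:

```xml
<?xml version='1.0'?>
<workbook>
  <preferences>
    <color-palette name="My Categorical Palette" type="regular">
      <color>#eb912b</color>
      <color>#7099a5</color>
      <color>#c71f34</color>
    </color-palette>
  </preferences>
</workbook>
```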
For me, this solution stops somewhere halfway. Imagine the common situation that you have assigned the colors manually in Tableau. Now you want to save your work, but there is no way to save it out of Tableau! Instead, you have to click on every single label, memorize the color code and assemble an XML preference file piece by piece. I have seen saving manually created color schemas as a standard feature in other types of visualization systems, for example geographical information systems (GIS).
Remark: Additionally, Tableau requires you to transform all of your decimal RGB codes from Tableau Desktop to hexadecimal RGB codes if you want to put them into the preference file. Of course, this is no rocket science (using for example Excel with its DEC2HEX function), but you may ask yourself if there shouldn't be some better user experience. Please Tableau, can you just agree on one common format?
But here is help in the form of a small Python script that can extract the desired information. Here is how it works: you may have noticed that if you view a workbook *.twb file in your text editor, you see XML code that defines the whole workbook. If you used customized coloring inside a workbook, the information about that has to be part of this XML code. Therefore, it should be possible to extract the information using a script that traverses the workbook. Because accessing the *.twb XML code directly is not supported by Tableau, there is no overall documentation. So any information about how things are stored needs to be figured out in a trial-and-error manner. I recommend using a versioning system like Git together with a text editor to track any change. Modify the workbook in Tableau, save, and inspect what's changed with Git. Going this way, you will find out how the color information is saved: as long as you stick with the default coloring there will be no info in the XML code, but when you change the default coloring an "encoding" tag will appear as part of the data source, under which you find the list of colors.
Now here is the Python script to extract this information:
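The full script is linked at the end of the post; the sketch below shows the core idea. Since the *.twb format is undocumented, the tag and attribute names used here ('encoding', 'attr', 'map', 'value') are assumptions from inspecting saved workbooks and may differ between Tableau versions.

```python
# Sketch: pull manually assigned colors out of a Tableau *.twb workbook.
# The XML layout is undocumented; tag/attribute names are guesses from
# inspecting saved workbooks, not a stable API.
import xml.etree.ElementTree as ET

def extract_palettes(root):
    """Return {attribute: [color, ...]} for every customized attribute
    found under the workbook's <encoding> elements."""
    palettes = {}
    for enc in root.iter('encoding'):
        attr = enc.get('attr', 'unknown')
        colors = [m.get('value') for m in enc.iter('map')]
        if colors:
            palettes[attr] = colors
    return palettes

def to_preferences(palettes):
    """Render the palettes as a Preference.tps snippet (categorical type)."""
    out = ["<?xml version='1.0'?>", '<workbook>', '  <preferences>']
    for name, colors in palettes.items():
        out.append('    <color-palette name="%s" type="regular">' % name)
        out.extend('      <color>%s</color>' % c for c in colors)
        out.append('    </color-palette>')
    out += ['  </preferences>', '</workbook>']
    return '\n'.join(out)

# Demo on a miniature stand-in for a workbook; for a real file you would
# use ET.parse(path).getroot() instead.
demo = ET.fromstring(
    "<workbook><datasources><datasource>"
    "<encoding attr='Region'><map value='#ff0000'/><map value='#00ff00'/></encoding>"
    "</datasource></datasources></workbook>")
palettes = extract_palettes(demo)
print(palettes)   # {'Region': ['#ff0000', '#00ff00']}
```

On a real workbook you would call `extract_palettes(ET.parse(path).getroot())` and write the result of `to_preferences(...)` to a preference file.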
The structure is very simple: it receives parameters from the command line, opens the workbook, extracts the color palette for every requested attribute, and prints the information so that it can easily be saved as a preference file. Depending on the given parameters, the script will also write the preference file automatically at the end. Every color palette is saved as a categorical one; if you want to change that (for example, to use it as a sequential palette), edit the "type" attribute with a text editor. Additionally, behind every color the script prints the corresponding attribute value as an XML comment. This helps when editing the file afterwards or when creating new palettes outside Tableau.
To run it, just type
python Extract_Color_Schema_TWB.py --s 'Color_and_Sort_Test.twb' 'Region' 'Region (Copy)'
to extract the color palettes for the attributes "Region" and "Region (Copy)" from the workbook "Color_and_Sort_Test.twb". The parameter "--s" tells the script to save the result as "Preference.tps" in the current directory. You can copy this file directly into your Tableau repository, and after restarting Tableau the new color palettes should be available.
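For reference, the generated preference file follows Tableau's documented custom-palette format; a single categorical palette looks roughly like this (the colors and the attribute-value comments are placeholders):

```xml
<?xml version='1.0'?>
<workbook>
  <preferences>
    <!-- type="regular" is a categorical palette; the other documented
         types are "ordered-sequential" and "ordered-diverging" -->
    <color-palette name="Region" type="regular">
      <color>#1f77b4</color>  <!-- Central -->
      <color>#ff7f0e</color>  <!-- East -->
      <color>#2ca02c</color>  <!-- West -->
    </color-palette>
  </preferences>
</workbook>
```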
I hope this will help other users sharing the same problem (if you want to help, you may vote for this feature idea/proposal in the Tableau community).
Please send me some feedback if the script isn't working for you (together with the problematic workbook). As explained, I assembled the knowledge about how color palettes are stored by testing different cases, so it is quite possible that I missed something ;-).
This time you can find the script here on GitHub.
https://www.lecturio.com/concepts/population-genetics/
Population Genetics
Population genetics is a field in genetics that is concerned with the differences in the gene pool between different populations and how this underlies phenotypic differences between populations. The Hardy-Weinberg equilibrium serves as a basis for studying genetic variation within a population and allows for the calculation of allelic frequency. The process of natural selection is what determines the allelic frequencies in the population and the variability in the genotype-phenotype relationships between species.
Hardy-Weinberg Equilibrium
Introduction
Population genetics studies genetic variation within a group.
• Depends on genetic, environmental, and societal factors
• These factors determine the frequency and distribution of alleles and genotypes.
A given population possesses a common gene pool.
• May contain several alleles of 1 gene
• Relative proportions of these alleles are referred to as gene frequency.
Gene flow is created when individuals migrate into or away from the population.
Definition
The Hardy-Weinberg equilibrium states that, within a given population and in the absence of evolutionary influences, both allele and genotype frequencies remain constant across generations.
Assumptions of the Hardy-Weinberg equilibrium
The Hardy-Weinberg equilibrium relies on 7 assumptions:
• Organisms must be diploid.
• Sexual reproduction produces new members of the population (no migration).
• Generations do not overlap.
• Random mating (without selection)
• Infinitely large population size
• Allele frequencies are equal between the sexes.
• Within a population, there is no gene flow or mutation.
Evolutionary influences
• Genetic drift: Random sampling of organisms leads to a change in genetic frequency.
• Assortative mating: Individuals with similar phenotypes mate more commonly.
• Natural selection: Phenotypes that provide an advantage perpetuate an increased frequency of the corresponding genotype.
• Sexual selection: differential reproductive success arising from competition for mates and mate choice
• Mutation: change in nucleotide sequence
• Gene flow: transfer of genetic material between populations
• Meiotic drive: 1 allele is preferentially transmitted during meiosis.
• Population bottleneck: event that reduces population (e.g., natural disaster)
• Inbreeding: mating between individuals or organisms that are closely related
• Founder effect: a new population established by a small number of individuals, leading to a decrease in genetic variation
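Genetic drift in particular can be made concrete with a small simulation. The Wright-Fisher sketch below is a standard textbook model, not part of the text above, and its parameters are purely illustrative: each generation, 2N alleles are redrawn from the current frequency.

```python
import random

def wright_fisher(pop_size, p0, generations, seed=42):
    """Allele frequency under pure drift: each generation redraws
    2N alleles, each a Bernoulli trial at the current frequency p."""
    rng = random.Random(seed)
    p, history = p0, [p0]
    for _ in range(generations):
        # Sample 2N gametes; the new frequency is the sampled proportion.
        count = sum(rng.random() < p for _ in range(2 * pop_size))
        p = count / (2 * pop_size)
        history.append(p)
    return history

traj = wright_fisher(pop_size=50, p0=0.5, generations=200)
# With no selection or mutation, the frequency wanders randomly, and
# small populations tend to drift toward fixation (p = 0 or p = 1).
print(traj[0], traj[-1])
```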
Hardy-Weinberg Equation
Introduction
The Hardy-Weinberg equation allows for the calculation of genetic variation of a population.
• This equation relies on the assumption that genetic variation in a population will remain constant between generations.
• Assumes a genetic locus with 2 alleles
• The Hardy-Weinberg equation relies on the absence of sexual selection, the absence of gene flow or mutation, and a population large enough that probabilities equal frequencies.
Equation
The Hardy-Weinberg equation:
$$p^{2}+2pq+q^{2}=1$$
• Components:
• The “p” stands for the frequency of 1 allele and “q” stands for the frequency of the other allele.
• Genotype AA: frequency p²
• Genotypes Aa, aA: frequency 2pq
• Genotype aa: frequency q²
• With the aid of the Hardy-Weinberg equation, population geneticists can calculate how frequently a given disease allele, and each genotype, occurs in a population. If alleles A and a are distributed in a population at equilibrium, the following applies:
$$p+q=1 (=100\%)$$
Sample calculation
In a population, the dominant allele is present with a frequency of 60% in the gene pool. What is the distribution of the possible genotypes within the population?
• p = 60%, q = 40%, because q + p = 100%
• p² + 2pq + q² = 1
• (0.6)2 + 2 × (0.6 × 0.4) + (0.4)2 = 1
• 0.36 + 0.48 + 0.16 = 1
• p² = 36%, 2pq = 48%, q² = 16%
• Answer: AA is 36%; Aa is 48%, and aa is 16%.
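The sample calculation is a direct transcription of the equation, which makes it easy to verify in a few lines of code:

```python
# Hardy-Weinberg genotype frequencies from the dominant-allele frequency.
def hardy_weinberg(p):
    """Return genotype frequencies (AA, Aa, aa) for allele frequencies p and q = 1 - p."""
    q = 1 - p
    return p ** 2, 2 * p * q, q ** 2

hom_dom, het, hom_rec = hardy_weinberg(0.6)
print(hom_dom, het, hom_rec)  # approximately 0.36, 0.48, 0.16
# The three genotype frequencies always sum to 1, since (p + q)^2 = 1.
assert abs(hom_dom + het + hom_rec - 1) < 1e-12
```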
Natural Selection
• Over longer periods of time, the gene pool of a population changes via several mechanisms, most commonly natural selection.
• Natural selection favors individuals with a genetic composition that improves the chances of survival and reproduction:
• A trait that gives a reproductive advantage will be passed down at a higher rate than traits that do not give a reproductive advantage.
• These genes will occupy a growing share of the gene pool over time.
• If the genes that offer survival advantages are dominant, they spread rapidly.
• Dominant genes that are disadvantageous to the individual disappear quickly.
• Recessive genes persist longer in a population.
Clinical Relevance
• Sickle cell anemia: an example of a regionally frequent gene defect that offers its carriers a selective advantage. Sickle cell anemia is especially prevalent in sub-Saharan Africa, where around 80% of the disease occurs. Sickle cell anemia is an autosomal recessive condition. Heterozygous carriers, who carry the HbS gene, have a higher resistance to malaria than non-carriers do. Thus, through selective advantage, the carriers in malaria-endemic regions receive a high share of the HbS gene in the gene pool of the population. These heterozygous individuals are able to produce enough hemoglobin for normal function while receiving the benefit of less-severe malaria infections.
• Tay-Sachs disease: an illustration of the founder effect, which is seen when a smaller group isolates itself from a population, splits off, and reproduces, thereby decreasing genetic variation. Ashkenazi Jews have a higher-than-normal chance of Tay-Sachs disease and other lipid storage disorders, which is partially attributed to a high frequency of the disease allele in the early founding population.
• Antibiotic resistance: commonly occurs through horizontal gene transfer, which refers to the exchange of genetic information from 1 organism to another. Horizontal transfer of genetic information occurs between concurrently living organisms, not through sexual reproduction.
http://openstudy.com/updates/55d5222ae4b0016bb01828ca
## anonymous one year ago can't find the integral of this function v(t) = (1+t^2)^(1/3) anyone know how?
1. anonymous
$\int\limits \sqrt[3]{1 + t^2}$ this is what i cant find
2. idku
$\large \int\limits_{ }^{ }\sqrt[3]{1+t^2}dt$
3. anonymous
yep thats right
4. idku
well, seems as though it doesn't have a closed form, based on my first impression. Let's set u=1+t² and see what we get. Did you try that?
5. anonymous
yep, didn't get anywhere. be my guest though im pretty bad at u substitutions :P
6. idku
oh, my trig sub...
7. idku
$$\large \displaystyle\int\limits_{ }^{ }\sqrt[3]{1+t^2}dt$$ $$\large \displaystyle t=\tan\theta$$
8. anonymous
trig sub...? never heard of it.
9. idku
lets see what then, u=tan(theta) du=sec^2theta $$\large \displaystyle\int\limits_{ }^{ }\sec^2\theta\sqrt[3]{1+\tan^2\theta}~d\theta$$ $$\large \displaystyle\int\limits_{ }^{ }\sec^{2+2/3}\theta$$ that is bullsh... as well
10. anonymous
wolfram is giving me something i dont even understand .
13. idku
are you sure you aren't given limits of integration, and this is not a riemman sum type of question?
14. idku
or maybe it is sort of $$\large \displaystyle f(x)=\int\limits_{2 }^{ \cos(x)}\sqrt[3]{1+t^2}dt$$ and you need f'(x) ?
16. anonymous
this is my given question
17. idku
this is much better
18. idku
can I use y instead of x (as y(t) ?)
19. idku
You can use a stepsize on that one I think
20. anonymous
interesting i figured i just had to find s(t) and just change the constant and plug in t = 3
21. idku
well, that is the task as far as the steps go, but s(t)=?
22. idku
that is the question where we don't arrive at a simple conclusion just like this, right?
23. idku
have you heard of a step-size, Euler's method or anything like this?
24. anonymous
no sorry :(
25. idku
$$\large \displaystyle f'(t)=\sqrt[3]{1+t^2}$$ y(0)=2, find y(3)=? lets use a step-size of h=1 for now (but with h=1/2 or even less you can get a better approximation. Each step of size h is as follows: $$\large \displaystyle \left(x_n,y_n\right)=\left(x_{n-1}+h,~y_{n-1}~+h\cdot \color{red}{f'(x_{n-1},y_{n-1})}~\right)$$
26. idku
maybe we won't understand why it is like this right now, but you can see the formula, without completely going like wt(f)....
27. idku
$$\large \displaystyle \left(t_n,y_n\right)=\left(t_{n-1}+h,~y_{n-1}~+h\cdot \color{red}{f'(t_{n-1},y_{n-1})}~\right)$$ excuse me it should be t.
28. idku
$$\large \displaystyle \left(x_n,y_n\right)=\left(x_{n-1}+h,~y_{n-1}~+h\cdot \color{red}{f'(x_{n-1},y_{n-1})}~\right)$$ you are given: $$\large \displaystyle f'(t)=\sqrt[3]{1+t^2}$$ and a first point of (t=0, y=2)
29. idku
$$\large \displaystyle \left(x_1,y_1\right)=\left(x_{0}+1,~2~+1\cdot \color{red}{f'(0,2)}~\right)$$ this would be the first step starting from (0,2), and h=1
30. idku
are you getting it at least a little? (sorry I am bad at explaining)
31. idku
ok, I will give you the entire process wth h=1, and then you will tell me how much you get out of it.
32. anonymous
lol sort of , i'm not quite sure what the h represents
33. idku
you are starting from (0,2) right? that is the first given point [drawing omitted] so it is the size of the step. in other words, we are carefully drawing the approximation for the graph of y(t) based on y'(t), taking it with little steps of size h (of size 1 in this case)
34. idku
we don't know where this step size of 1 would land, and using this formula we can find that out.
35. idku
and from there, when we do the step and find the next point, we will take another step... ultimately, we wil get to the y(3)
36. idku
$$\large \displaystyle \left(x_n,y_n\right)=\left(x_{n-1}+h,~y_{n-1}~+h\cdot \color{red}{f'(x_{n-1},y_{n-1})}~\right)$$ $$\large \displaystyle \left(x_1,y_1\right)=\left(x_{0}+1,~2~+1\cdot \color{red}{f'(0,2)}~\right)$$ (there just isn't a y in this case, so would just plug in 0 for t) I haven't ever done Euler's for explicitly defined functions. $$\large \displaystyle \left(x_1,y_1\right)=\left(x_{0}+1,~2~+1\cdot \color{red}{\sqrt[3]{1+0^2}}~\right)$$
37. idku
oh, you know x_0 is 0 , i forgot to put that in
38. anonymous
okay wait one sec , why are we even finding (x1,y1)?
39. idku
$$\large \displaystyle \left(x_1,y_1\right)=\left(0+1,~2~+1\cdot \color{red}{\sqrt[3]{1+0^2}}~\right)$$ $$\large \displaystyle \left(x_1,y_1\right)=\left(1,~3\right)$$
40. idku
because that is the next point
41. idku
[drawing omitted]
42. idku
when you do a u=substitution du=?
43. idku
du=2t dt so it ends up 1/(2 √(u-1)) du = dt
44. idku
$\int\limits_{ }^{ }\frac{\sqrt[3]{u}}{\sqrt{u-1}}du$
45. idku
if it had 2t next to the cube root, then yes, u sub would be the only and the perfect
46. idku
anyway, i guess back to stepsize?
47. idku
$$\large \displaystyle \left(x_1,y_1\right)=\left(1,~3\right)$$ so here is the result of 1 step
48. idku
see how I am doing the steps? (See how I plug them into the formula?)
49. anonymous
yes , okay so say we do this for a whole bunch of points then what?
50. idku
$$\large \displaystyle \left(x_n,y_n\right)=\left(x_{n-1}+h,~y_{n-1}~+h\cdot \color{red}{f'(x_{n-1},y_{n-1})}~\right)$$ can you use this formula to get from (1,3) with h=1 to the next point? Note: in this case: $$\large \displaystyle \left(x_n,y_n\right)=\left(x_{n-1}+h,~y_{n-1}~+h\cdot \color{red}{f'(x_{n-1})}~\right)$$ (since there is no y - it is explicit function)
51. idku
yes, we have to do it a bunch of time. On mathematica, there is a function that can do it instantly for many steps ahead right away, but I totally forgot how
52. idku
i guess you just need to do the dirty work (but do it correctly)
53. anonymous
just out of curiosity in what math class did you learn this method?
54. idku
I learned this method in calculus II, although it also belongs to DE.
55. idku
ok, can you proceed from (1,3) with h=1?
56. anonymous
not sure how to find f('n1) i think i got the rest though.
57. anonymous
woops
58. anonymous
damn , just had the entire thing written out and it just got deleted DX
59. anonymous
$(x _{1},y _{1}) = (x _{1} + 1 , y _{1} + 1 * (f \prime (x _{1})))$
60. idku
yhes, this is correct
61. idku
now, you know x_1 is 1 and y_1 is 3 also, h=1
62. anonymous
woops i mean x2 and y2 at the beginning
63. anonymous
just not sure how to find the f'(x1)
64. idku
yes....
65. idku
you know x_1 is 1 and y_1 is 3 so put that in....
66. idku
$(x_2,y_2)=(1+1,3+1\cdot f'(1))=(2,3+\sqrt[3]{1+(1)^2}~)=(?,?)$
67. anonymous
$(x _{1},y _{1}) = ( 2 , 4 * (f \prime (x _{1})))$
68. idku
(if you want we can do a polynomial approximation for cube root of 2, but it would be too much xD)
69. idku
you mean$(x_2,y_2)=(2,3+\color{red}{1\cdot}f'(1))$
70. idku
in red: the order of operations error
71. idku
and f'(1) is becase $$x_1$$=1 in our case
72. anonymous
woops yes i missed that
73. idku
yes, and f'(1)=?
74. anonymous
1.25992...
75. idku
just take 1.26
76. anonymous
so (2 , 4.26)
77. idku
$(x_2,y_2)=(2,4.26)$Correct
78. idku
Now, next step h=1, x_2 =2 y_2=4.26 go for it...
79. idku
(this is the last step, for this approximation with step size of h=1)
80. idku
i get (3, 6.935)
81. anonymous
one sec only got (3, 5.26 + ? ) so far
82. anonymous
okay yep got the same thing
83. anonymous
so now what?
84. idku
(3, $$\color{red}{4}$$.26 + 1•f'(4.26))
85. idku
4, not 5
86. anonymous
wow thats strange, then how did i get the same answer ?
87. idku
so, you do the same thing, you plug in the $$x_{n-1}$$ (in this case $$x_2$$which is equivalent to 4.26, into the f'(t))
88. anonymous
2.67526
89. idku
this is what i entered for $$x_3,y_3$$ http://www.wolframalpha.com/input/?i=%282%2B1+%2C+%284.26%2B1%5Ccdot+%281%2B%284.26%29%5E2%29%5E%281%2F3%29%29%29
91. idku
yeah, 4.26 is the y.
92. idku
(2+1 , (4.26+1\cdot (1+(2)^2)^(1/3)))
94. idku
(3, 5.9699) y(3)=5.9699
95. anonymous
interesting we must have done something wrong , none of the answer choices give that result
96. idku
i know it doesn't correspond to options, and you know why?
97. idku
We are going from y(0) to y(3) with a step size of h=1 the step size is too big! Take a smaller step-size to get a better approximation.
98. anonymous
okay so would it be C because it is the closest smaller answer ?
99. idku
it could be smaller or larger than the actual answer, you never know
100. idku
I knew it wouldn't suffice from the beginning, i did with h=1, so that it will be easy to understand. now, you just need to redo the dirty work, starting from $$\left(x_0=0,~y_0=2 \right)$$ but, you need to pick h=½ (or something like this, but h=1 is just too big)
101. jim_thompson5910
I'm not sure why you're using Euler's method when a numeric integration is much quicker (see attached for how to type it into the TI calculator). You'll find that $\Large \int_{0}^{3}\sqrt[3]{1+t^2}dt \approx 4.511532459 \approx 4.512$ which is the net change in position from t = 0 to t = 3. Add that to the initial position to figure out where this object ends up at t = 3 seconds. http://tibasicdev.wikidot.com/fnint
102. idku
I am the stupidest person in the world then I guess.... G-wis
103. jim_thompson5910
no not stupid since Euler's method does work here
104. idku
I am overloading it when it is just that simple, just that integral.... om(f)g excuse my lang
105. anonymous
lol well atleast i learned something new :D
106. idku
yes, Euler works, but... but it's too long.....
107. idku
it is just like driving from GA to North Dakota and then to Texas, instead of straight GA-Texas.
108. anonymous
oh well thanks for all the help guys !! :D:D:D:D:D:D:D
109. idku
you welcome, lol, i should literally "fire" myself.
110. idku
good luck
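Both routes discussed in the thread, Euler's method with a small step and direct numerical integration, can be checked side by side in plain Python (composite Simpson's rule stands in for the TI calculator's fnInt):

```python
# s'(t) = (1 + t^2)^(1/3), s(0) = 2; estimate s(3) two ways.

def f(t):
    return (1 + t * t) ** (1 / 3)

def euler(t0, y0, t_end, h):
    """Euler's method: y_n = y_{n-1} + h * f(t_{n-1})."""
    n = round((t_end - t0) / h)
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t)
        t += h
    return y

def simpson(a, b, n):
    """Composite Simpson's rule with an even number n of subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

integral = simpson(0, 3, 300)
print(2 + integral)           # about 6.51, i.e. 2 + 4.5115 as in reply 101
print(euler(0, 2, 3, 1))      # about 5.97, the coarse h = 1 value from the thread
print(euler(0, 2, 3, 0.001))  # a small step closes the gap
```

With h = 1 this reproduces the 5.9699 reached in the thread; shrinking h converges to the roughly 6.51 that the direct integral gives, which is why the h = 1 estimate missed the answer choices.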
https://openreview.net/forum?id=HkGmDsR9YQ
## Generalization and Regularization in DQN
Sep 27, 2018 ICLR 2019 Conference Blind Submission
• Abstract: Deep reinforcement learning (RL) algorithms have shown an impressive ability to learn complex control policies in high-dimensional environments. However, despite the ever-increasing performance on popular benchmarks like the Arcade Learning Environment (ALE), policies learned by deep RL algorithms can struggle to generalize when evaluated in remarkably similar environments. These results are unexpected given the fact that, in supervised learning, deep neural networks often learn robust features that generalize across tasks. In this paper, we study the generalization capabilities of DQN in order to aid in understanding this mismatch between generalization in deep RL and supervised learning methods. We provide evidence suggesting that DQN overspecializes to the domain it is trained on. We then comprehensively evaluate the impact of traditional methods of regularization from supervised learning, $\ell_2$ and dropout, and of reusing learned representations to improve the generalization capabilities of DQN. We perform this study using different game modes of Atari 2600 games, a recently introduced modification for the ALE which supports slight variations of the Atari 2600 games used for benchmarking in the field. Despite regularization being largely underutilized in deep RL, we show that it can, in fact, help DQN learn more general features. These features can then be reused and fine-tuned on similar tasks, considerably improving the sample efficiency of DQN.
• Keywords: generalization, reinforcement learning, dqn, regularization, transfer learning, multitask
• TL;DR: We study the generalization capabilities of DQN using the new modes and difficulties of Atari games. We show how regularization can improve DQN's ability to generalize across tasks, something it often fails to do.
http://physics.stackexchange.com/questions/70392/inverse-fourier-transform-of-k-space-image-what-is-the-object-space-scale/70807
# Inverse Fourier Transform Of K-space Image…what is the object space scale?
I've checked around a bunch and could not find any help. What I need help with: if I take the inverse FT of k-space data, what is the scaling of the resultant x-space (object space) image/data? That is, for every tick on the axis, how do I know the spatial length?
A more detailed explanation is in the image below.
-
Is this a homework problem? – tpg2114 Jul 9 '13 at 1:34
No, not at all. I am geniunely trying to understand this for a week now but cannot. I made the image in powerpoint because, at this point, I am desperate for help. – user1886681 Jul 9 '13 at 1:39
Did you read the link there to see how we define homework? It's not "assigned in a class" type of question per se. – tpg2114 Jul 9 '13 at 1:41
This is not for a class. Its for my overall understanding, but I guess I could tag as such. I'm not necessarily looking for some one to solve it, I dont think they can with the info I gave them...I just need to understand the scales. – user1886681 Jul 9 '13 at 1:45
Even without that ambiguity, "K-space" is not a universally understood physics term. Or rather it is universal - it always means the Fourier transform of something "real." What that real thing is depends on context. What exactly is going on here? Is this a CCD on a camera? Are there optics involved? Is this an X-ray diffraction question? Without context this is unanswerable. – Chris White Jul 10 '13 at 0:52
The units of your X-space are the inverse of the units of your K-space. So if your K-space is in $\mathrm{m}^{-1}$, then your X-space will be in $\mathrm{m}$.
To make the full circuit $f(x) \rightarrow F(k) \rightarrow f(x)$ requires an overall normalization factor of $1/2\pi$ to ensure that you get the function you started with. As Chris White points out in his comment, there are a few different conventions on where exactly to put this normalization factor. Some put it entirely on one of the transformations. Some conventions split it between the two transforms, and put $1/\sqrt{2\pi}$ on each integral; this has the advantage of making the Fourier transform and the inverse Fourier transform perfectly symmetrical with respect to $x$ and $k$.
In addition, some conventions for wavenumber define it as cycles per unit distance (so that $xk = 1$), while some define wavenumber as radians per unit distance (so that $xk = 2\pi$).
Ultimately, you might need to multiply the axes in your X space by $1, \sqrt{2\pi},$ or $2\pi$, depending on the set of conventions your software is using, and the convention you have used to express your $k$ values. You should already know the latter. For the former, you will have to check the documentation for the Fourier transform in your software.
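As a concrete sketch of this bookkeeping (pure Python; pick the convention that matches how your k values were defined):

```python
import math

def x_axis_spacing(n_samples, dk, convention="radians"):
    """Spacing of the object-space axis after an N-point inverse DFT of
    k-space data sampled at intervals dk.

    DFT reciprocity: the x-domain field of view is 2*pi/dk (radians per
    unit distance convention) or 1/dk (cycles per unit distance), and
    that span is divided into N samples, so dx * dk = 2*pi/N or 1/N.
    """
    full_span = 2 * math.pi if convention == "radians" else 1.0
    return full_span / (n_samples * dk)

# e.g. 256 k-samples spaced 0.1 rad/m apart: dx = 2*pi / 25.6, about 0.245 m
print(x_axis_spacing(256, 0.1))
print(x_axis_spacing(256, 0.1, convention="cycles"))  # 1/25.6, about 0.039 m
```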
-
Ok, so firstly, thanks so much for all of your help.
Secondly, I have written down the solution (attached PDF) that one of the guys in my group gave me. But to be honest I don't understand the very first relation (in step one).
I specifically don't understand how the width of the peak in pixels fits in. Any guidance?
-
what software are you using for the plots and presentations? – WetSavannaAnimal aka Rod Vance Oct 10 '13 at 23:53
http://forum.allaboutcircuits.com/threads/ac-circuit.126988/
# AC circuit
Discussion in 'Homework Help' started by Jony130, Aug 30, 2016.
1. ### Jony130 Thread Starter AAC Fanatic!
Feb 17, 2009
3,990
1,115
Hi, I have a hard time with this simple circuit from the book.
We know that VL = 3*VR and additional we know that the phase shift between VR and VC is 45°.
And our task is to find Vin in terms of VR.
I did this
$Vin = V_R + 3 V_R*e^{j90} + V_R*\sqrt{2}*e^{-j45}$
And my answer is Vin = 2*√2*VR ≈ 2.83*VR, but this is wrong because the answer given by the book is Vin = 2.86*VR.
So what is wrong ?
2. ### WBahn Moderator
Mar 31, 2012
18,085
4,917
Your formatting has problems, so I can't tell what your answer is for sure. If it is 2.83*Vr, then that is close enough to 2.86*Vr to likely be roundoff error (on your part, the author's part, or both).
3. ### The Electrician AAC Fanatic!
Oct 9, 2007
2,300
335
The voltage across the resistor R which is in parallel with C is not VR.
4. ### Jony130 Thread Starter AAC Fanatic!
Yes, but I assumed that Vc = √2*VR*e^(-j45°) (with VR across the upper resistor), which I guess is wrong. Maybe I should start with the assumption that Xc = R and XL = 3*R?
5. ### The Electrician AAC Fanatic!
When I did that I got a result close to the book value, but not exactly. Give it a try.
6. ### Jony130 Thread Starter AAC Fanatic!
For E = 10V and R = 1Ω and Xc = -j1Ω ; XL = j3Ω
I get
Ztot = R1 + XL1 + 1/(1/R2 + 1/Xc1) = 2.91548Ω
and
Itot = E/Ztot = 3.42997A therefore VR = 3.42997V and
10V/3.42997V = 2.91547739 ??
So how should I approach this problem?
7. ### The Electrician AAC Fanatic!
You must use complex arithmetic.
Ztot = R + j XL1 + ......
8. ### DGElder Member
My answer agreed with the book's value: 2.86
I got Vin = 2.859 * Vr /_53.3 deg. w/r to current
Because of the given 45 deg phase shift: Xc = R, which means the current in each is one half the source current.
Therefore the voltage across R||C is 1/2 Vr.
Vin/_ = Vr/_0 + 3Vr/_90 + 1/2*Vr/_-45
Vin /_ = Vr + j3Vr + 1/2*Vr * (2^0.5 - j*2^0.5)
9. ### Jony130 Thread Starter AAC Fanatic!
But in terms of a magnitude Vc is not equal to 0.5VR if we assume Xc = -j 1Ω ; XL = j 3Ω; R = 1Ω
10. ### DGElder Member
Why not?
Vr is the voltage across the upper resistor, the problem says nothing about the lower resistor voltage
But you can infer it since the voltage across the lower R has to be half the upper R because it sees half the current. So Xc which sees the other half of the current has to be half of Vr as well.
Your approach in #6 is wrong. You can't add the voltage magnitudes together when they are out of phase. You have to convert to complex numbers and add the real and imaginary parts separately.
Last edited: Aug 30, 2016
11. ### Jony130 Thread Starter AAC Fanatic!
In terms of magnitude I get VR_top = 3.43V and Vc = 2.43V, and for Vin = 10V I got Vin/VR_top = 10V/3.43V = 2.91
12. ### DGElder Member
Ztot is wrong.
You can't add magnitudes together that are out of phase - whether they are Z, I or V values.
What is the peak value of this resultant sine wave?
2sin(wt) + 2sin(wt+45deg) = ?
Last edited: Aug 30, 2016
13. ### Jony130 Thread Starter AAC Fanatic!
Why? Isn't Ztot equal to R1 + XL1 + 1/(1/R2 + 1/Xc1) = 1.5 + j2.5 = 2.91Ω ∠59°?
3.69V ??
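For reference, the disputed calculation takes only a few lines with Python's built-in complex numbers. This is a reader's check, not from the thread; component values are the ones assumed in post #6 (R = 1 Ω, XL = j3 Ω, Xc = -j1 Ω):

```python
import cmath
import math

# Component values assumed in post #6: R = 1 ohm, XL = j3 ohm, Xc = -j1 ohm.
R, XL, Xc = 1, 3j, -1j

Zpar = 1 / (1 / R + 1 / Xc)   # R in parallel with C: 0.5 - 0.5j ohm
Ztot = R + XL + Zpar          # 1.5 + 2.5j ohm

# The top resistor carries the full source current, so |Vin| / |VR_top| = |Ztot| / R.
print(abs(Ztot))              # ~2.9155, supporting post #6 over the book's 2.86

# Side check of post #12's question: the peak of 2sin(wt) + 2sin(wt + 45 deg)
# is |2 + 2e^(j45 deg)|, found by adding phasors, not magnitudes.
print(abs(2 + 2 * cmath.exp(1j * math.pi / 4)))   # ~3.696
```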
14. ### DGElder Member
Your approach is more involved, more steps, so maybe roundoff error explains it. Still I would expect closer results.
15. ### The Electrician AAC Fanatic!
I see a couple of errors here.
Jony130 likes this.
16. ### DGElder Member
Jony,
I made a mistake in my original assumption and was filled with confidence because the book apparently made the same assumption. The magnitude of the current into R and into C in the Xc||R combination will not be 1/2 of the magnitude of the source current, but rather (2^0.5)/2 times the source current. This is because the two currents are 90 deg out of phase with each other and must be added as such. You are correct.
17. ### DGElder Member
Yep, I made a mess of it.
18. ### The Electrician AAC Fanatic!
It just shows how the inclusion of reactive components in a circuit can add complication that can be quite non-intuitive. Even when one has a lot of experience with this sort of thing, it's possible to get thrown off course by a brain fart.
When I first read your assertion that for a 45 degree phase in the R||C circuit, Xc = R and therefore the current in each would be 1/2 the current in the upper R, I thought, OK. But, I had solved the problem another way and got jony130's result. So I had to think hard about it and realized why the 1/2 current assumption had to be false, even though it seemed ok at first. It seemed so right!
We EEs should all be thankful for C.P. Steinmetz's promulgation of the phasor method of solution for AC circuits, as opposed to the previous method of setting up and then solving differential equations for the circuit. Imagine how easy it would be to make a mistake then! Even with his method it can still be tricky.
Last edited: Aug 30, 2016
https://www.physicsforums.com/threads/deriving-the-rayleigh-jeans-limit-of-planck-law-of-radiation.772134/
# Deriving the rayleigh-jeans limit of planck law of radiation
1. Sep 21, 2014
### Jonsson
Hello there,
$$B(\nu) = \frac{2\,h\,\nu^3}{c^2(e^{h\nu/kT}-1)}$$
I want to show that for small frequencies the Rayleigh-Jeans law:
$$B(\nu) = \frac{2\nu^2kT}{c^2}$$
is correct.
I take the limit of the Planck law as $\nu \to 0$ using l'Hôpital's rule:
$$\lim_{\nu \to 0} \frac{2\,h}{c^2} \frac{\nu^3}{e^{h\nu/kT}-1} \stackrel{\text{l'H}}{=} \lim_{\nu \to 0} \frac{2\,h}{c^2} \frac{3\nu^2kT}{e^{h\nu/kT}h} = 0$$
I am off by a factor of 3. What is wrong with my maths?
Oh, I got it worked out. I just write $e^{h\nu/kT} \approx 1 + h\nu/kT$ and substitute this into the Planck law of radiation. (The l'Hôpital limit above is 0 simply because both laws vanish as $\nu \to 0$; the Taylor expansion compares their leading behaviour instead.)
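The low-frequency agreement can also be checked numerically. A quick sketch (the choice of 1 GHz and 300 K is mine, not from the thread):

```python
import math

h = 6.626e-34   # Planck constant, J*s
k = 1.381e-23   # Boltzmann constant, J/K
c = 2.998e8     # speed of light, m/s

def planck(nu, T):
    """Planck's law B(nu) for spectral radiance."""
    return 2 * h * nu**3 / (c**2 * (math.exp(h * nu / (k * T)) - 1))

def rayleigh_jeans(nu, T):
    """Rayleigh-Jeans low-frequency approximation."""
    return 2 * nu**2 * k * T / c**2

# At 1 GHz and 300 K, h*nu/kT is about 1.6e-4, deep in the low-frequency
# regime, so the two laws should agree to roughly one part in h*nu/2kT.
ratio = planck(1e9, 300) / rayleigh_jeans(1e9, 300)
print(ratio)   # ~0.9999
```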
https://www.alumni.dtu.dk/english/news_articles?approved=%7B2B593D53-6500-4F63-9050-6068B46FF724%7D&at=%7B559A7F23-40D3-470C-8825-436C20C291CF%7D.%7BB4DD98D4-914B-457D-90DC-7DC551935E0E%7D.%7BFDBAEFE2-1258-4550-AA8B-B810B393302E%7D%7C%7B559A7F23-40D3-470C-8825-436C20C291CF%7D.%7BB4DD98D4-914B-457D-90DC-7DC551935E0E%7D.%7B119DF34B-0ED6-4B58-B3CA-8960AFE7E22E%7D%7C%7B559A7F23-40D3-470C-8825-436C20C291CF%7D.%7BB4DD98D4-914B-457D-90DC-7DC551935E0E%7D.%7B487298F0-DC82-4BCC-8CB3-A8DA1065ED17%7D%7C%7B559A7F23-40D3-470C-8825-436C20C291CF%7D.%7B3F2A7F37-83AF-4BD8-A085-0BC04DA3EA92%7D.%7B18E4440F-E64C-4CF1-BDD9-860D6747C02D%7D.%7B0DE95AE4-41AB-4D01-9EB0-67441B7C2450%7D.%7B11111111-1111-1111-1111-111111111111%7D.%7BFB31C791-5702-456B-8531-5E96EE2C3B9E%7D%7C%7B559A7F23-40D3-470C-8825-436C20C291CF%7D.%7B3F2A7F37-83AF-4BD8-A085-0BC04DA3EA92%7D.%7B18E4440F-E64C-4CF1-BDD9-860D6747C02D%7D.%7B0DE95AE4-41AB-4D01-9EB0-67441B7C2450%7D.%7B11111111-1111-1111-1111-111111111111%7D.%7BB4DD98D4-914B-457D-90DC-7DC551935E0E%7D&descending=True&fr=1&lid=%7B2B593D53-6500-4F63-9050-6068B46FF724%7D&mr=100&qt=TaggedItemSearch&sorton=DisplayDate&td=17-09-2019
# News
2016
03 OCT
## DTU collaboration with China could save thousands of tonnes of carbon emissions
DTU Electrical Engineering has launched an ambitious collaboration project with China that will increase the proportion of green energy in the electricity grid and ensure...
11 MAR
## The cities can become fossil-free, if they think 'smart'
Cities are major energy consumers and thus also CO2 emitters. However, it is precisely the many different urban infrastructures and energy units that may be the key to...
https://scienceblog.com/31641/a-wrong-theory-in-physics-textbook-theory-of-entropy/
# A wrong theory in physics textbook–Theory of Entropy
When heat-engine efficiency is defined as n = W/W1 (that is, replacing Q1 in the original definition n = W/Q1 with W1), W is still the net work the heat engine delivers to the outside in one cycle, and W1 is the work it delivers to the outside during that cycle. Let the elementary reversible cycle be a Stirling cycle: if the circuit integral of dQ/T = 0 holds, we can prove that the circuit integrals of dW/T = 0 and dE/T = 0 hold as well!
If the circuit integrals dQ/T = 0, dW/T = 0 and dE/T = 0 really defined new system state variables, the three state variables would necessarily differ from one another; on the other hand, their dimensions are all the same (J/K), and all three would be state variables. So we would have to "make" three different system state variables with the same dimensions, without knowing what they are; no doubt, this is absurd.
In fact, replacing delta Q with dQ is taken for granted. If we review the definition of the differential, we see that its prerequisite is a differentiable function y = f(x); however, there is no function Q = f(T) here at all, so delta Q cannot become dQ.
On the other hand, as delta Q tends toward 0, lim(deltaQ/T) = 0, not lim(deltaQ/T) = dQ/T.
So the circuit integrals dQ/T = 0, dW/T = 0 and dE/T = 0 are all untenable!
Read paper Entropy : A concept that is not physical quantity
http://blog.51xuewen.com/zhangsf/article_27631.htm
(shufeng-zhang China Guangzhou Email: [email protected])
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-molecular-science-5th-edition/chapter-2-chemical-compounds-problem-solving-practice-2-10-page-63/e
## Chemistry: The Molecular Science (5th Edition)
Published by Cengage Learning
# Chapter 2 - Chemical Compounds - Problem Solving Practice 2.10 - Page 63: e
#### Answer
$CrCl_{3}$
#### Work Step by Step
Chromium(III) ion ($Cr^{3+}$) reacts with chloride ion ($Cl^{-}$) to form chromium(III) chloride. Note: when writing the compound, make sure the charges of the two ions cancel so the compound is neutral; here $1(+3) + 3(-1) = 0$, giving $CrCl_{3}$.
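The charge-balancing rule can be mechanized with the least common multiple of the two charges. A small sketch (my own helper, not from the textbook):

```python
from math import lcm

def subscripts(cation_charge, anion_charge):
    """Smallest whole-number subscripts that make the ionic compound neutral."""
    m = lcm(cation_charge, abs(anion_charge))
    return m // cation_charge, m // abs(anion_charge)

print(subscripts(3, -1))   # Cr(3+) with Cl(-): (1, 3), i.e. CrCl3
print(subscripts(3, -2))   # Al(3+) with O(2-): (2, 3), i.e. Al2O3
```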
https://socratic.org/questions/how-do-you-simplify-2t-4-3-6-5t-8
# How do you simplify (2t+4)3+6(-5t)-(-8)?
Sep 18, 2016
$- 24 t + 20$

#### Explanation:

Remove the brackets using the distributive law. Be careful of the signs: $6\left(-5t\right) = -30t$, a $t$ term, not a constant.

$\left(2 t + 4\right) 3 + 6 \left(- 5 t\right) - \left(- 8\right) \text{ } \leftarrow$ there are 3 terms

=$6 t + 12 - 30 t + 8$

=$6 t - 30 t + 12 + 8 \text{ } \leftarrow$ re-arrange for easier calculation

=$- 24 t + 20$
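Since $6(-5t) = -30t$, the fully simplified form is $-24t+20$; a quick numerical spot-check of that claim (my own, not from the original answer):

```python
# Evaluate the original expression and the simplified form at several points;
# two degree-1 polynomials that agree at two or more points are identical.
def original(t):
    return (2 * t + 4) * 3 + 6 * (-5 * t) - (-8)

def simplified(t):
    return -24 * t + 20

assert all(original(t) == simplified(t) for t in (-2, 0, 1, 7))
print("simplification checked")
```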
https://hal.archives-ouvertes.fr/hal-00807258v3
# Estimation for stochastic differential equations with mixed effects.
2 M.I.A., I.N.R.A.
LPMA - Laboratoire de Probabilités et Modèles Aléatoires, MIA - Unité de recherche Mathématiques et Informatique Appliquées
Abstract : We consider the long term behaviour of a one-dimensional mixed effects diffusion process $(X(t))$ with a multivariate random effect $\phi$ in the drift coefficient. We first study the estimation of the random variable $\phi$ based on the observation of one sample path on the time interval $[0,T]$ as $T$ tends to infinity. The process $(X(t))$ is not Markov and we characterize its invariant distributions. We build moment and maximum likelihood-type estimators of the random variable $\phi$ which are consistent and asymptotically mixed normal with rate $\sqrt{T}$. Moreover, we obtain non-asymptotic bounds for the moments of these estimators. Examples with a bivariate random effect are detailed. Afterwards, the estimation of parameters in the distribution of the random effect from $N$ i.i.d. processes $(X_j(t), t\in [0, T]), j=1,\ldots,N$ is investigated. Estimators are built and studied as both $N$ and $T=T(N)$ tend to infinity. We prove that the convergence rate of estimators differs when deterministic components are present in the random effects. For true random effects, the rate of convergence is $\sqrt{N}$ whereas for deterministic components, the rate is $\sqrt{NT}$. Illustrative examples are given.
Document type :
Journal articles
Domain :
Cited literature [22 references]
https://hal.archives-ouvertes.fr/hal-00807258
• HAL Id : hal-00807258, version 3
### Citation
Valentine Genon-Catalot, Catherine Larédo. Estimation for stochastic differential equations with mixed effects.. Statistics, Taylor & Francis: STM, Behavioural Science and Public Health Titles, 2016, 50 (5), pp.1014-1035. ⟨hal-00807258v3⟩
http://cnx.org/content/m34795/latest/?collection=col11321/latest
Connexions
Inside Collection (Textbook):
Textbook by: Ron Stewart
Whole Numbers
Module by: Denny Burzynski, Wade Ellis. Edited by: Math Editors
Summary: This module is from Fundamentals of Mathematics by Denny Burzynski and Wade Ellis, Jr. It discusses many aspects of whole numbers, including the Hindu-Arabic numeration system, the base ten positional number system, and the graphing of whole numbers. By the end of this module students should be able to: know the difference between numbers and numerals, know why our number system is called the Hindu-Arabic numeration system, understand the base ten positional number system, and identify and graph whole numbers.
Section Overview
• Numbers and Numerals
• The Hindu-Arabic Numeration System
• The Base Ten Positional Number System
• Whole Numbers
• Graphing Whole Numbers
Numbers and Numerals
We begin our study of introductory mathematics by examining its most basic building block, the number.
Number
A number is a concept. It exists only in the mind.
The earliest concept of a number was a thought that allowed people to mentally picture the size of some collection of objects. To write down the number being conceptualized, a numeral is used.
Numeral
A numeral is a symbol that represents a number.
In common usage today we do not distinguish between a number and a numeral. In our study of introductory mathematics, we will follow this common usage.
Sample Set A
The following are numerals. In each case, the first represents the number four, the second represents the number one hundred twenty-three, and the third, the number one thousand five. These numbers are represented in different ways.
• Hindu-Arabic numerals
4, 123, 1005
• Roman numerals
IV, CXXIII, MV
• Egyptian numerals
Practice Set A
Exercise 1
Do the phrases "four," "one hundred twenty-three," and "one thousand five" qualify as numerals? Yes or no?
Solution
Yes. Letters are symbols. Taken as a collection (a written word), they represent a number.
The Hindu-Arabic Numeration System
Hindu-Arabic Numeration System
Our society uses the Hindu-Arabic numeration system. This system of numeration began shortly before the third century when the Hindus invented the numerals
0 1 2 3 4 5 6 7 8 9
Leonardo Fibonacci
About a thousand years later, in the thirteenth century, a mathematician named Leonardo Fibonacci of Pisa introduced the system into Europe. It was then popularized by the Arabs. Thus, the name, Hindu-Arabic numeration system.
The Base Ten Positional Number System
Digits
The Hindu-Arabic numerals 0 1 2 3 4 5 6 7 8 9 are called digits. We can form any number in the number system by selecting one or more digits and placing them in certain positions. Each position has a particular value. The Hindu mathematician who devised the system about A.D. 500 stated that "from place to place each is ten times the preceding."
Base Ten Positional Systems
It is for this reason that our number system is called a positional number system with base ten.
Commas
When numbers are composed of more than three digits, commas are sometimes used to separate the digits into groups of three.
Periods
These groups of three are called periods and they greatly simplify reading numbers.
In the Hindu-Arabic numeration system, a period has a value assigned to each of its three positions, and the values are the same for each period. The position values are
Thus, each period contains a position for the values of one, ten, and hundred. Notice that, in looking from right to left, the value of each position is ten times the preceding. Each period has a particular name.
As we continue from right to left, there are more periods. The five periods listed above are the most common, and in our study of introductory mathematics, they are sufficient.
The following diagram illustrates our positional number system to trillions. (There are, to be sure, other periods.)
In our positional number system, the value of a digit is determined by its position in the number.
Sample Set B
Example 1
Find the value of 6 in the number 7,261.
Since 6 is in the tens position of the units period, its value is 6 tens.
6 tens = 60
Example 2
Find the value of 9 in the number 86,932,106,005.
Since 9 is in the hundreds position of the millions period, its value is 9 hundred millions.
9 hundred millions = 9 hundred million
Example 3
Find the value of 2 in the number 102,001.
Since 2 is in the ones position of the thousands period, its value is 2 one thousands.
2 one thousands = 2 thousand
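The place-value rule ("from place to place each is ten times the preceding") can be expressed mechanically. A short Python sketch, not part of the original module:

```python
def place_values(n):
    """Value contributed by each digit of n, least significant place first."""
    values, place = [], 1
    while n > 0:
        values.append((n % 10) * place)   # digit times its place value
        n //= 10
        place *= 10                       # each place is ten times the preceding
    return values

# 7,261: the 6 sits in the tens place, so it contributes 60.
print(place_values(7261))   # [1, 60, 200, 7000]
```

Summing the list recovers the original number, which is exactly what the positional system means.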
Practice Set B
Exercise 2
Find the value of 5 in the number 65,000.
five thousand
Exercise 3
Find the value of 4 in the number 439,997,007,010.
Solution
four hundred billion
Exercise 4
Find the value of 0 in the number 108.
Solution
zero tens, or zero
Whole Numbers
Whole Numbers
Numbers that are formed using only the digits
0 1 2 3 4 5 6 7 8 9
are called whole numbers. They are
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, …
The three dots at the end mean "and so on in this same pattern."
Graphing Whole Numbers
Number Line
Whole numbers may be visualized by constructing a number line. To construct a number line, we simply draw a straight line and choose any point on the line and label it 0.
Origin
This point is called the origin. We then choose some convenient length, and moving to the right, mark off consecutive intervals (parts) along the line starting at 0. We label each new interval endpoint with the next whole number.
Graphing
We can visually display a whole number by drawing a closed circle at the point labeled with that whole number. Another phrase for visually displaying a whole number is graphing the whole number. The word graph means to "visually display."
Sample Set C
Example 4
Graph the following whole numbers: 3, 5, 9.
Example 5
Specify the whole numbers that are graphed on the following number line. The break in the number line indicates that we are aware of the whole numbers between 0 and 106, and 107 and 872, but we are not listing them due to space limitations.
The numbers that have been graphed are
0, 106, 873, 874
Practice Set C
Exercise 5
Graph the following whole numbers: 46, 47, 48, 325, 327.
Exercise 6
Specify the whole numbers that are graphed on the following number line.
Solution
4, 5, 6, 113, 978
A line is composed of an endless number of points. Notice that we have labeled only some of them. As we proceed, we will discover new types of numbers and determine their location on the number line.
Exercises
Exercise 7
What is a number?
concept
Exercise 8
What is a numeral?
Exercise 9
Does the word "eleven" qualify as a numeral?
Solution
Yes, since it is a symbol that represents a number.
Exercise 10
How many different digits are there?
Exercise 11
Our number system, the Hindu-Arabic number system, is a
number system with base
.
positional; 10
Exercise 12
Numbers composed of more than three digits are sometimes separated into groups of three by commas. These groups of three are called
.
Exercise 13
In our number system, each period has three values assigned to it. These values are the same for each period. From right to left, what are they?
Solution
ones, tens, hundreds
Exercise 14
Each period has its own particular name. From right to left, what are the names of the first four?
Exercise 15
In the number 841, how many tens are there?
4
Exercise 16
In the number 3,392, how many ones are there?
Exercise 17
In the number 10,046, how many thousands are there?
0
Exercise 18
In the number 779,844,205, how many ten millions are there?
Exercise 19
In the number 65,021, how many hundred thousands are there?
Solution
0
For following problems, give the value of the indicated digit in the given number.
Exercise 20
5 in 599
Exercise 21
1 in 310,406
Solution
ten thousand
Exercise 22
9 in 29,827
Exercise 23
6 in 52,561,001,100
Solution
6 ten millions = 60 million
Exercise 24
Write a two-digit number that has an eight in the tens position.
Exercise 25
Write a four-digit number that has a one in the thousands position and a zero in the ones position.
Exercise 26
How many two-digit whole numbers are there?
Exercise 27
How many three-digit whole numbers are there?
900
Exercise 28
How many four-digit whole numbers are there?
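Exercises 26-28 can be checked by brute force: the k-digit whole numbers run from 10^(k-1) through 10^k - 1, so there are 9·10^(k-1) of them. A tiny sketch (my own helper, not from the module):

```python
def count_k_digit(k):
    """Count the k-digit whole numbers by listing them."""
    return sum(1 for n in range(10**(k - 1), 10**k))

print(count_k_digit(2), count_k_digit(3), count_k_digit(4))   # 90 900 9000
```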
Exercise 29
Is there a smallest whole number? If so, what is it?
yes; zero
Exercise 30
Is there a largest whole number? If so, what is it?
Exercise 31
Another term for "visually displaying" is
.
graphing
Exercise 32
The whole numbers can be visually displayed on a
.
Exercise 33
Graph (visually display) the following whole numbers on the number line below: 0, 1, 31, 34.
Exercise 34
Construct a number line in the space provided below and graph (visually display) the following whole numbers: 84, 85, 901, 1006, 1007.
Exercise 35
Specify, if any, the whole numbers that are graphed on the following number line.
61, 99, 100, 102
Exercise 36
Specify, if any, the whole numbers that are graphed on the following number line.
https://leeconjecture.wordpress.com/category/gallery/
# Cyclic Codes of n < 26 and Comparison
The attached links are the group project paper and appendix from the Cryptography course offered by Tommy Occhipinti at Carleton, Spring 2015: Crypt, Appendix.
The paper covers generating cyclic codes of length $n < 26$, comparing their parameters, and sorting out the better ones.
# Character Table Project
The attached link is a group project from the Representation Theory class offered by Eric Egge at Carleton, Spring 2015: Character Table.
The paper contains character tables of $A_4$, the rotation group of a cube, and $A_5$.
# Introduction to Mandelbrot Set
The attached link is a group project from a Chaotic Dynamics independent study in Spring 2015: Mandelbrot. The study and the project were supervised by Rafe Jones.
It introduces the filled-in Julia set, the Mandelbrot set, and their properties. Plus, it covers graphical analysis of the Mandelbrot set as well.
# Kummer’s Lemma and Cyclotomic Units
The attached link is a group project undertaken in the Analytic Number Theory class at Carleton College: Kummer. The course was offered by Rafe Jones in Winter 2014.
It proves Kummer's Lemma, that is, "Every unit of $\mathbb{Z}[\zeta_p]$ is of the form $r\zeta_p^g$ where $r \in \mathbb{R}$ and $g \in \mathbb{Z}$." Plus, it introduces cyclotomic units and presents an example.
# Introduction to Hausdorff Measures and Fractals
The attached link is the Senior Integrative Project at Carleton College that my group and I completed over the last term: COMPSFINAL. The project was carried out over Fall 2014 and Winter 2015 under the supervision of Allison Tanguay.
The main purpose of the paper is to introduce the idea of Lebesgue measure and $\alpha$-dimensional Hausdorff measure, fractals and their behaviors, non-fractal sets, and a related conjecture.
Lee’s Challenge #1:
Consider an arbitrary non-negative real number $\beta$. Can we always construct a set whose strict Hausdorff dimension is $\beta$?
https://stats.stackexchange.com/questions/518443/best-statistic-to-compare-disease-appearance-in-two-distinct-groups
# Best statistic to compare disease appearance in two distinct groups
I have two distinct groups with different numbers of subjects (111 million and 126 million). My goal is to evaluate how many subjects in each group encounter 10 different diseases. For this purpose, I built a table (reporting here only 4 of the 10 diseases) as follows:
Disease Group A Group B
A 23 M 19 M
B 45 M 18 M
C 19 M 18 M
D 21 M 20 M
In this case there are no means involved: I'm simply counting the number of occurrences (frequency) within each group, for each disease. Is there a way to check whether the difference between the two groups is statistically significant for each disease? I would proceed with a chi-squared test, isolating each disease, building a contingency table as follows, and then running the test.
Disease A Group A Group B Sum
Infected 23 M 19 M 42 M
Not-Infected 88 M 107 M 195 M
Sum 111 M 126 M
Is, in this case the Chi-squared test, the most appropriate test?
With such large samples, do you think results of a statistical test adds much value?
Formally, you could use prop.test in R (or, essentially equivalently, a chi-squared test on a 2-by-2 table).
prop.test(c(23*10^6,19*10^6), c(111*10^6,126*10^6), cor=F)
2-sample test for equality of proportions
without continuity correction
data: c(23 * 10^6, 19 * 10^6) out of c(111 * 10^6, 126 * 10^6)
X-squared = 1288000, df = 1, p-value < 2.2e-16
alternative hypothesis: two.sided
95 percent confidence interval:
0.05631563 0.05651148
sample estimates:
prop 1 prop 2
0.2072072 0.1507937
With millions of subjects is there any doubt that proportions $$0.2072$$ and $$0.1508$$ differ? Or does a P-value (predictably) extremely near $$0$$ somehow seem impressive--or tick someone's supposedly mandatory box?
Note: Output for chisq.test in R follows:
TBL = 10^6*matrix(c(23,19,88,107), byrow=T, nrow=2)
TBL
[,1] [,2]
[1,] 2.3e+07 1.90e+07
[2,] 8.8e+07 1.07e+08
chisq.test(TBL, cor=F)
Pearson's Chi-squared test
data: TBL
X-squared = 1288000, df = 1, p-value < 2.2e-16
• Thanks for your answer. I was only wondering whether other comparisons were possible besides the simple Chi-squared Apr 7 at 14:13
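As an editorial cross-check (not part of the original answer): for a 2×2 table, Pearson's statistic has the closed form N(ad − bc)² / ((a+b)(c+d)(a+c)(b+d)), so the R output above can be reproduced without any statistics package:

```python
# Pearson chi-squared statistic for a 2x2 contingency table,
# via the closed form X^2 = N*(a*d - b*c)^2 / ((a+b)*(c+d)*(a+c)*(b+d)).
def chi2_2x2(a, b, c, d):
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Infected / not-infected counts per group, in subjects (millions scaled up).
M = 10**6
x2 = chi2_2x2(23 * M, 19 * M, 88 * M, 107 * M)
print(round(x2))  # about 1,288,000, matching the X-squared value R reports
```

Note how the statistic scales linearly with the total sample size N, which is exactly why a P-value near 0 is a foregone conclusion with hundreds of millions of subjects.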
https://stats.stackexchange.com/questions/256751/covariance-between-two-random-variables
# Covariance between two random variables
So I have two random variables, X and Y. And their scale location transformations are :
U = a*X + b
W = c*Y + d
I'm given that a,b,c and d are constants. And their mean folow:
E(U) = a*E(X)+b
E(W) = c*E(Y)+d
Now I'm asked to prove that their covariance folows:
V(U,W) = acV(X,Y)
and that
V(U+W) = a^2*V(X) + c^2*V(Y) + 2acV(X,Y)
And I can't for the life of me figure out how.
• Maybe start with ${\rm cov}(X,Y) = E(XY) - E(X)E(Y)$ and see where that gets you.... Jan 17 '17 at 17:43
• I tried, but the only answer I found is that E(XY) = E(X)*E(Y) which results in 0. Jan 17 '17 at 17:55
• Try again. Compute $E(UW) - E(U)E(W)$ after substituting in $U = aX + b$ and $W = cY + d$ and you'll get something in terms of $a,b,c,d$ and ${\rm cov}(X,Y)$. Jan 17 '17 at 17:57
• I'm very sorry if I sound stupid, but could you write that out for me? Because this is pretty much where I get stuck I think. Jan 17 '17 at 18:00
For the second part (assuming you can't use the formula for the variance of a sum) do the exact same thing while remembering that ${\rm var}(Z) = {\rm cov}(Z,Z)$ for any random variable.
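For completeness (an editorial sketch, not part of the thread), carrying out the substitution the comments point to, with cov standing for what the question writes as V(·,·), gives:

```latex
\begin{align*}
\operatorname{cov}(U,W) &= E(UW) - E(U)E(W) \\
&= E[(aX+b)(cY+d)] - \bigl(aE(X)+b\bigr)\bigl(cE(Y)+d\bigr) \\
&= acE(XY) + adE(X) + bcE(Y) + bd \\
&\quad - \bigl(acE(X)E(Y) + adE(X) + bcE(Y) + bd\bigr) \\
&= ac\,\bigl(E(XY) - E(X)E(Y)\bigr) = ac\,\operatorname{cov}(X,Y).
\end{align*}
```

The second identity then follows from $\operatorname{var}(U+W) = \operatorname{cov}(U+W,\,U+W)$ and expanding by bilinearity.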
http://mymathforum.com/applied-math/336927-system-particles.html
My Math Forum system of particles
Applied Math Applied Math Forum
October 29th, 2016, 02:23 AM #1 Member Joined: May 2016 From: Ireland Posts: 96 Thanks: 1 system of particles Two particles of masses 3 kg and 5 kg are connected by a light inextensible string, of length 4 m, passing over a light smooth peg of negligible radius. The 5 kg mass rests on a smooth horizontal table. The peg is 2.5 m directly above the 5 kg mass. The 3 kg mass is held next to the peg and is allowed to fall vertically a distance 1.5 m before the string becomes taut. Does anyone know what this question looks like? I can't even picture it.
October 29th, 2016, 07:18 AM #2
Math Team
Joined: Jul 2011
From: Texas
Posts: 2,761
Thanks: 1416
Quote:
Originally Posted by markosheehan Two particles of masses 3 kg and 5 kg are connected by a light inextensible string, of length 4 m, passing over a light smooth peg of negligible radius. The 5 kg mass rests on a smooth horizontal table. The peg is 2.5 m directly above the 5 kg mass. The 3 kg mass is held next to the peg and is allowed to fall vertically a distance 1.5 m before the string becomes taut. Does anyone know what this question looks like? I can't even picture it.
sketch attached
Attached Images
masses_peg.jpg (25.6 KB, 6 views)
October 30th, 2016, 12:51 AM #3 Member Joined: May 2016 From: Ireland Posts: 96 Thanks: 1 Show that when the string becomes taut the speed of each particle is 3√(3g)/8 m/s. I tried working this out by looking at the 3 kg weight and using the equation v^2 = u^2 + 2as with u = 0, a = g, s = 1.5; however, working this out gives me an answer of √(3g), which is not the desired answer.
October 30th, 2016, 06:25 AM #4
Math Team
Joined: Jul 2011
From: Texas
Posts: 2,761
Thanks: 1416
Quote:
Show that when the string becomes taut the speed of each particle is 3√(3g)/8 m/s
conservation of momentum ...
final velocity of the 5 kg mass = $v_f$
initial velocity of the 5 kg mass = $0$
final velocity of the 3 kg mass = $-v_f$ because its motion is opposite in direction to the 5 kg mass
initial velocity of the 3 kg mass = $-\sqrt{3g}$
$Mv_f = m(-v_f - v_0)$
solving for $v_f$ ...
$v_f = \dfrac{-mv_0}{M+m} = \dfrac{3\sqrt{3g}}{8}$
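An editorial numerical sanity check of the two stages (free fall, then the momentum-conserving jerk); taking g = 9.8 m/s² is an assumption here, since the thread keeps g symbolic:

```python
import math

g = 9.8            # m/s^2 (assumed value; the thread works symbolically in g)
m, M = 3.0, 5.0    # masses in kg
s = 1.5            # free-fall distance before the string goes taut, in m

# Stage 1: free fall of the 3 kg mass, v^2 = u^2 + 2*a*s with u = 0.
v0 = math.sqrt(2 * g * s)   # equals sqrt(3g)

# Stage 2: the jerk conserves momentum: (M + m)*vf = m*v0.
vf = m * v0 / (M + m)       # equals 3*sqrt(3g)/8

print(vf, 3 * math.sqrt(3 * g) / 8)  # the two values agree
```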
https://www.mathnet.or.kr/thesis_list/view/306369/JDEQAK/102/2/2561
The MathNet Korea
Information Center for Mathematical Science
Journal of Differential Equations
(Vol. 102, No. 2, 1993)
Regularity of Solutions for a Two-Phase Degenerate Stefan Problem
Pages 402-418
Abstract: For a one-dimensional two-phase degenerate Stefan problem, we prove that the free boundary is $C^\infty$ smooth and the solutions are $C^\infty$ smooth up to the boundary. The proof is based on performing the hodograph transformation to fix the free boundary and establishing a nonlinear a priori estimate for the solution.
Contents: 1. Introduction. 2. Transformed problem (2.1. Preparatory propositions; 2.2. Hodograph transformation). 3. A priori estimate for nonlinear problem (3.1. A priori estimate; 3.2. Truncated estimate). 4. Existence of $C^\infty$ solutions.
http://ibelitetutor.com/category/ib-mathematics-tutors/
Calculus (Part-2)-geometric explanation of differentiation
Online maths tutors like the following concept very much
Geometric explanation of differentiation: the derivative of a function f(x) at x = x0 is equal to the slope of the tangent to the graph of f(x) at the point (x0, f(x0)).
But what is a tangent line?
It is not merely a line that meets the graph of the function at one point.
It is actually the limit of the secant lines joining P = (x0, f(x0)) and a nearby point Q on the graph of f(x) as Q moves closer and closer to P.
The tangent line touches the graph of the function at the point (x0, f(x0)), and its slope matches the direction of the graph at that point; the tangent line is the straight line that best approximates the graph there.
Since we are given the graph of the function, we can draw a tangent to this graph easily. Still, we would like to make calculations involving the tangent line, and so we need a computational method to determine it.
We can easily calculate the equation of the tangent line by using the point-slope form of a line. If the slope of a line is m and it passes through a point (x0, y0), then its equation will be
y − y0 = m(x − x0)
So now we have the formula for the equation of the tangent line. It's clear that to get an actual equation for the tangent line, we should know the exact coordinates of point P. If we have the value of x0, we calculate y0 as
y0 = f(x0)
The second thing we must know is the slope of the line,
m = f’(x0),
which we call the derivative of the given function f(x) at x0.
Definition:
The derivative f’(x0) of a given function f at x0 is equal to the slope of the tangent line to
y = f(x) at the point P = (x0, f(x0)).
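This limit-of-secants picture can be checked numerically: the slope of the secant through (x0, f(x0)) and a nearby point approaches f’(x0) as the second point moves toward P. An editorial sketch (the function x² is just an illustration, not from the post):

```python
def secant_slope(f, x0, h):
    """Slope of the secant line through (x0, f(x0)) and (x0 + h, f(x0 + h))."""
    return (f(x0 + h) - f(x0)) / h

f = lambda x: x ** 2   # illustrative function; its derivative is f'(x) = 2x

# As h shrinks, the secant slope approaches the tangent slope f'(2) = 4.
for h in (1.0, 0.1, 0.001, 1e-6):
    print(h, secant_slope(f, 2.0, h))  # values approach 4
```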
Differentiation Using Formulas- We can use derivatives of different types of functions to solve our problems :
D(sec x) = sec x · tan x, D(cosec x) = −cosec x · cot x, D(constant) = 0, where D = d/dx.
These formulas are results of differentiation by first principles.
Inverse Functions And Their Derivatives :
Theorems On Derivatives: If u and v are derivable functions of x, then,
(i) d(u ± v)/dx = du/dx ± dv/dx (ii) d(Ku)/dx = K·(du/dx), where K is any constant
(iii) d(uv)/dx = u·(dv/dx) + v·(du/dx), known as the “Product Rule” (iv) d(u/v)/dx = [v·(du/dx) − u·(dv/dx)]/v²,
known as the “Quotient Rule”
(v) If y = f(u) & u = g(x), then dy/dx = (dy/du)·(du/dx), the “Chain Rule”
Logarithmic Differentiation: To find the derivative of: (i) a function which is the product or quotient of a number of functions, OR
(ii) a function of the form [f(x)]^g(x), where f & g are both differentiable, it will be found convenient to take the logarithm of the function first and then differentiate. This is called Logarithmic Differentiation.
Implicit Differentiation: (i) In order to find dy/dx, in the case of implicit functions, we differentiate each term w.r.t. x regarding y as a function of x & then collect terms in dy/dx together on one side to finally find dy/dx.
(ii) In answers of dy/dx in the case of implicit functions, both x & y are present.
Parametric Differentiation: If y = f(q) & x = g(q), where q is a parameter, then dy/dx = (dy/dq)/(dx/dq).
Derivative Of A Function w.r.t. Another Function: Let y = f(x) & z = g(x); then dy/dz = (dy/dx)/(dz/dx) = f’(x)/g’(x).
Derivatives Of Order Two & Three: Let a function y = f(x) be defined on an open interval (a, b). Its derivative, if it exists on (a, b), is a certain function f’(x) [or dy/dx, or y’] and is called the first derivative of y w.r.t. x. If the first derivative in turn has a derivative on (a, b), that derivative is called the second derivative of y w.r.t. x and is denoted by f”(x), d²y/dx², or y”. Similarly, the third-order derivative of y w.r.t. x, if it exists, is denoted by f”'(x) or y”’.
All online maths tutors suggest solving a fair number of questions based on these concepts.
Example: Find the tangent line to the following function at z = 3.
Solution
We can find the derivative of the given function using basic differentiation as discussed in the previous post
We are already given that z=3 so
Equation of tangent line is
y − y0 = m(x − x0), where y0 = R(3) = √7
Putting in these values, we get the equation of the tangent line
In my online maths tutors series, I will discuss Application of Derivatives
Example: Differentiate the following function.
Ans: We can apply the quotient rule in this question.
IB Mathematics Tutors-Part 1 (Calculus)
In my IB Mathematics Tutors series, I will explain different topics taught at HL and SL levels of IB Mathematics. Calculus is the first one.
If we want to understand the importance of Calculus in the IB Diploma Programme, we should have a look at the number of teaching hours recommended for it: 40 hours in SL (out of 150 total hours) and 48 hours in HL (out of 240 total hours). This makes calculus the most important topic for IB Mathematics tutors as well as for IB students.
What is Calculus: Calculus is an ancient Latin word meaning ‘small stones’ used for counting. In every branch of mathematics, we study something specific: in geometry, we study shapes; in algebra, arithmetic operations; in coordinate geometry, locating a point. In calculus, we make a mathematical study of continuous change. It mainly has two branches- Read more
https://mathzsolution.com/category/computational-complexity/
## Weighted Median Filtering
Let’s begin with a little review of unweighted median filtering. Suppose I have a list of N real-valued numbers, x=x1,…,xN. Let mi be the median of K consecutive values: mi = median(xi,…,x(i+K−1)). Let m=(m1,…,m(N−K+1)). The act of transforming x to m is called (unweighted) median filtering. We usually imagine N≫K, and frequently we assume that K … Read more
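As an editorial illustration of the unweighted case described above, a direct sliding-window implementation (taking a window of K samples, so the output has N − K + 1 entries):

```python
from statistics import median

def median_filter(x, K):
    """Unweighted median filter: m[i] = median of the K samples x[i..i+K-1]."""
    return [median(x[i:i + K]) for i in range(len(x) - K + 1)]

# With K = 3 the windows are [1,5,2], [5,2,8], [2,8,3].
print(median_filter([1, 5, 2, 8, 3], 3))  # [2, 5, 3]
```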
## Effective “almost enumeration” of monotone boolean functions
Denote by M(n) the set of all monotone functions {0,1}n→{0,1}. Can M(n) be represented as M(n)={f(t)|t∈{0,1}k} such that: 1) k=log|M(n)|+O(n) 2) f∈DTIME(2O(n)) ? (I mean that f(i) is a truth table of a boolean function)
## Fast matrix-vector product for structured matrices
Let $X\in\mathbb{C}^{m\times n}$ be a matrix that satisfies the Sylvester equation $$AX-XB = F,\qquad A\in\mathbb{C}^{m\times m}, \quad B\in\mathbb{C}^{n\times n},$$ where $F\in\mathbb{C}^{m\times n}$ is of low rank. In special cases as shown in http://math.mit.edu/~plamen/files/mds.pdf there are known algorithms for a fast matrix-vector product, i.e., computing $X\underline{v}$ in quasilinear complexity. These special cases include Toeplitz-like, Hankel-like, Cauchy-like, … Read more
## Collapsing the Intuitionistic Bounded Arithmetics Hierarchy
Let iT be the intuitionistic first order theory with non-logical axioms of classical first order theory T. Theorem1. If Ti2⊢T2, then Ti2 proves that the polynomial time hierarchy collapses to Σpi+3. proof. See Relating the Bounded Arithmetic and Polynomial-Time Hierarchies. Corollary. There exists i∈N such that Ti2⊢T2 iff there exists j∈N such that iTj2⊢iT2. proof. … Read more
## Complexity of counting colorings of co-bipartite graphs?
A graph is co-bipartite if it is the complement of a bipartite graph. What is the complexity of counting colorings of co-bipartite graphs? Unlike split graphs, the chromatic polynomial isn't of particularly simple form. Co-bipartite graphs are also known as (2,0)-colorable.
## Efficient algorithm to construct path augmented graphs with smallest diameter?
I am interested in special graph constructions that have the smallest diameter. We have a path graph $P_n$ ($N$ is even). We add new set of edges $C$ between path nodes such that set $C$ forms a perfect matching. The output graph is the union of path $P_n$ and perfect matching $C$ on the path … Read more
## Complexity of extending $P_4$-partition of cubic graphs
Surprising phenomena occurs when we want to extend a partial solution of some easy problems. We are given part of the solution and we want to decide whether we can extend it to a complete solution. Extendability problem transforms an easy problem to hard one. For instance, Konig-Hall theorem states that all cubic bipartite graphs … Read more
https://www.physicsforums.com/threads/voltage-between-two-different-charges.357048/
# Voltage between two different charges
1. Nov 22, 2009
### LiteHacker
Voltage = Charge / Capacitance.
This assumes that the capacitor has +Q charge on one side and -Q charge on the other side.
What if you have two different charges?
I mean in terms of static electricity, if you have a piece of metal with one charge, and another piece of metal with another charge, the capacitance between them depends on their distance from each other and their volume... But they have two different charges.
How do you find the voltage?
Thank you,
Veniamin
Last edited: Nov 22, 2009
2. Nov 22, 2009
### fluidistic
I think by using the formula $$V=-\int \vec E d \vec l$$ so you'd have to calculate the electric field of the charged capacitor.
3. Nov 22, 2009
### LiteHacker
Indeed, after spending a while on http://en.wikipedia.org/wiki/Capacitor#Parallel_plate_model, I found that E = ± charge density / some constant.
It goes on taking the integral of E * something = Voltage
But, I don't understand what E equals if you have two different charge densities.
Does anybody?
Note that it seems safe to assume that charge is proportional to charge density, so we are still trying to figure out the original problem:
What voltage do you get based on two different charges?
Last edited by a moderator: Apr 24, 2017
4. Nov 22, 2009
### fluidistic
You'd have to find out $$\vec E$$, I believe, but I'm not 100% sure. This value may depend on the type of capacitor and its shape.
I'll wait someone else to enlighten us.
Last edited by a moderator: Apr 24, 2017
http://mathhelpforum.com/pre-calculus/215142-completing-square-print.html
Completing the square
• Mar 20th 2013, 09:33 AM
Krislton
Completing the square
Hi,
Please help me understand an exam question I had last semester. I know how to find the solution if I could use the quadratic formula (with its $b^2 - 4ac$ discriminant), but I wasn't allowed to use that method. With completing the square, I would normally put the equation in unity, but it already is, so I can skip that step. Then I take half the coefficient of the x term, move the constant to the other side of the equation, put half that coefficient in a bracket, and expand the bracket out to see what I need to add or subtract from the equation, and Bob should be my uncle. But I can't find the solution! (Giggle)
Right the questions was:
(i) By completing the square, find in terms of k the roots of the equation $x^2 + 2kx -7 = 0$
Any help would be greatly appreciated!
Thanks,
Kris :)
• Mar 20th 2013, 09:41 AM
Plato
Re: Completing the square
Quote:
Originally Posted by Krislton
Right the questions was:
(i) By completing the square, find in terms of k the roots of the equation $x^2 + 2kx -7 = 0$
That can written as $(x+k)^2=k^2+7.$
• Mar 20th 2013, 10:24 AM
HallsofIvy
Re: Completing the square
Do you know how to multiply polynomials? In particular, do you know that $(x+ k)^2= x^2+ 2kx+ k^2$? If so, compare that with your expression:
$x^2+ 2kx+ k^2$ and $x^2+ 2kx- 7$. You need to get $k^2$ and get rid of $-7$. You can do that by adding $k^2$ and $7$ to both sides of the equation.
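Putting the replies together (an editorial note), the completed square yields the roots in terms of k directly:

```latex
x^2 + 2kx - 7 = 0
\;\Longleftrightarrow\; (x+k)^2 = k^2 + 7
\;\Longrightarrow\; x = -k \pm \sqrt{k^2 + 7}.
```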
|
http://icpc.njust.edu.cn/Problem/Hdu/1178/
|
# Heritage from father
Time Limit: 2000/1000 MS (Java/Others)
Memory Limit: 131070/65535 K (Java/Others)
## Description
Famous Harry Potter, who seemed to be an ordinary, poor boy, is actually a wizard. Everything changed on his tenth birthday, when a huge man called Hagrid found Harry and led him into a new world full of magic.
If you've read the story, you probably know that Harry's parents left him a lot of gold coins. Hagrid led Harry to Gringotts (the bank run by goblins), and they stepped into the vault that stored his father's fortune. Harry was astonished, because there were piles of gold coins.
The goblins' way of stacking the coins was quite special. A single coin sat on top; three coins arranged in a triangle formed the next layer down; the third layer held six coins, also arranged in a triangle; and so on. The i-th layer is a triangle with i coins on each edge, i*(i+1)/2 coins in total, so the whole heap looks like a pyramid. The goblins know the total number of layers, so it's up to you to help Harry figure out the sum of all the coins.
## Input
The input consists of several cases, each a line containing a single integer N (0<N<2^31). Input ends with a single 0.
## Sample Input
1
3
0
## Sample Output
1.00E0
1.00E1
Hint
When N=1, there is 1 gold coin.
When N=3, there are 1+3+6=10 gold coins.
JGShining
## Source
Gardon-DYGG Contest 1
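A minimal Python sketch of a solution: summing the triangular layers gives a closed form, and the scientific-notation style is inferred from the sample output (the judge's exact formatting rules are an assumption here).

```python
def coin_sum(n):
    # Layer i is a triangle holding i*(i+1)/2 coins; summing these
    # triangular numbers for i = 1..n gives the closed form
    # n*(n+1)*(n+2)/6, evaluated in exact integer arithmetic.
    return n * (n + 1) * (n + 2) // 6

def fmt(v):
    # Mimic the sample output style: 1 -> "1.00E0", 10 -> "1.00E1".
    exp = len(str(v)) - 1
    return f"{v / 10**exp:.2f}E{exp}"

# In the real judge you would read one N per line until a 0 sentinel;
# here we just reproduce the two sample cases.
for n in (1, 3):
    print(fmt(coin_sum(n)))
```

Integer arithmetic keeps the sum exact even near N = 2^31, where the total is on the order of 10^27; only the two-decimal display uses floating point.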
|
https://www.physicsforums.com/threads/phase-portraits.130846/
|
# Homework Help: Phase portraits
1. Sep 5, 2006
### Benny
Hi, I'm unsure about how to do the following question.
I am given the following system for which I first need to find the general solution.
$$\left[ {\begin{array}{*{20}c} {\mathop x\limits^ \bullet } \\ {\mathop y\limits^ \bullet } \\ \end{array}} \right] = \left[ {\begin{array}{*{20}c} 3 & { - 2} \\ 6 & { - 5} \\ \end{array}} \right]\left[ {\begin{array}{*{20}c} x \\ y \\ \end{array}} \right]$$
I found the general solution to be:
$$\left[ {\begin{array}{*{20}c} x \\ y \\ \end{array}} \right] = c_1 \left[ {\begin{array}{*{20}c} 1 \\ 1 \\ \end{array}} \right]e^t + c_2 \left[ {\begin{array}{*{20}c} 1 \\ 3 \\ \end{array}} \right]e^{ - 3t}$$
I then sketched the phase portrait which in this case is a saddle. The question asks me to use this phase portrait to sketch x(t) and y(t) on the same graph for the case of x = 1, y = 2.9 when t = 0.
I can't think of a way to do this. The phase portrait I've sketched is for the general solution. I've tried a few things with the point (x,y) = (1,2.9), including shifting the 'centre' of the saddle to that point, but none of the things I've tried have any reasoning behind them. They're just random things I've tried which haven't led me anywhere. Can someone please help me out? Thanks.
2. Sep 5, 2006
### HallsofIvy
If you have the general solution
$$\left[ {\begin{array}{*{20}c} x \\ y \\\end{array}} \right] = c_1 \left[ {\begin{array}{*{20}c} 1 \\ 1 \\\end{array}} \right]e^t + c_2 \left[ {\begin{array}{*{20}c} 1 \\ 3 \\\end{array}} \right]e^{ - 3t}$$
then surely
$$\left[ {\begin{array}{*{20}c} x(0) \\ y(0) \\\end{array}} \right] = c_1 \left[ {\begin{array}{*{20}c} 1 \\ 1 \\\end{array}} \right] + c_2 \left[ {\begin{array}{*{20}c} 1 \\ 3 \\\end{array}} \right]= \left[\begin{array}{c}{c_1+ c_2}\\{c_1+ 3c_2}\end{array}\right]= \left[\begin{array}{c}1 \\ 2.9\end{array}\right]$$
so that $c_1+ c_2= 1$ and $c_1+ 3c_2= 2.9$. Once you've found the actual solution, it should be easy to graph it.
But that isn't really "using the phase portrait". I presume you calculated that the eigenvalues of the coefficient matrix are -3 and 1, and that the lines y= 3x and y= x are spanned by the eigenvectors of each, respectively. Are you clear on exactly what the "phase portrait" is? You make it sound as if you have just drawn those two lines. Of course, those lines are part of the phase portrait, with "flow" along the line y= 3x directed inward (since it corresponds to the negative eigenvalue) and "flow" along the line y= x directed outward. But the phase diagram also consists of all hyperbolas having those lines as asymptotes, with the same "flow".
No, the graph of the solution satisfying x(0)= 1, y(0)= 2.9 is not the phase portrait "shifted". It is the particular hyperbola, out of all those having asymptotes y= 3x, y= x. that passes through (1, 2.9)
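Solving that pair of equations for the stated initial condition is quick:

```latex
c_1 + c_2 = 1, \qquad c_1 + 3c_2 = 2.9
\quad\Longrightarrow\quad 2c_2 = 1.9,\;\; c_2 = 0.95,\;\; c_1 = 0.05,
```

so the particular solution is

```latex
\begin{pmatrix} x(t) \\ y(t) \end{pmatrix}
= 0.05 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{t}
+ 0.95 \begin{pmatrix} 1 \\ 3 \end{pmatrix} e^{-3t},
```

and each component can be plotted against t directly.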
3. Sep 6, 2006
### Benny
The eigenvectors are the axes of the hyperbolas. Their directions, along with the general solution allow for a sketch of the family of solutions if the appropriate values of t are considered. I've drawn the saddle (ie. the hyperbolas).
If I locate the point I'll get one of the hyperbolas - there are an infinite number of trajectories but the sketch will only have the key hyperbolas for clarity. Having found the hyperbola which corresponds to the point (x,y) = (1, 2.9), I still only have a curve of y against x and not y(t) and x(t) separately - I just don't get how I can sketch the curves of x(t) and y(t) using only the phase portrait. Nor do I understand the relevance of "t=0" to the required sketches. Can you please offer further assistance?
|
https://stats.stackexchange.com/questions/304561/are-weights-updated-differently-in-a-regression-network-vs-a-classification-net
|
# Are weights updated differently in a regression network vs. a classification network?
Are the weights of a neural network updated differently by backpropagation in a classification network vs. a regression network? If so, how?
My concern comes from the fact that the two networks use different cost functions, so the updates must differ as well.
I am under the impression that logistic regression is often used for classification, while linear regression is used in an ordinary neural network for regression problems.
Hence the weight updates must also differ. Is this reasoning correct? If so, how does one update the weights differently in a classification setting?
• I don't know what you mean by "different". If you have two functions, $f(x)$ and $g(x)$ which are different (i.e. $f(x) \ne g(x)$) then is the process of calculating $\frac{d}{dx} f(x)$ different than calculating $\frac{d}{dx} g(x)?$ What ever you consider to be the answer to that question is exactly the answer to your question. – Bridgeburners Sep 22 '17 at 21:00
• I am under the impression that classification task usually uses cross entropy as cost function, and regression network uses the mean squared difference between target and output.. Hence would the way the weight being trained in the classification also be different, but how are the weights being updated? @Bridgeburners – Bob Burt Sep 22 '17 at 21:10
• Backpropagation simply means taking the derivative of the cost function with respect to the weights. (More formally, taking the gradient, which is really just a vector of the derivative with respect to each weight.) You already understand that regression NNs and classification NNs have different cost functions. So backpropagation is only different between the two in the sense that I described in my previous comment. Put another way: if you know calculus, and you know how to learn the weights of a regression NN, no new insight is required to learn the weights of a classification NN. – Bridgeburners Sep 22 '17 at 21:15
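To make Bridgeburners' point concrete, here is a sketch using a hypothetical single-weight "network" (not from the thread): both updates are just derivatives of their respective cost functions, and only the cost differs, not the mechanics.

```python
import math

def grad_mse(w, x, t):
    # Regression: L = (w*x - t)**2 / 2, so dL/dw = (w*x - t) * x
    return (w * x - t) * x

def grad_cross_entropy(w, x, t):
    # Classification: p = sigmoid(w*x), L = -(t*log p + (1-t)*log(1-p));
    # the chain rule collapses dL/dw to (p - t) * x
    p = 1.0 / (1.0 + math.exp(-w * x))
    return (p - t) * x

# The gradient-descent step is identical in form for both:
w, x, t, lr = 0.5, 1.0, 1.0, 0.1
w_reg = w - lr * grad_mse(w, x, t)
w_clf = w - lr * grad_cross_entropy(w, x, t)
```

The update rule w ← w − η·∂L/∂w never changes; backpropagation just propagates whatever error term the chosen cost produces (here, (output − target)·x in both of these particular pairings).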
|
https://socratic.org/questions/how-do-you-calculate-the-fourth-derivative-of-f-x-2x-4-3sin2x-2x-1-4
|
# How do you calculate the fourth derivative of f(x)=2x^4+3sin2x+(2x+1)^4?
Jul 11, 2016
$y ' ' ' ' = 432 + 48 \sin \left(2 x\right)$
#### Explanation:
Application of the chain rule makes this problem easy, though it still requires some legwork to get to the answer:
$y = 2 {x}^{4} + 3 \sin \left(2 x\right) + {\left(2 x + 1\right)}^{4}$
$y ' = 8 {x}^{3} + 6 \cos \left(2 x\right) + 8 {\left(2 x + 1\right)}^{3}$
$y ' ' = 24 {x}^{2} - 12 \sin \left(2 x\right) + 48 {\left(2 x + 1\right)}^{2}$
$y ' ' ' = 48 x - 24 \cos \left(2 x\right) + 192 \left(2 x + 1\right)$
$= 432 x - 24 \cos \left(2 x\right) + 192$
Note that the last step allowed us to substantially simplify the equation, making the final derivative much easier:
$y ' ' ' ' = 432 + 48 \sin \left(2 x\right)$
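The result can be sanity-checked numerically with a standard five-point central-difference stencil (a sketch; the step size and tolerance are arbitrary choices):

```python
import math

def f(x):
    return 2*x**4 + 3*math.sin(2*x) + (2*x + 1)**4

def fourth_derivative(func, x, h=1e-2):
    # Five-point central-difference approximation to f''''(x), O(h^2) accurate
    return (func(x - 2*h) - 4*func(x - h) + 6*func(x)
            - 4*func(x + h) + func(x + 2*h)) / h**4

# Compare against the closed form y'''' = 432 + 48*sin(2x)
exact = lambda x: 432 + 48*math.sin(2*x)
```

At a few sample points the stencil agrees with 432 + 48 sin(2x) to well within the stencil's O(h²) error.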
|
https://brilliant.org/problems/chemistry-daily-challenge-19-july-2015-2/
|
# Chemistry Daily Challenge 22-July-2015
Chemistry Level 2
What is the ratio of the coefficients of $$\ce{MnO4-}$$, $$\text{C}_2\text{O}_4^{2-}$$, and $$\ce{H+}$$ when the following reaction equation is balanced?
$\ce{MnO4-}+\text{C}_2\text{O}_4^{2-}+\ce{H+} \to \ce{Mn}^{2+}+\ce{CO2}+\ce{H2O}$
|
https://physics.stackexchange.com/questions/311606/why-does-a-body-always-rotate-about-its-center-of-mass?noredirect=1
|
# Why does a body always rotate about its center of mass? [duplicate]
I found after searching that this question has been asked before. But all the answers were not convincing.
Suppose I have a body which is free, not constrained. It always rotates about its center of mass (COM). Why is that so?
A convincing answer that I found was that in most cases the moment of inertia about the center of mass is the least, and that's why the body rotates about the center of mass.
But I ask it again, hoping the question doesn't get closed and that I get a better, more succinct answer.
I was thinking that motion about the COM is the most stable one and that rotation about other points degenerates. I don't think that's right. Is it?
• Because it happens to rotate about a point - and that point is named Centre of mass. I guess your actual question is: why is there such a point at all? – Steeven Feb 12 '17 at 16:01
• Related: physics.stackexchange.com/q/53465/2451 , physics.stackexchange.com/q/81029/2451 and links therein. – Qmechanic Feb 12 '17 at 16:01
• What were the answers which you found "unconvincing"? Why were they "unconvincing"? You need to explain, and to provide links. – sammy gerbil Feb 12 '17 at 19:42
• @Steeven : The question Why does the body rotate about some point? is the same as asking Why does the body rotate at all? – sammy gerbil Feb 12 '17 at 19:46
• If you cannot state objectively what you mean by convincing/unconvincing then how can anyone know that their answer will satisfy you? – sammy gerbil Feb 12 '17 at 19:50
You presumably already know that in the absence of external forces, the center of mass of any collection of particles moves at a constant velocity. This is true whether they are stuck together in a single body or are just a bunch of separate bodies with or without interactions between them. We now move to a frame of reference moving at that velocity. In that frame the CofM is stationary.
Now suppose that the particles are indeed stuck together to form a rigid body. We see that the body is moving so that: 1) the CofM remains fixed, 2) all the distances between the particles are fixed. (This second condition is what is meant by a *rigid* body, after all.)
A motion with these two properties, (1) and (2), is precisely what is meant by the phrase "a rotation about the CofM".
• Great explanation. So the body rotates about the COM because it has to stay rigid, isn't it? What if I had analysed the motion in a frame in which the COM was not at rest, or was itself in rotation? What would be the case then? Probably a rest frame in which the COM is rotating would give trouble in this case, isn't it? And if I say that the moment of inertia about the COM is the least (I checked it for many discrete cases) and that's why the body rotates about the COM, would that be correct? – Shashaank Feb 12 '17 at 16:38
• I'm not sure about the role of the minimal moment of inertia. The rotation is just geometry. The moments of inertia come into the dynamics: if the three principal moments differ, the angular velocity vector can be a complicated function of time. – mike stone Feb 12 '17 at 16:50
• @mikestone Ok. No doubt your answer is a great explanation and I will accept it. But could you also provide an argument for when we observe the rotation from the ground frame? – Shashaank Feb 12 '17 at 17:38
• I'm sorry, I don't see how this explains that a centre-of-mass exists. Why couldn't all particles still keep the same distance to each other (remaining rigid) if they rotated around an edge-point instead? – Steeven Feb 12 '17 at 17:46
• @Steeven, because then the COM would be rotating (accelerating) around some other axis, which violates the rule that a COM moves at constant velocity when no external forces are acting on it. – user1717828 Feb 12 '17 at 18:31
Here is one more way to look at this:
You can consider an object with any shape as a single point where all the mass of the object is concentrated. This point is called the center of mass. From Newton's second law, as no force is acting on the object, the center of mass must either move in a straight line or be stationary. If the body rotates, the only way the center of mass can obey that law is if the rotation is around the center of mass.
Imagine two stones tied together with a massless rod, and let one stone rotate around the second one, which stays fixed.
In that case there must be a force that accelerates the first stone perpendicular to its velocity and causes it to rotate around the second one. The whole setup is free, so there is no counter-force to balance it, and this setup violates Newton's laws.
If we want to rotate this stone-rod-stone body in accordance with Newton's laws, we must add an arbitrary point for it to rotate around. In this case both stones revolve around this point; a radial force is applied to each of them, and the forces have opposite directions. The forces must cancel out completely, and they cancel out only if the arbitrary point is placed exactly at the centre of mass.
The reason that a body under free rotation rotates about its center of mass is that the moment of inertia tensor at the center of mass is a minimum. When you rotate about any point that is not the center of mass, you have to apply the parallel axis theorem.
$$I' = I_\mathrm{CM} + m\vec{r}_\mathrm{CM}^2$$
The minimum of this equation is when the radius from the center of mass to the axis of rotation is zero. Therefore, the center of mass is the point of rotation that provides the least resistance to rotation.
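A quick numerical illustration of that minimum, using a few arbitrary point masses on a line (a sketch; the masses and positions are made up for the example):

```python
def moment_of_inertia(masses, positions, pivot):
    # I about an axis through `pivot`, perpendicular to the line of masses
    return sum(m * (x - pivot)**2 for m, x in zip(masses, positions))

masses = [1.0, 2.0, 3.0]
positions = [0.0, 1.0, 4.0]
total = sum(masses)
com = sum(m * x for m, x in zip(masses, positions)) / total

# Parallel-axis theorem: I(pivot) = I(com) + M * d**2, minimized at d = 0
i_com = moment_of_inertia(masses, positions, com)
for d in (-1.0, -0.5, 0.5, 1.0):
    shifted = moment_of_inertia(masses, positions, com + d)
    assert abs(shifted - (i_com + total * d**2)) < 1e-9
```

Every off-center pivot pays the extra M·d² penalty, so the center of mass is the axis of least resistance to rotation.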
In fact, the instantaneous center of rotation doesn't instantaneously shift to be at the center of mass of the object once the external forces stop acting on the object. Imagine you have a bowl and you drop a ball into it so that it's initial point of contact is close to the rim. The ball will tend towards the bottom of the bowl as that's the location with the lowest gravitational potential. However, before it gets there, it oscillates a bit before coming to rest. The bottom of the bowl is a stable point.
This is analogous to our rotation. The point about which the object rotates is initially offset from the center of mass. However, as time progresses, it tends towards the center of mass as it tries to find the path of least resistance. Rotation about the center of mass provides this least resistance.
• Um, the latter part of your answer cannot possibly be correct. You don't believe that bodies continue to accelerate for a while even after external forces stop acting on them, do you? Now, given that a body rotating around any axis that doesn't pass through its CoM must experience a non-zero net acceleration, I'm sure you can see the contradiction. – Ilmari Karonen Feb 12 '17 at 19:14
• Perhaps you are right. But my intuition tells me that the center of rotation can't shift infinitely fast as that would entail an infinite angular acceleration about the axis of rotation. The inertia tensor at the principal axes of the object acts as a stable point, and rotation approaches that stable point during free motion. To put it succinctly, I see the angular momentum vector as shifting continuously toward the stable point once the external forces cease to act on the body. – gdbb89 Feb 13 '17 at 1:18
• Easy to test: spin a large ring, such as one of those frisbee-like rings, on a stick. Clearly it's not spinning about its CM. Now release the ring (remove the stick). What does the ring start spinning around? – Carl Witthoft Feb 13 '17 at 16:39
I'm not a physicist, but I'll take a stab at it.
A simplified example of your spinning sphere that may help you with this concept would be a disk made all of one density of material. An example would be a child's top or a gyroscope that you can spin on a flat surface. Every part of the disk has a matching balancing part on the opposite side of the disk. Each balancing pair of parts of the disk have the same mass as each other, have opposite motions to each other when rotating and create opposite balancing centripetal forces that keep the disc's rotation balanced around the center of mass (which is also the disk's geometric center).
If you add more mass to the disc anywhere but at the center, the center of mass of the disc shifts away from the geometric center of the disk and toward the mass you just added. The object will now rotate around this new center of mass. This is because all the mass on the side away from the new added mass must create a balancing opposite force to the now heavier side of the disk. The mass of the disc between the geometric center of the disc and the new (shifted) center of mass shifts to becomes the opposing balancing force to the added mass.
The green dot on the right is the original center of mass and center of the disc. The blue circle is an added mass. The green dot on the left is the new center of mass. The area between the two red lines is mass on the disk that balances the added mass when rotating. Adding more (blue) mass will shift the center of mass further from original center and move the left red line (and center of mass) further toward the added mass (left). If the original disk was very massive relative to the added mass, the center of mass won't shift as far (i.e. less area between the red lines needed to balance the new mass, and less shifting of center of mass to balance the added mass).
So to conclude, every time you add to (or subtract from) the mass of a rotating object, the object changes the location of it's center of rotation so that the forces caused by rotation remain in balance. The point of rotation is the center of all the mass of that object.
I was thinking that motion about the COM is the most stable one and the rotation about other points degenerates. I don't think it's right . Is it ?
Let's go with this for a second. I'm not sure the term 'degenerates' it completely right here but I think you are on the right track. Consider a perfectly balanced wheel on an automobile. Its rotation is not free but rather fixed in its center which is also its center of mass (because it's balanced.) When it spins, there is no force on the axle.
Now consider what happens if we attach a weight to the rim of the wheel and make it unbalanced. When wheel spins, it will now apply forces to the axle. If you've ever driven in a vehicle in such a situation, you will feel this as a vibration at most speeds as the wheel continually 'jumps'. Why does this happen? It's because the axle is forcing the wheel to rotate around a point that is not its center of mass. In other words, only rotation around the center of mass is neutral; in order for an object to rotate around another point, another force is required to keep it in place. By definition, a 'free' object is not subject to any such force.
One way to see this is to take a frisbee and spin it around a finger inside the rim. It will rotate around your finger (which is not at the center of it's mass.) Your muscles will need to constantly resist the motion in order to keep it in place. If you suddenly remove your finger, it will fly off in a straight line and continue to spin around its center of mass.
Because the moment of inertia is minimum when rotating around the center of mass, any force applied to the body will "pass" through the path of minimal resistance.
Basically, it is the point for which the sum of all moments is minimal.
Water and electric current likewise flow through paths of minimal resistance. I wanted to keep this answer short on purpose, because I think the alternative answer is just "show the calculations", which is not very intuitive.
In fact, a solid body doesn't rotate about a point (COM), but around an axis on which the COM is situated.
For example in a solid, uniform sphere (every object with a non-uniform distribution of mass can continuously be transformed in a sphere with the same mass uniformly distributed), the only points that rotate about the COM lie in the equator plane perpendicular to the axis of rotation. All the other points rotate about another point on the axis of rotation.
If you let the sphere spin up from zero angular velocity to some angular velocity without imparting any linear momentum to it, linear momentum can only be conserved if all the momenta of the dm's (infinitely small masses, if we consider the sphere as a continuous mass) cancel, which is the case if the COM lies on the axis of rotation. Of course, the different possible axes of rotation all have the COM point in common.
For two separate bodies bounded by an attractive force as gravity, it is possible to say that the bodies rotate about the COM of the two bodies. Like two masses connected by a rope, but with the rope removed. In this case, the rotation is also about the axis of rotation (perpendicular to the rotation plane), but as well about the COM point.
I understand this to be a mathematical or psychological phenomenon rather than a physical one.
An object can rotate about any axis. However, in the case where the axis does not go through the center of gravity, we would typically decompose this motion into a movement of the object's center of gravity, combined with motion about the center of gravity. You can always do this. Just ask 'How did the center of gravity move?' and subtract that movement from the movement of each piece. By definition, the remaining movement is a rotation about the fixed center of gravity.
We don't have to breakdown movement in this way. It so happens that (in Newtonian mechanics) we know how to deal with momentum and angular momentum separately. Different decompositions would be possible. But they almost certainly would be more complex and less intuitive. For example, suppose that angular momentum always led to an additional 'linear force', which was directional, and depended on the relationship between the mass distribution and the axis. It would be much harder to understand what it actually consisted of. We are used to rotating things about their center of mass.
When you solve Newtonian problems of torque, you usually have to judiciously choose a point to resolve torques around. The solution would be the same, but the technique is much easier if you choose the right point, for which as many forces as possible cancel out. 'Center of gravity' is just the standard heuristic to the general case of this problem.
As you know from riding a bicycle, it is possible to balance at a very slow speed without falling if you can keep the weight balanced equally on both sides, as with a seesaw. It turns out that you can always find a way to balance a rigid body, regardless of how it touches the surface underneath, by choosing the support point so that the mass is evenly distributed around it.
You can see a video on how this can be done here: https://www.youtube.com/watch?v=OGRUf1PLJdY
If you now recognize that this is true regardless of how the rigid body is orientated in the first place, you just need to consider if all these "divide the mass in equal parts"-division lines have anything in common or not. It turns out they all go through a single common point (this can be mathematically proven) which you probably have already guessed has named the center of mass.
It can also be mathematically proven that a rigid body behaves the same as a single point with the full mass of the rigid body. This makes newtonian calculations (like the movement of the Earth around the Sun) easier.
The couple around the center-of-mass axis must be equally distributed so that the body can rotate. For a rotation, the axis of rotation and the center of mass should lie on the same line; otherwise the motion becomes a revolution around the axis of rotation rather than a rotation.
The other reason is that only such a rotation can be sustained.
• Besides the last line, I couldn't get what you are saying! – Shashaank Feb 12 '17 at 16:17
|
https://www.numerade.com/questions/evaluate-the-integral-displaystyle-int-frac11-2ex-e-x-dx/
|
# Evaluate the integral. $\displaystyle \int \frac{1}{1 + 2e^x - e^{-x}}\ dx$
## $\frac{1}{3} \ln \left|2 e^{x}-1\right|-\frac{1}{3} \ln \left(e^{x}+1\right)+C$
#### Topics
Integration Techniques
### Video Transcript
Let's use a u-substitution here. Take u = e^x, so that du = e^x dx, which means dx = du/u. After substituting, the integral becomes the integral of 1/(1 + 2u - 1/u) times du/u. Multiplying that extra u into the denominator gives du/(2u^2 + u - 1), and the denominator factors as (2u - 1)(u + 1). Now apply partial fractions: 1/((2u - 1)(u + 1)) = A/(2u - 1) + B/(u + 1). Clearing denominators and solving gives A = 2/3 and B = -1/3. Integrating term by term (the first piece picks up a factor of 1/2 from the inner substitution w = 2u - 1), we get (2/3)(1/2) ln|2u - 1| - (1/3) ln|u + 1| + C, which simplifies to (1/3) ln|2u - 1| - (1/3) ln|u + 1| + C. Finally, back-substitute u = e^x to obtain (1/3) ln|2e^x - 1| - (1/3) ln(e^x + 1) + C. No absolute value is necessary on e^x + 1, since it is always positive. That's the final answer.
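As a quick numerical sanity check (my addition, not part of the transcript), we can differentiate the antiderivative by central differences and compare against the integrand:

```python
import math

def f(x):
    # The integrand 1/(1 + 2e^x - e^(-x))
    return 1.0 / (1.0 + 2.0 * math.exp(x) - math.exp(-x))

def F(x):
    # The antiderivative found above: (1/3)ln|2e^x - 1| - (1/3)ln(e^x + 1)
    return (math.log(abs(2.0 * math.exp(x) - 1.0))
            - math.log(math.exp(x) + 1.0)) / 3.0

# F'(x) should match f(x); check at a few points, staying away from
# x = ln(1/2), where the integrand blows up.
h = 1e-6
for x in (-1.5, 0.0, 0.7, 2.0):
    assert abs((F(x + h) - F(x - h)) / (2.0 * h) - f(x)) < 1e-5
```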
https://jira.lsstcorp.org/browse/DM-13519?focusedCommentId=131757&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
# Implement per-object Galactic Extinction correction in color analysis QA plots
#### Details
• Type: Story
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
None
• Story Points:
5
• Sprint:
DRP S18-3, DRP S18-4, DRP S18-5, DRP S18-6, DRP F18-1, DRP F18-2
• Team:
Data Release Production
#### Description
Implement a per-object Galactic Extinction correction for use with the color-analysis QA plots to replace the per-field placeholder included in DM-13154. It looks like there is code in sims_photUtils (and dependencies) to do this, so this will be an attempt to get that working with the analysis scripts.
Note that this requires the A_filter/E(B-V) extinction coefficients for the HSC filters (awaiting a response from the HSC team, the placeholder noted above is just using SDSS filter values).
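For orientation only (this is not code from the ticket, and the coefficients below are made-up placeholders, not the awaited HSC values), a per-object correction amounts to subtracting A_filter = R_filter * E(B-V) from each observed magnitude:

```python
# Hypothetical A_filter/E(B-V) coefficients -- placeholders only,
# NOT the HSC values this ticket is awaiting.
R_FILTER = {"g": 3.3, "r": 2.3, "i": 1.7}

def deredden(mag, band, ebv):
    """Subtract the Galactic extinction A_band = R_band * E(B-V)."""
    return mag - R_FILTER[band] * ebv

# A g-band object with E(B-V) = 0.05 is corrected by 0.165 mag:
assert abs(deredden(20.0, "g", 0.05) - 19.835) < 1e-9
```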
#### Activity
Lauren MacArthur added a comment - - edited
The per-object Galactic Extinction correction currently requires lsst.sims.catUtils to be setup as it uses its EBVbase class (in lsst.sims.catUtils.dust.EBV) to obtain the color excess, E(B-V), at a given RA, DEC. The call is thus put here in a try/except to fall back to the per-field correction (currently only hard-coded for the five tracts of HSC RC2 + RC datasets) if the sims repo is not setup until we can access the EBVbase class from an lsst_distrib installation (to be discussed with the sims team).
I made the sims code accessible to an lsst_distrib installation on lsst-dev with the following:
$ mkdir -p ~/LSST/sims/ups_db
$ export EUPS_PATH=~/LSST/sims:$EUPS_PATH
$ eups distrib install sims_catUtils -t sims
[where ~ evaluates to /home/lauren/ here]
The installed sims package is version 2.8.0.
And the package is setup as follows:
$ setup -k sims_catUtils -t sims
Hsin-Fang Chiang has confirmed that she can set this repo up herself and did a test run on the PDR1 data reprocessing (results can currently, but temporarily, be seen at https://lsst-web.ncsa.illinois.edu/~hchiang2/lauren_DM-13519/ – this will be redone once this ticket is closed). Thus, anyone on lsst-dev could set up the repo in my /home/lauren/LSST/sims directory to make use of it, having set up an lsst_distrib installation.
As a check on the extinction corrections applied, I confirmed that the mean value matches that used for the "per-field" correction (currently hardwired for the 5 RC2 + old RC datasets), and I created plots of the galactic extinction applied to each object for all bands, e.g. the following example of the wPerp Principal Color fit. Note that there are "fit" and "wired" results displayed. The latter is the current "canonical" value that is hardwired, and the former is a live fit to the current dataset. The wired coefficients came from the tract shown as a "happy medium" between the three RC2 tracts. These may be adjusted again based on the PDR1 reprocessing (after running the full run through colorAnalysis with the per-object GE correction applied, see DM-14590).
The full set of results using the per-object GE correction can currently be perused here: https://lsst-web.ncsa.illinois.edu/~lauren/lauren/DM-13519/w_2018_22/ The fit and wired Principal Color axes will be added to the figures in DM-14826.
Lauren MacArthur added a comment -
Paul, would you be able to give this a look? I'm pinging you first as I want you to at least look at the addition of the config files in obs_subaru to check that my implementation is suitable. Let me know if you'd prefer to pass off the pipe_analysis part of the review.
Paul Price added a comment -
Changes in obs_subaru look fine. If you don't mind, I'll duck reviewing pipe_analysis as I'm digging out from the vacation backlog and have multiple other reviews pending.
Lauren MacArthur added a comment -
Sophie Reed, would you mind having a look at the pipe_analysis changes here? Paul Price has already approved the obs_subaru changes.
Lauren MacArthur added a comment -
I've addressed all your comments to date and pushed the updated branch. A new run of colorAnalysis.py on the RC2 tracts is going now. I moved the previous one to https://lsst-web.ncsa.illinois.edu/~lauren/lauren/DM-13519/w_2018_22old2 so this run will be in https://lsst-web.ncsa.illinois.edu/~lauren/lauren/DM-13519/w_2018_22 when it finishes.
Let me know if you see anything else that needs fixing.
Sophie Reed added a comment -
Looks good to me
Lauren MacArthur added a comment -
Thanks both. Merged to master.
#### People
Assignee:
Lauren MacArthur
Reporter:
Lauren MacArthur
Reviewers:
Paul Price, Sophie Reed
Watchers:
John Swinbank, Lauren MacArthur, Paul Price, Sophie Reed
https://tex.stackexchange.com/questions/277444/no-background-canvas-for-plain-frame
# No background canvas for plain frame
I created a background canvas for my presentation. I don't want the image if the frame style is plain. How can I extend my style file so that no background image appears if \begin{frame}[plain] is used?
\setbeamertemplate{background canvas}
{\ifnum\thepage=1\relax%
{%
\includegraphics{Background_page1}
}
\else%
\includegraphics{Background}
\fi%
}
beamer has the boolean beamer@plainframe to decide whether the plain option is activated or not, so you can use it in your conditional definition of the background canvas template:
\documentclass{beamer}
\usetheme{CambridgeUS}
\makeatletter
\setbeamertemplate{background canvas}{%
\ifbeamer@plainframe%
\else
\ifnum\thepage=1\relax%
\includegraphics{example-image-a}%
\else
\includegraphics{example-image-b}%
\fi
\fi
}
\makeatother
\begin{document}
\begin{frame}
\frametitle{A test regular frame}
\end{frame}
\begin{frame}[plain]
\frametitle{A test frame with the \texttt{plain} option}
\end{frame}
\begin{frame}
\frametitle{Another test regular frame}
\end{frame}
\end{document}
https://codegolf.stackexchange.com/questions/230100/tips-for-code-golfing-in-desmos
# Tips for Code Golfing in Desmos
Desmos is mainly used as an online graphing calculator, but its graphical and mathematical functionalities can also be applied in certain coding challenges.
I know not that many people use Desmos on this site, but for the people that do, what are some tips for golfing in Desmos?
As usual, please keep the tips somewhat specific to Desmos, and one tip per answer.
(This is mainly a LaTeX trick that can save some bytes. This tip likely applies to other languages that use LaTeX.)
When dealing with exponents(e.g. x^{2}) or other operators that require the usage of brackets(e.g. \sqrt{5}), you can take out the brackets that are automatically there if there is only one digit/character in the exponent. So, x^{2} can be written as x^2. Likewise, \sqrt{5} can be shortened to \sqrt5.
Note that you cannot take out these brackets if there is more than one digit. For example, x^{10} is not the same as x^10.
This exponent trick even works in summations, which use the same brace notation for the stopping value. Instead of writing:
\sum_{k=0}^{n}k^{3}
...it can be written as:
\sum_{k=0}^nk^3
Because of this trick, it is better if you can find a way to maximize the number of one-digit exponents in your code, to eliminate the brackets.
For example, it is actually more byte-efficient to write a^xa^y (6 bytes) instead of a^{x+y} (7 bytes).
• Extending the exponent trick a little further: you can save two bytes with xx^9 instead of x^{10} and one byte with 1/x^n instead of x^{-n}. Jun 21 '21 at 6:21
• @fireflame241 Could x^2 then be further shortened to just xx? Not sure, I've never used this language, but it looks like it might work based on your comment... Jun 21 '21 at 15:35
• @DarrelHoffman Yes, that works. Jun 21 '21 at 17:32
# You can use /
Pasting in x/5 or 8/3 will give the same result as typing it, just not properly formatted - so in some cases, you don't need the bulky \frac{x}{y}.
• It is actually not even necessary to use \frac at all in your code. If we consider fractions in the form (random stuff)/(more random stuff) vs. \frac{random stuff}{more random stuff}, we can see that the / way will always save at least 4 bytes over the \frac way, and even more in some cases, where you can remove the parentheses in the / way. Jun 20 '21 at 20:16
### Edit #2:
(Referring to the example in the first edit)
Apparently you can add a \ at the beginning of the function to make it work (see this comment), which I did not know either. I guess that invalidates this tip, again.
### Edit:
In fireflame241's comment, he brought to my attention that it is not required to have the \left and \right accompanying the bracket pairs. This is true for most cases. But after some testing, there are some cases where taking out the \left and \right does break the code. Specifically, if you are using any function(e.g. total or max) in addition to these brackets, the function will not work(see example below).
Example:
Suppose you want to compare the corresponding elements of two lists, a and b, and see how many of those corresponding elements are the same. That is, something like a=[1,2,3,4] and b=[2,2,3,5] will output 2(the second and third elements of each list are the same).
Here's what someone might do, after learning that you can take out the \left and \right:
total(\{a=b:1,0\})
In theory, this should work perfectly fine, but in reality, Desmos gives an error and it doesn't work. I'm pretty sure it's because it considers l to be a function, and t, o, and a to be variables in this situation. l, o, and t are not defined, so it gives an error saying so.
In cases like this, it would be better to do:
total(1-sign(a-b)^2)
as suggested by the tip below.
First tip to start it off.
When doing comparisons in your code, most of the times, it is better to try not to use brackets { } in your code, because they always require a \left and a \right to go with them, which increases byte count unnecessarily. Instead, we can utilize the sign function.
Consider a naive implementation that returns 0 if a=b, and returns 1 otherwise:
(22 bytes)
\left\{a=b:0,1\right\}
Instead of doing this, we can save 9 bytes by doing a little math instead:
(11 bytes)
sign(a-b)^2
This works because sign(x) returns -1 if x is negative, 0 if x=0, and 1 otherwise. a-b is 0 only when a=b, so sign(a-b) would be 0 only when a=b. If a does not equal b, it returns either -1 or 1. The ^2 is just to convert the -1 to a 1.
Even if we wanted to return 1 if a=b and 0 otherwise, we can still save 9 bytes by doing 1-sign(a-b)^2 instead of \left\{a=b:1,0\right\}.
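The identity behind this tip is easy to verify outside Desmos; here is a small Python sketch (my addition) with a hand-written sign function:

```python
def sign(x):
    # Mirrors Desmos's sign(): -1 for negative, 0 for zero, 1 for positive
    return (x > 0) - (x < 0)

# sign(a - b)^2 is 0 exactly when a == b, and 1 otherwise
for a in range(-3, 4):
    for b in range(-3, 4):
        assert sign(a - b) ** 2 == (0 if a == b else 1)
```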
• The \left{ and \right} can be replaced with \{ and \} respectively, but it makes the expression unrenderable, see desmos.com/calculator/yezfbrlux5 with \{x<0:-x,x^2\}. (You can see the LaTeX with console.log(Calc.getState().expressions.list[0].latex) in the browser console). Jun 21 '21 at 6:12
• @fireflame241 Oh, I wasn't aware of that. I was wondering why something like \{\} worked, even though it doesn't render anything. In that case, this tip isn't really helpful. Jun 21 '21 at 6:52
• @fireflame241 After some testing, I actually found some cases where taking out the \left and \right does break the code. See the edit. Jun 21 '21 at 20:05
• This could be fixed with \operatorname{total}(\{a=b:1,0\}) or the shorter \total(\{a=b:1,0\}) Jun 21 '21 at 20:07
• @fireflame241 how are you supposed to paste that in? Jun 21 '21 at 20:11
# Logical negation
If you want to swap numbers that represent falsy and truthy values, you can use 0^x, where x is the value that needs to be logically negated. To make it work for negative numbers, you can use the absolute value (0^{abs(x)}), suggested by Aiden Chow.
• \left|x\right| --> abs(x), making it shorter than your 2nd version. Jun 20 '21 at 20:29
• @AidenChow Whoops, missed that
– user
Jun 20 '21 at 20:34
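The same arithmetic can be checked outside Desmos; a small Python sketch (my addition; note that 0**0 evaluates to 1 in Python as well):

```python
def negate(x):
    # Desmos-style logical negation via 0^|x|: 0 -> 1, any nonzero -> 0
    return 0 ** abs(x)

assert negate(0) == 1
assert negate(5) == 0
assert negate(-3) == 0
```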
## Parentheses are not always required for trig functions
For example, tan35.6x=0 is valid and treated as tan(35.6x)=0.
Similarly, tan^23x is treated as tan^{2}(3x).
• On the topic of trig functions to powers, I believe you can only do trig functions ^2 ex: sin^2(x) works but sin^3(x) doesnt, so you would have to do sin(x)^3 instead Jun 21 '21 at 8:05
• @Underslash sin^{-1}(x) also works and calculates arcsin(x). Jun 21 '21 at 20:21
• @Aiden4 Yep, that one works as well, but besides those nothing else works like that Jun 21 '21 at 21:22
For functions that use \operatorname{(function name)}, you can simply take out the entire \operatorname part and use the (function name). For example, instead of \operatorname{total}, you can simply write total.
You can use function parameters to assign variable values outside of that function, even though it throws an error. Using this, we can save bytes when we repeat expressions inside of functions.
For example, look at this code:
(45 bytes)
f(a)=sort(a)[1]+sort(a)[length(a)]-sort(a)[2]
There is a lot of sort(a)'s in that code, maybe we can shorten it?
Here is what many people might try to do to shorten this:
(49 bytes)
b(a)=sort(a)
f(a)=b(a)[1]+b(a)[length(a)]-b(a)[2]
It seems like making a function just for sort(a) should have helped, but in reality this is actually 4 bytes longer than the initial code.
What can we do then? Well, the following code will do it:
(37 bytes)
b=sort(a)
f(a)=b[1]+b[length(a)]-b[2]
b is using a function parameter a in declaring it, and Desmos is throwing an error because it seemingly doesn't understand this, but at the end, it still works somehow, so it is valid.
### Change \to to ->
Actions can be represented by \to or →, both of which happen to be exactly 3 bytes, but if you use →, the next function will no longer require a leading \
example:
x\to\join(x,y) => x→join(x,y) shaves off a byte
plus it looks nicer
EDIT: Similarly, \prod can be shaved to ∏, saving a byte or two, and \sum to ∑. Important to note that these are not capital pi Π or capital sigma Σ, but their own distinct characters.
• Actually, simply pasting in -> for → also works. So for example, a->join(a,b) will work, even though the -> isn't converted to → when pasting it in: example. And furthermore, -> and → essentially act the same way when pasting it in (no backslash problems or anything like that), so AFAIK, there is no reason to use → over ->, as the latter will always save one byte over the former. Dec 22 '21 at 3:29
When constructing a list with ..., commas can usually be eliminated. For example, one might write [0,...,9] if they did not know about this, but you can actually save 2 bytes by writing [0...9] instead.
In cases where you want to specify the second element(to set your own common difference), the comma is only required between the first and second element. For example, instead of [0,2,...,50], you can write [0,2...50] instead.
Basically, you can replace ,..., with ... to save 2 bytes.
### List comprehension tips
If you are unaware, a new feature was released in Desmos a few months ago: list comprehensions
They are similar to Python list comprehensions, where you can essentially use loops to construct lists. This functionality now allows us to be able to emulate nested for loops in Desmos, which was previously much harder to do.
List comprehensions follow the below form:
[(expression in terms of var1, ... ,varN) for var1 = (list1), var2 = (list2), ... , varN = (listN)]
This will construct a list by looping through each var1, var2, ... , varN in a nested fashion.
There are some golfs you can do to save bytes in list comprehensions.
Let's take a simple list comprehension below:
\left[\left(a,b\right)\operatorname{for}a=\left[1...10\right],b=\left[1...10\right]\right]
Like any other function, you can simply take out the \operatorname from the for and it will still work. So all in all, you have something like this:
[(a,b)fora=[1...10],b=[1...10]]
Even though the for and a are together, Desmos will still be able to distinguish between them.
A quirk with list comprehensions (and nested for loops in general) is that you will actually get different lists based on the order of each list. Here's an example to illustrate my point (obviously not golfed completely for readability):
[a+b for a = [1,2,3], b = [2,4,6]] --> [3,4,5,5,6,7,7,8,9]
[a+b for a = [2,4,6], b = [1,2,3]] --> [3,5,7,4,6,8,5,7,9]
(Graph)
Generally, a list comprehension is generated following the pseudocode below (using the general list comprehension form that I mentioned earlier):
SET result to empty list
FOR each varN in (listN)
.
.
.
FOR each var2 in (list2)
FOR each var1 in (list1)
ADD (expression in terms of var1, ... , varN) to the end of result
END FOR
END FOR
.
.
.
END FOR
PRINT result
A certain ordering of the lists can potentially save a few bytes over another if a code golf challenge requires the list output to be ordered in a certain way.
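The generation order in the pseudocode above corresponds to Python comprehensions with the last Desmos list as the outermost loop (my illustration, reproducing the earlier a+b example):

```python
# Desmos: [a+b for a=[1,2,3], b=[2,4,6]] -- the FIRST list varies fastest,
# so in Python the later list becomes the outer loop.
result = [a + b for b in [2, 4, 6] for a in [1, 2, 3]]
assert result == [3, 4, 5, 5, 6, 7, 7, 8, 9]

# Swapping the two lists gives the same values in a different order:
swapped = [a + b for b in [1, 2, 3] for a in [2, 4, 6]]
assert swapped == [3, 5, 7, 4, 6, 8, 5, 7, 9]
```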
https://discourse.julialang.org/t/very-large-performance-difference-between-turing-and-stan-gaussian-process/93663
# Very large performance difference between Turing and Stan [Gaussian Process]
A student I’m working with is working on a Gaussian Process model in Turing, but it was very slow. Out of frustration, they reimplemented in stan and it’s much faster. For context, it fits (1000 samples from 1 chain) in about 200s in stan but takes about 4000s in Turing (20x difference).
We have tried many things, including messing with the sampler (in Turing), playing with the autodiff backend, changing the prior, etc but haven’t figured anything out. In all these cases, the speed difference seems fairly similar. At this point, we would appreciate any tips on how to help the Julia model catch up to stan or else to understand why Turing is struggling here (to better choose tools in the future).
Thanks!
Our model looks like:
y(s, t) \sim \mathcal{N}(\mu(s, t), \sigma)
where
\mu(s, t) = \mu_0 + \beta_\mu(s) * x(t)
with a Gaussian process over \beta(s).
Although this is kind of a silly model, it’s an important building block for future work.
Our Turing model looks like
@model function gp(x, y, D; cov_fn::Function = cov_exp_quad)
# parse inputs
N = size(y, 2)
# priors
μ0 ~ Normal(5, 3) #
β_m ~ Normal(0, 1)
β_ρ ~ InverseGamma(5, 5) # kernel length parameter -- see Stan manual
β_α ~ Truncated(Normal(0, 1), 0, Inf) # kernel variance -- see Stan manual
# compute the kernel
β_K = cov_fn(D, β_α, β_ρ)
# often one adds noise to the diagonals of the covariance fn to account for noise
# in this case this does not make sense -- the true value of β(s) is smooth
# reject any sample where the covariance kernel is not positive definite
if !isposdef(β_K)
return nothing
end
# spatial model for the coefficient
μ_β ~ MvNormal(β_m * ones(N), β_K)
# the mean
μ = μ0 .+ hcat(x) .* transpose(hcat(μ_β))
logs ~ Normal(0, 1) # log standard deviation
s = exp(logs) # standard deviation
# compute the distributions
dists = Normal.(μ, s)
y .~ dists
end
where
function cov_exp_quad(D, α, ρ)
sq_dist = D .^ 2
return α^2 * exp.(-sq_dist ./ (2 * ρ^2))
end
Our stan model looks like
data {
int<lower=1> N_loc;
int<lower=1> N_obs;
vector[2] X[N_loc]; //locations, longitude & latitude
vector[N_loc] y[N_obs]; //observations of rainfall, indexed by [time, location]
vector[N_obs] x; // covariates
}
parameters{
real<lower=0> mu0; // intercept for mean
real beta_m; // coefficient for mean
real beta_rho; // kernel length parameter
real<lower=0> beta_alpha; // kernel variance parameter
real logs; // log of standard deviation
vector[N_loc] mu_beta;// coefficient
}
model{
matrix[N_loc, N_loc] beta_xi;
matrix[N_loc, N_loc] beta_K = cov_exp_quad(X, beta_alpha, beta_rho); // covariance function
vector[N_loc] mu[N_obs]; // mean value for each observation
beta_xi = cholesky_decompose(beta_K);
mu0 ~ normal(5, 3);
beta_m ~ std_normal();
beta_rho ~ inv_gamma(5, 5);
beta_alpha ~ std_normal();
logs ~ std_normal();
mu_beta ~ multi_normal_cholesky(rep_vector(beta_m, N_loc), beta_xi);
for (i in 1:N_obs){
for (j in 1:N_loc){
mu[i, j] = mu0 + x[i] * mu_beta[j];
}
}
for (i in 1:N_obs){
for (j in 1:N_loc){
y[i, j] ~ normal(mu[i, j], exp(logs));
}
}
}
have you checked CPU usage is they’re both using single thread or same amount of CPUs if not single?
Thanks for the suggestion. We’re using 1 CPU for both so I don’t think that’s it.
I remember having a huge slowdown with truncated normals…
See if anything discussed there helps?
Yes, that’s one improvement that can be made here. Truncated should not be used directly. Instead, one should use truncated. Secondly, the truncated(Normal(0, 1), 0, Inf) is deprecated syntax, since it can introduce numerical instability. Instead, use truncated(Normal(0, 1); lower=0).
A few other improvements are:
• move all computations that can be made one-time out of the main body and into the arguments. e.g. you can declare X=hcat(x) as a keyword argument and use that in the model. D is elementwise squared in your model each evaluation even though it never changes, which is wasted effort.
• Don’t use isposdef(β_K) if you can avoid it. Internally, that cholesky decomposes your matrix, which slows down your gradients. Instead, if you can check the arguments, that would be much faster.
• Use s ~ LogNormal(0, 1).
• Use y ~ MvNormal(μ, Diagonal(s.^2))
Thanks, these are very helpful tips. I’ll review them with her and see what the speed up looks like
In theory our matrix should always be positive definite. However, in practice we sometimes got some numerical issues, so we introduced this hack. Any other suggestions for handling this? Maybe putting stronger priors on the kernel length scale to keep it from getting too small would help…
If you know certain low values are unreasonable, incorporating that in the prior is a good idea. Sometimes it makes sense to add something like 1e-10 * I to the matrix just to smooth over some numerical issues. It’s a hack but one that works quite well for many models.
Thanks @sethaxen, better constraints on the parameters has helped with this issue so we’ve been able to get rid of the isposdef line without issues.
Things are still quite slow, as @dlakelan notes. It doesn't get dramatically better if we turn that truncated Normal into something else. We've looked through the performance tips documentation, but we'll scan the discussion referenced and see if we can find anything useful.
Thanks
One great thing about using Julia + Turing is that you can just run the regular old Julia profiler on it. Try that out and post a flame graph. I like the StatProfilerHTML.jl library for this.
https://academy.vertabelo.com/course/python-basics-part-2/working-with-lists/list-basics/joining-two-lists
8. Joining two lists
## Instruction
Well done! We can also use the concatenation operator (+) to combine two lists.
# define a list
companies = ['Royal Dutch Shell', 'BP', 'Total', 'Volkswagen', 'Glencore Xstrata', 'Gazprom']
# define another list
new_companies = ['E.ON', 'Eni', 'ING']
# add new_companies to the old list
companies = companies + new_companies
As you can see, we created a new list named new_companies and then added its contents to the end of the companies list. At this point, companies has the following items:
['Royal Dutch Shell', 'BP', 'Total', 'Volkswagen', 'Glencore Xstrata', 'Gazprom', 'E.ON', 'Eni', 'ING']
## Exercise
Add three consecutive days using a single instruction. The water levels on days 8 through 10 were 772 cm, 770 cm, and 745 cm, respectively. Then, print the whole list.
### Stuck? Here's a hint!
You can create a new list with the following three elements: [772, 770, 745]. Add both lists using +.
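One possible solution, sketched here with a hypothetical `water_levels` list (the variable name and the day 1–7 readings come from the course's earlier exercises, so they may differ):

```python
# hypothetical readings for days 1-7; the course defines the real list
water_levels = [735, 740, 752, 760, 768, 774, 775]

# add days 8-10 in a single instruction using +
water_levels = water_levels + [772, 770, 745]

print(water_levels)
```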
https://www.tutorialspoint.com/java-program-to-find-the-area-of-a-parallelogram
# Java Program to Find the Area of a Parallelogram
In this article, we will understand how to find the area of a parallelogram. A parallelogram has two pairs of parallel, equal opposite sides. It has a base and a height, which is the perpendicular distance between the base and its opposite parallel side.
The area of a parallelogram is calculated using the formula −
base * height
i.e.
b x h
Below is a demonstration of the same −
Input
Suppose our input is −
Base : 6
Height : 8
Output
The desired output would be −
The area of the parallelogram is : 48
## Algorithm
Step 1 - START
Step 2 - Declare three integer variables, namely base, height and my_area
Step 3 - Read the required values from the user/ define the values
Step 4 - Calculate the area as base * height and store the result
Step 5 - Display the result
Step 6 - Stop
## Example 1
Here, the input is being entered by the user based on a prompt. You can try this example live in our coding ground tool.
import java.util.Scanner;
public class AreaOfParallelogram{
public static void main(String args[]){
int base, height, my_area;
System.out.println("Required packages have been imported");
Scanner my_scanner = new Scanner(System.in);
System.out.println("A reader object has been defined ");
System.out.print("Enter the value of parallelogram base : ");
base = my_scanner.nextInt();
System.out.print("Enter the value of parallelogram height : ");
height = my_scanner.nextInt();
my_area=base*height;
System.out.println("The area of the parallelogram is : "+my_area);
}
}
## Output
Required packages have been imported
A reader object has been defined
Enter the value of parallelogram base : 6
Enter the value of parallelogram height : 8
The area of the parallelogram is : 48
## Example 2
Here, the integer has been previously defined, and its value is accessed and displayed on the console.
public class AreaOfParallelogram{
public static void main(String args[]){
int base, height;
base=6;
height=8;
System.out.println("The base and height values are defined as " +base +" and " +height);
int my_area=base*height;
System.out.println("The area of the parallelogram is : "+my_area);
}
}
## Output
The base and height values are defined as 6 and 8
The area of the parallelogram is : 48
https://chenfuture.wordpress.com/category/latex/page/2/
## Archive for the 'latex' Category
### DO’s and DON’Ts when typesetting a document
The following rules apply when using LaTeX2e…
• In display math mode, use \[...\] instead of $$...$$.
$$...$$ is simply obsolete.
• Use \textbf, \textit instead of \bf, \it.
\bf, \it are obsolete font selection commands. Under New Font Selection Scheme (NFSS), they should be replaced with \textbf, \textit. One immediate difference is that {\it\bf blabla} will not generate the composite effect of italic shape and bold series, while \textit{\textbf{blabla}} indeed produces bold italic fonts.
• Put a tilde before references or citations, e.g., Jie~\cite{habit06}.
This prevents LaTeX from putting a line break between the word and the citation. Similar cases are: length~$l$, function~$f(x)$, etc.
• Be cautious when changing the page margin and page layout.
Studies show that articles with approximately 66 characters per line are the most readable. Reading would become difficult if putting more and more texts into each line. That’s why you see articles are typeset in multiple columns in a newspaper.
• Differentiate between text comma and math comma, e.g., type “for $x=a$, $b$, or~$c$” instead of “for $x=a,b$, or $c$”.
A line will not break at math comma. That is why sometimes you see an ugly math expression exceeding the right margin of your texts. Also there will not be a white space after the math comma. Hence, in $x=a,b$, the “b” character is so close to the comma, which is unpleasant.
• Use \emph more often than \textit when you mean to emphasize a term or a phase.
You can easily change the layout of the emphasized content (such as to bold fonts instead of italic fonts) by redefining the \emph command. However, if you use \textit, you will meet a lot of hassles when you want to change the layout.
• Put a backslash after a dot if the dot does not mean full stop.
Example: “Please see p.\ 381 for an illustration.” The backslash after “p.” reminds LaTeX that the dot does not mean the end of a sentence, so LaTeX will put a correct white space between the dot and the number 381. Usually the width of the white space is shorter than that between a full stop and the beginning character of the next sentence. More examples are “Mr.\ Xing”, “e.g.\”, and “i.e.\”.
(Corrected: Well, none of the above examples are correct… They should be: “p.~381”, “Mr.~Xing”, “e.g.,”, “i.e.,”. But I am sure that the principle itself is ok. Anyone has a good example?)
• Note the difference between hyphen, en dash, em dash, and a minus sign.
Hyphen (-) connects the two parts of a compound word, such as in “anti-virus”. En dash (--) connects two numbers that define a range, such as in “pages 1--10”. Em dash (---) is a punctuation dash. And remember that when you write a negative number, embrace it by the dollar signs, e.g., $-40$.
• Write ellipsis using \ldots instead of three dots.
The \ldots command correctly typesets the spacing between consecutive dots.
### Using PSTricks to draw the Olympics Rings
The other day I ran into a person who was asking people to draw the five Olympic rings by writing LaTeX source code only. I said, well, I could do that, using pstricks, which was my favorite drawing tool in latex.
After inspecting the Olympics logo for a while, I had a rough idea how to draw it. The key was to draw the five rings in the order: blue, yellow, black, green, and red. For each ring, draw a circle at that color with a certain line width, and then surround the circle with two white slim circles. This gave the salient pattern of the five rings and at the intersection of two rings some whites. The rest of the job was to make the interleaving effect of the five rings instead of seeing the yellow ring placed above the blue ring and so on. The trick was, after drawing the blue and the yellow rings, draw a blue arc at the place where the blue should be on top of the yellow at one of the intersection of the two rings. For the rest of the rings, do the same trick. Drawing the arcs was almost the same as drawing the rings (a colored circle squeezed by two slim white circles), except that an arc was part of a circle from some degree to some degree.
Okay, enough explanation. Here are the pstricks codes.
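A minimal sketch of the idea (my own simplified version: ring spacing, radii, and line widths are arbitrary choices, and the interleaving arcs are omitted):

```latex
\begin{pspicture}(-3.6,-2.4)(3.6,1.4)
  % Each ring: a thick coloured circle squeezed by two slim white circles,
  % drawn in the order blue, yellow, black, green, red.
  \newcommand{\ring}[3]{% #1 = colour, (#2,#3) = centre
    \pscircle[linewidth=5pt,linecolor=#1](#2,#3){1}%
    \pscircle[linewidth=1pt,linecolor=white](#2,#3){1.1}%
    \pscircle[linewidth=1pt,linecolor=white](#2,#3){0.9}}
  \ring{blue}{-2.2}{0}
  \ring{yellow}{-1.1}{-1}
  \ring{black}{0}{0}
  \ring{green}{1.1}{-1}
  \ring{red}{2.2}{0}
  % The interleaving effect would additionally need coloured arcs
  % (\psarc) redrawn over the intersections, as described above.
\end{pspicture}
```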
This governs how a name is formatted. ff stands for first name, vv for von part, ll for last name, and jj for suffix, such as Sr. and Jr. This format says that the first name is followed immediately (separated by only a space but not a line-break) by the von part, which in turn is immediately followed by last name. After the last name comes a comma and a space, then the suffix. As another example, see the acm style:
"{vv~}{ll}{, jj}{, f.}" format.name$
This format indicates that the von part is presented in the front, immediately followed by last name, then a comma, a space and the suffix. First name goes in the very last, followed by a period. Note that ff means to display the first name fully, while f means that only the initial letter of the first name is displayed.
You will notice that in the acm style, all the letters of a name, except the initial letter, are typeset in smallcaps. This is because in the FUNCTION format.authors, there is a line of code
author format.names scapify
where scapify is another function that small-capifies the lower case letters.
Another example. Read the FUNCTION article. From the codes, you probably get a grasp of how different fields of a reference are listed if the reference is of article type. First are the authors, then comes the article title, which is followed by journal information. The format.journal.vol.num.date is another FUNCTION doing more typesetting. But it only cares about volume, number and date. The pages information is only taken care of in the FUNCTION format.pages.
Well, I have to stop here. If I explained all the FUNCTIONs, I probably needed to write a hundred-page book. As a hacker, I am sure that you would like to read the codes in existing style files yourself, and you will know a lot more about how to format in the end.
### Further Reading
Patrick W. Daly. A Master Bibliographic Style File, for numerical, author–year, multilingual applications.
Nicolas Markey. Tame the BeaST, The B to X of BibTEX.
Oren Patashnik. Designing BibTeX Styles.
Michael Shell and David Hoadley. BibTEX Tips and FAQ.
### LaTeX Tabular More
The width and height of a cell in a tabular is controlled by many parameters. Read the following codes:
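The codes in question did not reach this page; a reconstruction consistent with the surrounding description (2cm and 3cm columns, \tabcolsep of 1cm, doubled row height) would be:

```latex
\setlength{\tabcolsep}{1cm}        % default is 6pt
\renewcommand{\arraystretch}{2}    % double the default row height
\begin{tabular}{|p{2cm}|p{3cm}|}
  \hline
  first cell & second cell \\ \hline
  third cell & fourth cell \\ \hline
\end{tabular}
```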
You specify p{2cm} as part of your \begin{tabular}{} arguments, in the hope that the first column in the table is 2cm wide. Unfortunately, it appears wider than you think. The width of a cell in the table, without regard to the line width of separators, is actually computed by
\tabcolsep + p{length} + \tabcolsep.
The length you specify gives room to contain characters in the cell. Between the left separator and the left side of the bounding box of the first character, there is some room which is controlled by \tabcolsep, such that the cell will not look too crowded. It is the same on the right side of the cell. In other words, \tabcolsep ensures that the contents of the cell are not positioned right next to the borders, which looks rather ugly.
By default, \tabcolsep is set to 6pt, which equals about 2.11mm. In the above codes, we re-set it to 1cm. So the total width of the first column in the table is 4cm, while the second column is 5cm wide.
The mechanism of the height of a cell is a little bit different. In the default setting, the distance between the upper border and lower border of a cell is \baselineskip, which is the line spacing in paragraphs. If you look at two adjacent lines of text in a paragraph, \baselineskip is the distance between the two baselines of the text. \baselineskip is specified at the font selection stage. The primitive command \fontsize{size}{skip} sets this value. Usually a 10pt font size is associated with a 12pt line skip.
The command \arraystretch scales the height of the cell by a factor. As in the above codes, the spacing of a row in the table is 2 times the default.
Note that the default height of a row in a tabular cannot be changed by manually setting \baselineskip. To my current knowledge, the height of a row can only be changed by specifying a different \arraystretch factor. (Similarly, if you want to change the line spacing in the text, such as to double spacing, do not change \baselineskip. Use \baselinestretch instead.)
https://direct.mit.edu/evco/article/27/2/267/1089/Constraint-Handling-Guided-by-Landscape-Analysis
## Abstract
The notion and characterisation of fitness landscapes has helped us understand the performance of heuristic algorithms on complex optimisation problems. Many practical problems, however, are constrained, and when significant areas of the search space are infeasible, researchers have intuitively resorted to a variety of constraint-handling techniques intended to help the algorithm manoeuvre through infeasible areas and toward feasible regions of better fitness. It is clear that providing constraint-related feedback to the algorithm to influence its choice of solutions overlays the violation landscape with the fitness landscape in unpredictable ways whose effects on the algorithm cannot be directly measured. In this work, we apply metrics of violation landscapes to continuous and combinatorial problems to characterise them. We relate this information to the relative performance of six well-known constraint-handling techniques to demonstrate how some properties of constrained landscapes favour particular constraint-handling approaches. For the problems with sampled feasible solutions, a bi-objective approach was the best performing approach overall, but other techniques performed better on problems with the most disjoint feasible areas. For the problems with no measurable feasibility, a feasibility ranking approach was the best performing approach overall, but other techniques performed better when the correlation between fitness values and the level of constraint violation was high.
## 1 Introduction
Complex optimisation problems of realistic dimensions are generally not solvable with deterministic methods, hence the popularity of metaheuristics. The optimiser takes feedback from a fitness function which defines the “landscape” that the algorithm moves in. Choosing the best algorithm for a given problem is an optimisation problem in itself.
Research into fitness landscapes has been motivated by the goal of finding characteristics that help select the most successful algorithm for a given class of problems. Many metrics for fitness landscapes have been proposed in the last three decades (Malan and Engelbrecht, 2013). While they sometimes succeed in explaining algorithm performance, they generally do not account for the effects constraints have on the landscape.
Many practical problems have constraints and a number of constraint-handling techniques have been proposed to tackle them (Michalewicz, 1995; Coello Coello, 1999). These techniques can be largely categorised into penalty functions, repair approaches, feasibility maintenance, and multi-objective approaches. The success of different constraint-handling approaches depends not only on the parameter settings, but also on the features of the problem. Michalewicz (1995) observed that the death penalty, where infeasible solutions are rejected, performs badly when the proportion of feasible solutions is low, because an evolutionary algorithm cannot collect information to build on when most solutions are discarded. Penalty methods in general are assumed to be unsuitable for problems where the optimum is on the boundary between feasible and infeasible space or when the feasible region is disjoint (Mallipeddi and Suganthan, 2010).
Michalewicz and Schoenauer (1996) raised the question of whether features of the problem can guide the choice of which constraint-handling technique is to be incorporated into an evolutionary algorithm. The objective of this study was to answer this question with the focus on problem landscape features measured through sampling of the search space. The study is significant as it covers problems and algorithmic approaches from both combinatorial and continuous domains. We provide empirical evidence that high-level landscape features can be used as a guiding principle when deciding on the appropriate use of constraint-handling techniques for a new problem. An additional contribution is the introduction of the Broken Hill Problem: a generator of combinatorial problems with tunable levels of complexity and constraint violation that could be used in further studies of constrained search spaces.
## 2 Background
In general, the challenge of selecting the most appropriate algorithm to solve a given problem was formulated by Rice (1976) and was applied in the context of optimisation by Smith-Miles (2008). Further studies have investigated solving aspects of the algorithm selection problem for optimisation using fitness landscape analysis (Bischl et al., 2012; Malan and Engelbrecht, 2014; I. Moser et al., 1997; Muñoz et al., 2013; Ochoa et al., 2014), but all of these studies have been restricted to unconstrained (or only bound constrained) problems.
Most real-world optimisation problems have constraints defined on the search space. Consider, for example, the 25 papers published in the “Real World Applications” session of the GECCO 2016 conference (Friedrich, 2016). The papers cover a range of interesting and complex optimisation problems including habitat restoration planning, optimisation of chemical processes, and office space allocation. Of the 21 papers that present solutions to optimisation problems, 16 are constrained problems, some with as many as ten different constraints.
Existing techniques for characterising fitness landscapes are not necessarily appropriate for constrained landscapes. For example, autocorrelation (Weinberger, 1990; Manderick et al., 1991) is frequently used to characterise landscape ruggedness, but this technique assumes that the problem landscape is statistically isotropic (statistical information is invariant with respect to the starting position of a random walk when it is long enough). If a penalty function is used to convert a problem from a constrained into an unconstrained problem, this can result in a landscape that is no longer statistically isotropic (Greenwood and Hu, 1998a) and can therefore give misleading results on the ruggedness of a landscape (Greenwood and Hu, 1998b). Alternative approaches are needed for characterising constrained landscapes. Poursoltan and Neumann (2014) proposed a method for quantifying the ruggedness of constrained continuous landscapes. This method uses stochastic ranking (Runarsson and Yao, 2000) to perform a biased walk that favours feasible solutions. The measure of ruggedness, however, characterises only the fitness landscape.
Malan et al. (2015) proposed the notion of a violation landscape to be studied with fitness landscapes and introduced five metrics to capture its properties. In this study, we apply the same metrics to constrained problems in both continuous and combinatorial spaces, assess their capabilities in capturing the characteristics of the constrained landscape of a problem and relate the characteristics to six basic constraint handling techniques which are applied in conjunction with evolutionary metaheuristics.
## 3 Constrained Optimisation Benchmarks
In the continuous optimisation domain, a suite of constrained benchmark problems was developed and extended over the years for evolutionary algorithms (Michalewicz and Schoenauer, 1996; Koziel and Michalewicz, 1999; Runarsson and Yao, 2000) to finally form the basis of a competition of the IEEE Congress on Evolutionary Computation (CEC) in 2006 (Liang et al., 2006). A new set of 18 scalable problems were defined for the 2010 CEC competition (Mallipeddi and Suganthan, 2010) and these problems were used for the continuous problems in this study.
Benchmark problems with known and definable constrained areas do not exist in the combinatorial space. In combinatorial problems, the steps through the search landscape are naturally discrete, and gradients to optima can easily be devised. In this article, we introduce the Broken Hill Problem, an optimisation challenge where sequences of assignments can be defined such that each consecutive assignment adds to the value of the solution. At the same time, sequences can be defined as infeasible, rendering a solution invalid. In this way, it is possible to design instances that have longer or shorter gradients with longer or shorter infeasible sequences along these gradients. These instances can be used to test the ability of an algorithm to traverse shorter and longer infeasible areas.
Section 3.1 describes the Broken Hill Problem for tuning constrained combinatorial problems and Section 3.2 describes the CEC 2010 problem suite.
### 3.1 Broken Hill Problem
Devised as a test problem for constraint handling methods, the Broken Hill Problem (BHP) is an assignment problem with a sequence of locations to allocate items to. Items have a numerical value and duplicate assignments—assignments of items with the same value—are permitted. Consecutive allocations of the same item carries a reward, but constraints are applied to sequences of duplicated items of particular lengths.
A solution $x$ of the BHP, with $m$ items to be allocated to $n$ locations, is defined as:
$x = (x_1, x_2, \ldots, x_n), \quad \text{where } x_i \in \{1, 2, \ldots, m\}.$
(1)
Given a solution $x$, a duplication sequence is two or more successive occurrences of the same item in $x$. For each solution $x$, two structures are defined: a list $S$ of all duplication sequences in $x$ and a set $I$ of all indices of $x$ that are not part of any duplication sequence. For example, a solution $x = (8,3,7,7,2,5,5,5,5,8)$ would result in list $S = \{\{7,7\}, \{5,5,5,5\}\}$ and set $I = \{1,2,5,10\}$.
Given $S$ and $I$ above, the fitness function for a BHP with $m$ items to be allocated to $n$ locations is defined as:
$f(x) = \sum_{s \in S} \big( |s|_k \cdot v(s) \big)^{\frac{|s|_k}{k} + 1} + \sum_{i \in I} \frac{x_i}{m \cdot n},$
(2)
where
• $k$ is the optimal length of a duplication sequence (a parameter to be set by the problem designer);
• $|s|_k$ is the length of a duplication sequence $s$ with a cap of $k$. If the number of items in $s$ is larger than $k$, $|s|_k = k$; and
• $v(s)$ is the value of one of the repeating items in a duplication sequence $s$.
The first summation term of Eq. (2) rewards duplication sequences through the exponent (with a value always $>$1 and a maximum of 2). The largest possible exponent occurs when the sequence length equals or exceeds the optimal length of $k$, which ensures that more value is given to sequences closer to the optimal length $k$. The second summation term of Eq. (2) can be regarded as the base landscape that can be tuned to be neutral, guiding or deceptive. The current formulation is a guiding landscape where higher-valued items contribute more to the fitness. In this formulation, the value of the items that are not part of duplication sequences are added, but scaled by $m·n$. This ensures that non-repeating items never dominate the fitness, since the highest-valued item carries a smaller weight than the lowest value when it repeats only once. A neutral base landscape can be formed by setting the second term to 0 and a deceptive landscape can be formed by rewarding lower valued non-repeating items.
Given the example of $x=(8,3,7,7,2,5,5,5,5,8)$, with $n$ = 10, $m$ = 8 and $k$ = 3,
$f(x) = (2 \cdot 7)^{\frac{2}{3}+1} + (3 \cdot 5)^{\frac{3}{3}+1} + \frac{8+3+2+8}{80} \approx 81.32 + 225 + 0.26 = 306.58.$
In this way longer duplication sequences (up to $k$) are rewarded exponentially more than shorter sequences. Non-repeating assignments do contribute to the fitness, but only a very small amount. The parameter $k$ has the effect of creating multiple “hills” that serve as gradients needed for stochastic optimisers to search a fitness space. The optimal solution is formed by a repetition of duplication sequences of $k$ items of maximal value (with value $m$), alternating with duplication sequences of $k$ items of next-to-maximal value (with value $m-1$). The fitness of the optimal solution is expressed by Eq. (3).
$f^* = \sum_{i=1}^{r_1} \big( k \cdot m \big)^{\frac{k}{k}+1} + \sum_{i=1}^{r_2} \big( k \cdot (m-1) \big)^{\frac{k}{k}+1} + r_3,$
(3)
where
$r_1 = \begin{cases} \left\lfloor \frac{n}{2k} \right\rfloor & \text{if } \left\lfloor \frac{n}{k} \right\rfloor \text{ is even} \\ \left\lfloor \frac{n}{2k} \right\rfloor + 1 & \text{if } \left\lfloor \frac{n}{k} \right\rfloor \text{ is odd} \end{cases} \qquad r_2 = \left\lfloor \frac{n}{2k} \right\rfloor$
$r_3 = \begin{cases} \big( (n \bmod k) \cdot m \big)^{\frac{n \bmod k}{k} + 1} & \text{if } (n \bmod k) > 1 \text{ and } \left\lfloor \frac{n}{k} \right\rfloor \text{ is even} \\ \big( (n \bmod k) \cdot (m-1) \big)^{\frac{n \bmod k}{k} + 1} & \text{if } (n \bmod k) > 1 \text{ and } \left\lfloor \frac{n}{k} \right\rfloor \text{ is odd} \\ \frac{m}{m \cdot n} & \text{if } (n \bmod k) = 1 \text{ and } \left\lfloor \frac{n}{k} \right\rfloor \text{ is even} \\ \frac{m-1}{m \cdot n} & \text{if } (n \bmod k) = 1 \text{ and } \left\lfloor \frac{n}{k} \right\rfloor \text{ is odd} \\ 0 & \text{otherwise.} \end{cases}$
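The fitness function of Eq. (2) and the worked example above can be reproduced with a short sketch (my own illustrative implementation, not the authors' code; the name `bhp_fitness` is invented):

```python
from itertools import groupby

def bhp_fitness(x, k, m, n):
    # Fitness of a BHP solution per Eq. (2): runs of length >= 2 are
    # duplication sequences; every other item contributes x_i / (m*n).
    fitness = 0.0
    for value, group in groupby(x):
        length = len(list(group))
        if length >= 2:                  # a duplication sequence s
            capped = min(length, k)      # |s|_k
            fitness += (capped * value) ** (capped / k + 1)
        else:                            # an index in I
            fitness += value / (m * n)
    return fitness

# The worked example: x = (8,3,7,7,2,5,5,5,5,8) with n=10, m=8, k=3
print(bhp_fitness([8, 3, 7, 7, 2, 5, 5, 5, 5, 8], 3, 8, 10))
```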
#### 3.1.1 Constraints
As a benchmark problem for constraint-handling techniques, the BHP defines part of the path to an optimum as infeasible. The constrainedness variable $c$ defines what subsequence of a duplication sequence is infeasible. The offset $o$ permits the researcher to set the length at which infeasible duplication sequences start. More precisely, any solution containing a duplication sequence of a length in the range $o+1, \ldots, o+c$ is defined as infeasible. For example, if $o=2$ and $c=3$, then any solution containing a duplication sequence of length 3, 4, or 5 is infeasible. For a "broken hill," it is desirable for $k$ to be greater than $o+c$, so that the optimum is feasible and located beyond the infeasible area.
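This constraint rule can be sketched the same way (again my own illustrative code; `bhp_is_feasible` is an invented name):

```python
from itertools import groupby

def bhp_is_feasible(x, o, c):
    # A solution is infeasible if it contains any duplication sequence
    # whose length lies in the range o+1 .. o+c.
    run_lengths = (len(list(g)) for _, g in groupby(x))
    return not any(o + 1 <= r <= o + c for r in run_lengths)

# With o=2, c=3: duplication sequences of length 3, 4, or 5 are infeasible.
```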
Figure 1 illustrates an example path to a local optimum in a neutral base landscape (i.e., the second term of (2) is set to 0), with $k=5$, $o=1$, and $c=3$. The blue dots show the fitness increasing as the solution changes (the “hill''), while the red dots show the constraint violations (“broken” parts of the hill). The violations are counted as the smallest distance to the feasible area, leading to two peaks at steps 2 and 8, where the solution contains a duplication sequence of length 3 (in the middle of the range of infeasible sequences).
Figure 1:
Development of fitness and constraint violations during a hill climb towards a local optimum with $c=3$ and $k=5$. The local optimum would be attained with a 10th step that changes the item 5 of step 5 back to an item 4.
In Figure 1, the initial solution, (5, 6, 3, 7, 9, 4, 1, 2, 8, 7), contains no duplication sequences, and the fitness and level of constraint violation are zero. Each of the steps 1--5 changes an assignment to a 5, which leads to six contiguous items of 5. Since $k=5$, setting the sixth assignment to a 5 (step 5) has no effect on the fitness. Step 6 also has no effect on the fitness, since the 4 does not repeat.
Step 7 produces the second 4 in the sequence, and the fitness rises again. Steps 8 and 9 complete the sequence of four items of value 4. The hill climb does not end here, because it is possible to make one more improvement by exchanging the 5 made by step 5 to a 4 and achieve two sequences of five 5's and five 4's respectively. From this solution, it is not possible to reach the global best of five 9's and five 10's without making deteriorating moves.
The base landscape in the example is neutral. Note that defining it as guiding or deceptive does not change the (lack of) contribution to the objective function of the sixth five assigned in step 5, exceeding $k$.
### 3.2 Continuous Optimisation Benchmarks
In general, a constrained real-valued minimisation problem can be defined as follows:
$\text{Minimise } f(x), \quad x = (x_0, x_1, \ldots, x_{n-1}) \in \mathbb{R}^n$

(4)

$\text{subject to } g_i(x) \le 0, \; i = 1, \ldots, p; \qquad h_j(x) = 0, \; j = p+1, \ldots, m,$

(5)
where $x$ is an $n$-dimensional solution to the problem, $f(x)$ is the fitness function to be minimised, $p$ is the number of inequality constraints, $g_i(x)$, and $m-p$ is the number of equality constraints, $h_j(x)$. Equality constraints are usually expressed as inequalities within an error margin, so a solution $x$ is feasible if it satisfies all the inequality constraints and
$|h_j(x)| - ε \le 0, \quad j = p+1, \ldots, m$
(6)
for some small value of $ε$, such as $10^{-4}$.
A number of continuous constrained optimisation benchmark suites have been proposed and extended over the years (Michalewicz and Schoenauer, 1996; Koziel and Michalewicz, 1999; Runarsson and Yao, 2000; Mezura-Montes and Coello Coello, 2004). For the CEC 2010 Competition on Constrained Real-Parameter Optimization (Mallipeddi and Suganthan, 2010), a new set of 18 problems were defined that were harder to solve than the previous benchmark problems and were scalable to any dimension. These problems were used in this study to generate the continuous problem instances.
### 3.3 Problem Instances
For the combinatorial domain, 70 BHP problem instances with varied sizes of infeasible areas and offset values were defined. The length of the solution (number of locations for assignment) was set as $n = 50$, and the number of options to assign as $m \in \{8, 10\}$. These values were combined with infeasible ranges of $c \in \{1, \ldots, 9\}$ and offsets of $o \in \{1, \ldots, 5\}$ (stopping when $o + c = k$). The optimal length of duplication sequences $k$ was set to 10 in all instances.
Small offsets mean that the infeasible range begins “early” in a path “up a hill”; large offsets allow the algorithm to build solutions of higher quality (beyond the quality of the base landscape) without having to “cross” the infeasible range defined by $c$. The higher $c$, the longer the successive assignments that render a solution infeasible. For example, an instance with $o=5$ and $c=1$ would have small infeasible areas (solutions with duplication sequences of length 6), whereas an instance with $o=1$ and $c=9$ would have feasible solutions at the bottom of the hills (containing no duplication sequences), and all other solutions, including those with optimal fitness, would be infeasible.
For the continuous domain, the 18 CEC 2010 problems (Mallipeddi and Suganthan, 2010), which are problems of the form of Eqs. (4) and (5), were defined in 2, 10, 20, and 30 dimensions, resulting in 72 problem instances. The problems have different objective functions and numbers of inequality and equality constraints, and for most problems, the constraints are rotated to prevent feasible patches that are parallel to the axes. Optimal solutions are not reported for these functions, so the quality of solutions returned by algorithms is reported in relation to the quality of solutions reported by other algorithms.
All of the CEC 2010 problems with equality constraints have a reported feasibility region in 10 and 30 dimensions of approximately zero in relation to the size of the search space. This does not imply that there are no feasible solutions. Since equality constraints are commonly re-expressed as inequality constraints within an error margin, each equality constraint effectively defines a very narrow margin of feasibility around the equality constraint function. These regions are so narrow that they do not significantly contribute to the size of the feasible set in relation to the overall size of the search space.
## 4 Constrained Landscape Characterisation
The notion of a violation landscape was introduced by Malan et al. (2015) as a complementary viewpoint to fitness landscapes for constrained optimisation problems. Stadler (2002) defines a fitness landscape as consisting of three elements: a set $X$ of solutions to the problem; a notion of neighbourhood, nearness, distance, or accessibility on the set; and a fitness function. A violation landscape simply replaces the fitness function with a violation function, $φ:X→R$, which quantifies the level of constraint violation for all solutions.
For the BHP, the level of constraint violation of a solution $x$ containing the list of duplication sequences $S$ is defined by Eq. (7), which sums the violations over all duplication sequences.
$φ(x) = \sum_{s \in S} φ(s),$
(7)
where
$φ(s) = \begin{cases} |s| - o, & \text{if } 0 < (|s| - o) \le \frac{c}{2} \\ c - (|s| - o) + 1, & \text{if } \frac{c}{2} < (|s| - o) \le c \\ 0, & \text{otherwise} \end{cases}$
(8)
and $|s|$ is the length of a duplication sequence $s$.
The level of constraint violation of a solution should ideally reflect the distance to feasible space when used in penalty functions in order to direct the search algorithm towards feasible regions (Richardson et al., 1989). For this reason, Eq. (8) defines $φ$ such that it is highest in the middle of the range defined by $c$.
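As a concrete reading of Eqs. (7)--(8), the sketch below computes the violation level of a BHP solution in Python. The run extraction, which treats every maximal run of identical assignments of length at least 2 as a duplication sequence, is our assumption rather than a detail stated in the paper:

```python
from itertools import groupby

def violation(x, o, c):
    """Level of constraint violation of a BHP solution x (Eqs. 7-8).

    o is the offset and c the length of the infeasible range. Maximal runs
    of identical assignments (length >= 2) are treated as duplication
    sequences; this run extraction is our reading of the paper.
    """
    total = 0.0
    for _, run in groupby(x):
        length = sum(1 for _ in run)
        if length < 2:
            continue                    # single items are not duplication sequences
        d = length - o
        if 0 < d <= c / 2:
            total += d                  # rising half of the "tent"
        elif c / 2 < d <= c:
            total += c - d + 1          # falling half of the "tent"
        # otherwise the run is feasible and contributes 0
    return total
```

With $o=1$ and $c=3$ as in Figure 1, a run of three identical items yields the peak violation of 2, while a run of five items is feasible again.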
For the continuous problems, the level of constraint violation was defined as follows (Mallipeddi and Suganthan, 2010):
$φ(x) = \frac{\sum_{i=1}^{p} G_i(x) + \sum_{j=p+1}^{m} H_j(x)}{m},$
(9)
where
$G_i(x) = \begin{cases} g_i(x) & \text{if } g_i(x) > 0 \\ 0 & \text{if } g_i(x) \le 0, \end{cases}$
(10)
and
$H_j(x) = \begin{cases} |h_j(x)| & \text{if } |h_j(x)| - ε > 0 \\ 0 & \text{if } |h_j(x)| - ε \le 0, \end{cases}$
(11)
and $ε$ is defined as $10^{-4}$.
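Eqs. (9)--(11) translate directly into code. A minimal sketch, where the constraint functions are passed as plain Python callables (our own framing, not part of the benchmark definition):

```python
def mean_violation(x, gs, hs, eps=1e-4):
    """Mean constraint violation of Eqs. (9)-(11).

    gs holds the inequality constraint functions g_i (feasible when
    g_i(x) <= 0) and hs the equality constraint functions h_j (feasible
    when |h_j(x)| <= eps).
    """
    m = len(gs) + len(hs)
    total = sum(max(g(x), 0.0) for g in gs)                              # Eq. (10)
    total += sum(abs(h(x)) if abs(h(x)) - eps > 0 else 0.0 for h in hs)  # Eq. (11)
    return total / m                                                     # Eq. (9)
```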
The following sections describe the metrics (proposed by Malan et al. (2015)) used to characterise the constrained space.
### 4.1 FsR Metric
The feasibility ratio (FsR) approximates the size of the feasible space in relation to the entire search space. The metric is based on a sample of $n$ solutions and is defined as:
$\mathrm{FsR} = \frac{n_f}{n},$
(12)
where $n_f$ is the number of points in the sample that are feasible.
### 4.2 RFB$×$ Metric
The ratio of feasible boundary crossings (RFB$×$) quantifies how disjoint the feasible regions are. Based on walks through the search space, RFB$×$ counts the proportion of steps that cross between feasible and infeasible space. More formally, given a sequence of $n$ solutions, $x_1, x_2, \ldots, x_n$, obtained by a walk, the binary string $b = b_1, b_2, \ldots, b_n$ is defined such that $b_i = 0$ if $x_i$ is feasible and $b_i = 1$ if $x_i$ is infeasible. RFB$×$ is then defined as:
$\mathrm{RFB}{\times} = \frac{\sum_{i=1}^{n-1} \mathrm{cross}(i)}{n-1}$
(13)
where
$\mathrm{cross}(i) = \begin{cases} 0 & \text{if } b_i = b_{i+1} \\ 1 & \text{otherwise.} \end{cases}$
(14)
Note that if FsR is 0 (no solutions in the sample are feasible) or 1 (all solutions in the sample are feasible), then RFB$×$ based on the same sample will be 0.
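Both metrics reduce to a few lines over a sequence of feasibility flags; a sketch (the boolean-flag representation is our own framing):

```python
def fsr(feasible_flags):
    """Feasibility ratio (Eq. 12): fraction of sampled points that are feasible."""
    return sum(feasible_flags) / len(feasible_flags)

def rfbx(feasible_flags):
    """Ratio of feasible boundary crossings (Eqs. 13-14) over one walk.

    feasible_flags holds one boolean per step of the walk, True where the
    solution is feasible (equivalent to the paper's b_i = 0).
    """
    n = len(feasible_flags)
    crossings = sum(feasible_flags[i] != feasible_flags[i + 1]
                    for i in range(n - 1))
    return crossings / (n - 1)
```

A walk that is entirely feasible (or entirely infeasible) gives `rfbx` of 0, matching the note above.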
### 4.3 FVC Metric
The fitness violation correlation (FVC) quantifies the extent to which the fitness and violation guide search in a similar direction. Based on a sample of solutions resulting in fitness-violation pairs, the FVC is calculated as the Spearman's rank correlation coefficient between the fitness and violation values. For maximisation problems, the fitness values are negated before calculating FVC, so that the metric can be interpreted in the same way for both minimisation and maximisation problems.
The range of FVC values is $[-1,1]$, where a value of 1 indicates that fitness and violation guide search in the same direction, whereas a value of $-$1 indicates that the fitness and violation landscapes guide search in maximally opposite directions.
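FVC is simply Spearman's rank correlation applied to the sampled (fitness, violation) pairs. A dependency-free sketch with a hand-rolled ranking helper (`scipy.stats.spearmanr` would do the same job):

```python
def _ranks(values):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def fvc(fitness, violation, maximisation=False):
    """Fitness-violation correlation: Spearman's rank correlation between
    the fitness and violation values of a sample. Fitness is negated for
    maximisation problems so that +1 always means fitness and violation
    guide search in the same direction."""
    f = [-v for v in fitness] if maximisation else list(fitness)
    rf, rv = _ranks(f), _ranks(violation)
    n = len(rf)
    mf, mv = sum(rf) / n, sum(rv) / n
    cov = sum((x - mf) * (y - mv) for x, y in zip(rf, rv))
    sf = sum((x - mf) ** 2 for x in rf) ** 0.5
    sv = sum((y - mv) ** 2 for y in rv) ** 0.5
    return cov / (sf * sv)
```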
### 4.4 IZ Metrics
If one considers the scatterplot of fitness-violation pairs of a sample of solutions, the “ideal zone” would be the area of this plot where fitness is good and violations are low (bottom left corner for a minimisation problem). A large proportion of solutions in the ideal zone could indicate relatively large basins of attraction around the better solutions in a landscape combining fitness and violation functions (as with a penalty-based approach). On the other hand, a small proportion of solutions in the ideal zone could indicate narrow basins of attraction (isolated optima) in a penalised fitness landscape. The IZ metrics quantify the proportion of points in a sample that are in two of these ideal zones.
Metric 25_IZ is defined as the proportion of points in a sample that are below the 50th percentile (the median) for both fitness and violation for minimisation problems, and above the 50th percentile for fitness and below the 50th percentile for violation for maximisation problems. If the sample of points is distributed evenly throughout the fitness-violation scatterplot, 25_IZ would have a value of 0.25.
Metric 4_IZ is defined as the proportion of points in a sample that are below the 20th percentile for both fitness and violation for minimisation problems, and above the 20th percentile for fitness and below the 20th percentile for violation for maximisation problems. If the sample of points is distributed evenly throughout the fitness-violation scatterplot, then 4_IZ would have a value of 0.04.
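A sketch of both IZ metrics as one parameterised function; the nearest-rank percentile used here is our implementation choice, not specified by the paper:

```python
def iz(fitness, violation, fit_pct, viol_pct, maximisation=False):
    """Proportion of sampled points inside an "ideal zone": fitness below
    the fit_pct percentile (above the (100 - fit_pct) percentile for
    maximisation) and violation below the viol_pct percentile.
    25_IZ uses fit_pct=viol_pct=50 and 4_IZ uses fit_pct=viol_pct=20."""
    def percentile(values, p):
        # crude nearest-rank percentile; numpy.percentile would also do
        s = sorted(values)
        idx = max(0, min(len(s) - 1, int(round(p / 100 * len(s))) - 1))
        return s[idx]

    f_cut = percentile(fitness, (100 - fit_pct) if maximisation else fit_pct)
    v_cut = percentile(violation, viol_pct)
    if maximisation:
        hits = sum(1 for f, v in zip(fitness, violation) if f > f_cut and v < v_cut)
    else:
        hits = sum(1 for f, v in zip(fitness, violation) if f < f_cut and v < v_cut)
    return hits / len(fitness)
```

For a sample where fitness and violation are perfectly correlated, roughly half the points fall in the 25% ideal zone; for perfectly anti-correlated values the zone is empty.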
## 5 Constraint Handling Techniques
This section specifies the base algorithms used to solve the combinatorial and continuous problems and describes the six constraint handling approaches used in this study.
### 5.1 Base Optimisation Algorithms
The aim of this study was to investigate different constraint handling techniques as abstractions from the underlying optimisation approach. To isolate the effect of the constraint handling from the underlying search algorithm, the same base algorithm was used for all combinatorial problems and for all continuous problems, namely a genetic algorithm (GA) for the combinatorial problems and a differential evolution (DE) algorithm (Storn and Price, 1995) for the continuous problems.
For the optimisation of the combinatorial BHP, we chose a simple generational EA with a population size of 100, a mating pool of 50% formed using binary tournament selection (ensuring no duplicates), uniform crossover with a probability of 0.01, and a mutation rate of $1/n$. The initial population was created with uniform random assignments. For selection, solutions were valued as prescribed by the respective constraint handling technique. The algorithm stopped when a budget of function evaluations had been spent.
For the continuous problems, the classic form of DE, DE/rand/1 (Storn and Price, 1996), was used as the base algorithm with uniform crossover, a population size of 100, a scale factor ($F$) of 0.5, and a crossover rate of 0.5.
The base algorithms were combined with one of the following six constraint handling techniques.
### 5.2 No Constraint Handling
The simplest implementation disregards constraints in the hope of discovering a good-quality solution that happens to be feasible. For selection, solutions are compared by fitness alone. This approach is abbreviated to NCH.
### 5.3 Death Penalty
The death penalty (DP) rejects infeasible solutions. For the combinatorial problems, DP was implemented through the selection operator by removing all infeasible solutions from the population. Some BHP configurations (e.g., low values of $o$) lead to high levels of infeasibility. For this reason, the algorithm ensures that the population is valid after initialisation.
For the continuous problems, many of the instances have a zero feasibility ratio. It was therefore not possible to ensure a valid initial population in all cases. For this reason, DP was implemented in the continuous domain by assigning all infeasible solutions a constant maximum fitness value (the maximum real value using a double representation as defined by the compiler).
### 5.4 Weighted Penalty
The weighted penalty (WP) approach combines the level of constraint violation as a penalty in the fitness function using a static weighting between the two components. In this study, an even weighting of 50% penalty and 50% fitness was applied.
### 5.5 Feasibility Ranking
The feasibility ranking approach of Deb (2000) uses different comparison criteria depending on feasibility and is implemented as follows:
• Two feasible solutions are compared by fitness.
• A feasible solution is preferred to an infeasible one.
• Two infeasible solutions are compared by their level of constraint violation.
This approach is abbreviated to FR.
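The three rules above can be sketched as a pairwise comparator; the (fitness, violation) tuple representation is our own framing:

```python
def fr_better(a, b):
    """Deb's feasibility-ranking comparison for a minimisation problem:
    return True if solution a is preferred over b. Each solution is a
    (fitness, violation) pair; a violation of 0 means feasible."""
    fa, va = a
    fb, vb = b
    if va == 0 and vb == 0:     # both feasible: compare by fitness
        return fa < fb
    if va == 0 or vb == 0:      # exactly one feasible: prefer it
        return va == 0
    return va < vb              # both infeasible: compare by violation
```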
### 5.6 $ε$-Based Feasibility Ranking
Takahama and Sakai (2005) published an extended feasibility ranking with an $ε$ tolerance that reduces over time, denoted $ε$FR:
• Two solutions whose constraint violations are lower or equal to $ε$ are compared by their fitnesses $f(x)$.
• Two solutions whose violations $φ(x)$ are equal are compared by their fitnesses $f(x)$.
• All other solutions are compared by their feasibility violations $φ(x)$.
The most informative description of how to set and adapt the parameter $ε$ is given in a later publication (Takahama and Sakai, 2006). It defines a cut-off, $T_c$, of generations after which $ε = 0$. The cut-off can be set in the range $[0.1, 0.8] \times T_{max}$, with $T_{max}$ representing the maximum number of generations. Following Takahama and Sakai (2006), the cut-off was defined in terms of the maximum function evaluations ($FE_{max}$) as $FE_c = 0.8 \times FE_{max}$. Before $FE_c$ had been reached, $ε$ was reset whenever a comparison was made between solutions as follows
$ε = φ(x_θ) \times \left(1 - \frac{FE_i}{FE_c}\right)^{cp},$
(15)
where $x_θ$ is the $θ$-th solution in a population ordered by violations $φ(x)$ from lowest to highest (least to most infeasible), $FE_i$ is the current number of function evaluations, and $cp$ is a parameter to control the speed at which the relaxation of constraints is reduced. Following Takahama and Sakai (2006), $θ$ was set to $0.8 \times popsize$ and $cp$ was set to 5.
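A sketch of the $ε$ schedule of Eq. (15) together with the $ε$-level comparison; the 0-based indexing of the $θ$-th population member is our interpretation:

```python
def epsilon(violations, fe_i, fe_max, theta_frac=0.8, cp=5, tc_frac=0.8):
    """Epsilon schedule of Eq. (15). violations are the violation values of
    the current population, fe_i the function evaluations spent so far, and
    fe_max the total budget. Epsilon drops to 0 once tc_frac of the budget
    has been spent (FE_c = tc_frac * FE_max)."""
    fe_c = tc_frac * fe_max
    if fe_i >= fe_c:
        return 0.0
    ordered = sorted(violations)                 # least to most infeasible
    theta = int(theta_frac * len(ordered)) - 1   # theta-th solution, 0-based
    return ordered[theta] * (1 - fe_i / fe_c) ** cp

def efr_better(a, b, eps):
    """Epsilon-level comparison of (fitness, violation) pairs, minimisation:
    return True if solution a is preferred over b."""
    (fa, va), (fb, vb) = a, b
    if (va <= eps and vb <= eps) or va == vb:
        return fa < fb       # both within tolerance or equally infeasible
    return va < vb           # otherwise compare by violation
```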
### 5.7 Bi-Objective Formulation
An alternative approach to a penalty or a feasibility ranking method is to model the constraint violations as a second objective, abbreviated to BO (Bi-objective).
The non-dominated sorting procedure of NSGA II (Deb et al., 2002) was used for the ranking of solutions during mating pool selection for the GA and for the selection of the next generation from the current population and trial vector population for the DE. Ties were broken randomly. The generational selection was carried out as described by Deb et al. (2002): successive fronts were added until the prescribed population size had been attained.
## 6 Results
This section provides results on the performance of the different constraint-handling approaches on the problem instances and then characterises the problem instances using the landscape metrics described in Section 4.
### 6.1 Algorithm Performance
The purpose of the first set of experiments was to determine whether there are differences in the performance of the constraint-handling techniques on the benchmark problems. The expectation was that different algorithms would be suited to different problems, giving a range of performances.
Thirty independent runs of each algorithm were conducted on each problem instance. The budget of function evaluations was set to $20000 \times D$ (where $D$ is the dimension) for the continuous problems and 500000 for the discrete problem instances. The performances of algorithms were compared according to the CEC competition rules (Mallipeddi and Suganthan, 2010):
• If two algorithms produced feasible solutions in all of the 30 runs, they were compared by the mean fitness of the best feasible solutions produced in the final generation of each run.
• If at least one of the two algorithms produced feasible solutions in some but not all of the runs, they were compared by the proportion of runs that produced feasible solutions (known as the success rate).
• If two algorithms had zero runs with feasible solutions (a success rate of 0), the algorithms were compared by the mean of the violations of the least infeasible solutions in the final generation of each run.
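The three comparison rules can be sketched as a pairwise comparator. The dict keys are our own naming, and ties are taken at three decimal places, matching the tie rule used in the ranking procedure:

```python
def compare_runs(a, b):
    """Compare two algorithms on one problem instance following the CEC 2010
    rules. a and b are dicts with keys 'success_rate' (fraction of runs
    ending feasible), 'mean_fitness' (of the best feasible solutions,
    minimisation), and 'mean_violation' (of the least infeasible solutions).
    Returns -1 if a is better, 1 if b is better, 0 if tied."""

    def cmp3(x, y, lower_is_better=True):
        # algorithms are considered different only from the third decimal
        x, y = round(x, 3), round(y, 3)
        if x == y:
            return 0
        a_better = (x < y) if lower_is_better else (x > y)
        return -1 if a_better else 1

    if a['success_rate'] == 1 and b['success_rate'] == 1:
        return cmp3(a['mean_fitness'], b['mean_fitness'])            # rule 1
    if a['success_rate'] > 0 or b['success_rate'] > 0:
        return cmp3(a['success_rate'], b['success_rate'],
                    lower_is_better=False)                           # rule 2
    return cmp3(a['mean_violation'], b['mean_violation'])            # rule 3
```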
For each problem instance, the algorithms were assigned a rank based on their relative performance using the rules above. Two algorithms were considered different only if their values differed from the third decimal place. Tied algorithms received the same rank, and the subsequent rank remained unassigned. For example, Table 1 shows the ranks assigned to six constraint-handling techniques for two problem instances of the CEC2010 suite. For C01 in 2 dimensions, three of the techniques (DP, FR, and $ε$FR) achieved a success rate of 1 (all 30 runs resulted in feasible solutions). Comparing the mean fitness of these algorithms, DP and $ε$FR both achieved the lowest value of $-$0.365, so these two algorithms share the rank of 1 and FR is ranked 3. The remaining three algorithms are assigned the ranks of 4, 5, and 6 based on mean violation.
Table 1:
Ranking of algorithms based on performance measures of two example problem instances from the CEC2010 problem suite.
CEC2010 problem C01 in 2 dimensions

| | NCH | DP | WP | FR | $ε$FR | BO |
| --- | --- | --- | --- | --- | --- | --- |
| Success rate | 0 | 1 | 0 | 1 | 1 | 0 |
| Mean fitness of successful runs | n/a | −0.365 | n/a | −0.362 | −0.365 | n/a |
| Mean violation | 0.320 | | 0.179 | | | 0.314 |
| Algorithm rank | 6 | 1 | 4 | 3 | 1 | 5 |

CEC2010 problem C09 in 20 dimensions

| | NCH | DP | WP | FR | $ε$FR | BO |
| --- | --- | --- | --- | --- | --- | --- |
| Success rate | 0 | 0 | 0.033 | 0 | 0 | 0.967 |
| Mean fitness of successful runs | n/a | n/a | 0.000 | n/a | n/a | 0.000 |
| Mean violation | 8.650 | 703.648 | 4.187 | 0.278 | 0.257 | 0.037 |
| Algorithm rank | 5 | 6 | 2 | 4 | 3 | 1 |
On problem C09 in 20 dimensions (the second example in Table 1), the BO algorithm achieved a success rate of 0.967 (29 of the 30 runs found a feasible solution), the WP algorithm achieved a success rate of 0.033 (only 1 of the 30 runs found a feasible solution), and the other algorithms achieved success rates of 0. BO and WP are compared based on their success rates, so BO is assigned rank 1 and WP rank 2. The remaining algorithms are ranked based on the mean violation, with DP achieving the worst rank of 6.
The performance of the different algorithms on all 142 problem instances (70 BHP and 72 CEC2010 instances) in terms of ranks is summarised by the boxplot in Figure 2. The whiskers on the plots show that the range of all algorithms is 1 to 6, meaning that each algorithm performed the best and also the worst in at least one problem instance. The median lines inside the boxes show that NCH performed the worst overall with a median rank of 6, whereas BO performed the best overall with a median rank of 2.
Figure 2:
Distribution of algorithm ranks over all problem instances.
An alternative view of performance is to consider whether an algorithm failed or succeeded on a problem instance. Considering the first example in Table 1, the three algorithms with a success rate of 1 have very similar mean fitness values and can all three be regarded as reasonably successful. In contrast, the other three algorithms clearly failed on this problem in comparison to the other approaches.
In this study, an algorithm was regarded as successful on a problem instance if it met either of the two criteria below:
1. It was the best-performing algorithm as defined by the sub-rules 1a--1c below.
• (1a) If both algorithms had a success rate of $≥0.5$, the better algorithm was decided by comparing the mean fitnesses of all feasible runs.
• (1b) If one algorithm had a success rate of $≥0.5$ while the other had a success rate of $<0.5$, the one with the higher success rate was considered better.
• (1c) If both algorithms had a success rate of $<0.5$, the better algorithm was decided by comparing the mean violations of all runs.
2. It was within $10^{-1}$ of the best algorithm on the measure used to compare it, as defined by the sub-rules 1a--1c above (mean fitness, success rate, or mean violation).
Considering the examples in Table 1 and using the rules above, DP, FR, and $ε$FR were labelled as successful on CEC2010 C01 in 2 dimensions and the rest as failures, while only BO was labelled as successful on CEC2010 C09 in 20 dimensions.
For all 142 problem instances, the proportion of instances on which each algorithm failed is given in Table 2. From the viewpoint of success/failure, the best performing algorithm was BO, which was the same as in Figure 2 based on the distribution of ranks. The worst performing algorithm was DP, followed by NCH. WP, FR, and $ε$FR had very similar failure rates.
Table 2:
Percentage of problem instances on which each algorithm failed.
| NCH | DP | WP | FR | $ε$FR | BO |
| --- | --- | --- | --- | --- | --- |
| 68% | 81% | 65% | 64% | 65% | 42% |
In conclusion, the following can be derived from the experiments in this section: although the bi-objective approach performed the best on average on all problems considered, all algorithms achieved some level of success in solving the problems. In particular, each algorithm performed both the worst and the best on at least one problem instance. We conclude that each problem instance can be matched to the most suitable algorithm(s) from the algorithms considered.
### 6.2 Benchmark Problem Characterisation
To characterise all 142 problem instances, samples were generated for each instance using multiple hill climbing walks. Hill climbing walks were chosen over random walks to ensure that the sample contained a mixture of solutions of different quality. In contrast to hill climbing walks, random walks in the BHP search space frequently result in an over-representation of solutions on the “lower levels” of the fitness landscape (since the probability of attaining contiguous identical assignments declines exponentially in random solutions). This is in line with the findings of Smith-Miles (2008) that random sampling may be less successful in describing the landscape, particularly with skewed fitness functions.
During the hill-climbing walks, at least 5000 solutions (1% of the computational budget of actually solving the problem) were created for each BHP problem instance and these samples were used as the basis for the five landscape metrics (FsR, RFB$×$, FVC, 25_IZ, and 4_IZ). For each instance, initial random solutions were generated and locally optimised until no further improvement was possible. This process was repeated until the sample size exceeded 5000, ensuring that the last walk was completed.
For the characterisation of continuous problem instances, a sample size of $200 \times D$ (1% of the computational budget of actually solving the problem) was used. From a random initial position, a basic hill climbing walk was executed. Neighbours were formed by sampling in each dimension from a Gaussian distribution with the current position as mean and a standard deviation of 5% of the range of the domain of the problem instance. If no better neighbour could be found after sampling 100 random neighbours, the walk was terminated. New walks were started from random positions, until the total number of points in all the walks equalled the sample size.
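The continuous sampling procedure can be sketched as below. Whether rejected neighbours count toward the sample and the clamping of neighbours to the domain bounds are our assumptions, as is the `sample_walks` naming:

```python
import random

def sample_walks(f, dim, lower, upper, sample_size, sigma_frac=0.05,
                 max_tries=100, rng=random):
    """Collect a characterisation sample via restarted hill-climbing walks:
    Gaussian neighbours with a standard deviation of sigma_frac (5%) of the
    domain range, terminating a walk after max_tries (100) failed neighbour
    samples. f is the fitness function to be minimised."""
    sigma = sigma_frac * (upper - lower)
    sample = []
    while len(sample) < sample_size:
        # start a new walk from a random position
        x = [rng.uniform(lower, upper) for _ in range(dim)]
        sample.append(x)
        fails = 0
        while fails < max_tries and len(sample) < sample_size:
            # Gaussian neighbour, clamped to the domain (our assumption)
            y = [min(upper, max(lower, xi + rng.gauss(0, sigma))) for xi in x]
            sample.append(y)
            if f(y) < f(x):
                x, fails = y, 0      # accept the improving neighbour
            else:
                fails += 1
    return sample
```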
Figure 3 shows the distribution of the landscape metric values for all 142 problem instances. The first boxplot shows that the FsR values ranged from 0 to close to 1, but that the values were strongly skewed towards 0. The median value for FsR was 0.008, indicating that more than half of the problem instances had feasibility ratios of 0 or close to 0. Most of the FsR values for the BHP instances (62 of 70 instances) were greater than 0, indicating that larger proportions of feasible solutions were encountered than for the CEC2010 instances (for which 47 of 72 instances had FsR values of 0). Most of the CEC2010 problems (12 of 18) have equality constraints, which explains the low feasibility rates.
Figure 3:
Boxplots showing the distribution of values of metrics for the 142 problem instances.
The RFB$×$ values in the second boxplot are similarly skewed towards zero (since an instance with a 0 feasibility ratio would also have a 0 value for RFB$×$) with a median of 0.006 and a maximum of 0.472. The highest RFB$×$ value for a BHP instance was 0.048, which indicates that the feasible solutions are generally clustered together in the search space, rather than distributed in disjoint areas as with some of the CEC2010 instances, such as the CEC2010 problem C08 in 10 dimensions which had a RFB$×$ value of 0.472.
The third boxplot in Figure 3 shows that the FVC values ranged from close to $-$1 to close to 1, with values slightly skewed towards the negative (indicating that there were more problem instances where the fitness and violation landscapes diverged).
The last two boxplots in Figure 3 show that the maximum 4_IZ value was below 0.2 and the maximum 25_IZ value was below 0.5. The medians of 0.0035 for 4_IZ and 0.14 for 25_IZ indicate that more than half of the problem instances had 0.35% or less of the sample in the 4% ideal zone and 14% or less in the 25% ideal zone, respectively.
## 7 Linking Performance to Characteristics
The purpose of the investigation was to establish whether the problem characteristics allow any conclusions as to the difficulty of a problem for the given constraint-handling technique. To this end, the problem characteristics were investigated in relation to the performances of the algorithms.
The Spearman's correlation coefficients between the metrics and the performances of the algorithm (with success coded as 1 and failure coded as $-$1) are shown in Table 3. Stronger correlations (positive or negative) are shaded darker.
Table 3:
Spearman's correlation coefficients between algorithm performances (success/failure) and landscape metrics for all problem instances.
| | FsR | RFB$×$ | FVC | 4_IZ | 25_IZ |
| --- | --- | --- | --- | --- | --- |
| NCH | 0.13 | 0.26 | 0.40 | 0.26 | 0.23 |
| DP | 0.41 | 0.40 | −0.12 | −0.33 | −0.33 |
| WP | −0.19 | −0.10 | 0.31 | 0.21 | 0.32 |
| FR | −0.09 | −0.09 | 0.08 | 0.01 | 0.02 |
| $ε$FR | −0.04 | −0.06 | 0.10 | 0.05 | 0.03 |
| BO | 0.17 | 0.10 | −0.10 | 0.13 | −0.15 |
DP has a high positive correlation with FsR and RFB$×$. This means that DP failure is associated with low feasibility ratios (and hence low RFB$×$ values), which is understandable since DP either disregards or maximally penalises infeasible solutions. NCH is positively correlated with FVC. That is, NCH, which ignores constraints and optimises only on fitness, does well if the fitness and violation are correlated. FR, $ε$FR, and BO have weak correlations with the metrics.
Due to the large number of problem instances with zero feasibility ratios and corresponding zero values for RFB$×$, the dataset was divided into two: (1) a dataset with an FsR $=$ 0 (55 problem instances: 47 continuous and 8 discrete), referred to as dataset DS_NoFeas, and (2) a dataset with the instances that scored above 0 for FsR (87 instances: 25 continuous and 62 discrete), referred to as dataset DS_Feas. The DS_Feas dataset therefore consists of problem instances that have both feasible and infeasible regions in the search space, whereas the DS_NoFeas dataset contains problem instances with no measurable feasibility and so there is a high probability that some algorithms will be unable to find any feasible solutions. The correlations of algorithm performance to the split datasets are shown in Table 4. For the DS_NoFeas dataset, the metrics FsR and RFB$×$ are left out, because these metrics are zero for all instances in this dataset.
Table 4:
Spearman's correlation coefficients between algorithm performances (success/failure) and landscape metrics for the split datasets.
(a) Correlations for dataset DS_NoFeas.

| | FVC | 4_IZ | 25_IZ |
| --- | --- | --- | --- |
| NCH | 0.43 | 0.49 | 0.40 |
| DP | −0.31 | −0.23 | −0.31 |
| WP | 0.02 | 0.00 | 0.02 |
| FR | −0.06 | −0.10 | −0.06 |
| $ε$FR | −0.01 | −0.04 | 0.01 |
| BO | 0.35 | 0.57 | 0.30 |

(b) Correlations for dataset DS_Feas.

| | FsR | RFB$×$ | FVC | 4_IZ | 25_IZ |
| --- | --- | --- | --- | --- | --- |
| NCH | −0.16 | 0.14 | 0.51 | 0.36 | 0.37 |
| DP | 0.31 | 0.30 | 0.01 | −0.28 | −0.22 |
| WP | −0.19 | −0.07 | 0.43 | 0.24 | 0.36 |
| FR | 0.14 | 0.13 | 0.09 | −0.06 | −0.08 |
| $ε$FR | 0.08 | 0.02 | 0.11 | 0.03 | −0.05 |
| BO | −0.11 | −0.30 | −0.28 | 0.15 | −0.14 |
Table 4 shows some stronger correlations between the metrics and algorithm performance than for the full dataset in Table 3. In particular, medium positive correlations between all three metrics and the NCH and BO algorithms can be seen for the DS_NoFeas dataset. BO correlates more strongly with 4_IZ when there are hardly any feasible solutions. If there are measurable numbers of feasible solutions, it correlates more, but negatively, with RFB$×$. When hardly any feasible solutions exist, bi-objective approaches cannot afford to be guided by a violation measure that distracts from the few solutions with higher fitness, hence the high correlation with 4_IZ. When there are a number of feasible solutions, BO does not depend on them residing in the same ideal zone, as long as they are not separated by infeasible areas, hence the stronger correlation with RFB$×$. While BO correlates negatively with FVC, WP has a positive correlation on the DS_Feas dataset. Combining fitness and violation into one function naturally benefits from them being correlated. As expected, the performance of a bi-objective formulation improves when the objectives truly conflict.
The performance of each of the algorithms in terms of ranks on the two split datasets is summarised as boxplots in Figure 4 and in terms of percentages of failed instances in Table 5. The relative performance of the algorithms differs between the two datasets. The best performing algorithm on DS_NoFeas is FR with a median rank of 2 (equal to $ε$FR, but with a distribution slightly more skewed toward 1) and the lowest failure rate of 55%. The best performing algorithm on DS_Feas is clearly BO with the lowest median rank of 2 and by far the lowest failure rate of 32%. Even in the case of DS_NoFeas, BO comes a close second with 56%, on a par with WP.
Figure 4:
Distribution of performance of algorithms (in terms of algorithm rank) on the split dataset.
Table 5:
Percentage of problem instances on which each algorithm failed.
| Dataset | NCH | DP | WP | FR | $ε$FR | BO |
| --- | --- | --- | --- | --- | --- | --- |
| DS_NoFeas | 82% | 96% | 56% | 55% | 60% | 56% |
| DS_Feas | 60% | 71% | 70% | 70% | 68% | 32% |
To further investigate the relationship between problem characteristics and algorithm performance, the C4.5 decision tree induction algorithm (Quinlan, 1993), implemented in WEKA (Hall et al., 2009) as J48, was applied to classify the performances of the algorithms on the two datasets.
The parameter values of the WEKA J48 classifier were set to a confidence threshold of 25% (the default), and the minimum number of instances per leaf node was adjusted per experiment to reduce the size of the tree and reduce overfitting. Accuracies are reported using 10-fold cross validation.
### 7.1 Dataset with FsR Equal to Zero
The DS_NoFeas dataset (with zero values for FsR) consisted of 55 instances with three landscape metrics (FVC, 4_IZ, and 25_IZ) and S/F class labels for each of the five constraint handling techniques. To reason about the metrics in relation to performance, the categories of “Low,” “Low-Medium,” “Medium-High,” and “High” were defined based on the ranges: (1) minimum to lower quartile, (2) lower quartile to median, (3) median to upper quartile, and (4) upper quartile to maximum of each of the metrics. These ranges are summarised in Table 6.
Table 6:
Ranges of metric values for the DS_NoFeas dataset used as the basis for fuzzy interpretation of rules derived from the decision trees.
| | Low | Low-Medium | Medium-High | High |
| --- | --- | --- | --- | --- |
| FVC | $[-0.889, -0.506)$ | $[-0.506, -0.120)$ | $[-0.120, 0.351)$ | $[0.351, 0.961]$ |
| 4_IZ | $[0, 0.004)$ | $[0.004, 0.046)$ | $[0.046, 0.077)$ | $[0.077, 0.173]$ |
| 25_IZ | $[0.03, 0.161)$ | $[0.161, 0.229)$ | $[0.229, 0.305)$ | $[0.305, 0.466]$ |
LowLow-MediumMedium-HighHigh
FVC $[-0.889,-0.506)$ $[-0.506,-0.120)$ $[-0.12,0.351)$ $[0.351,0.961]$
4_IZ $[0,0.004)$ $[0.004,0.046)$ $[0.046,0.077)$ $[0.077,0.173]$
25_IZ $[0.03,0.161)$ $[0.161,0.229)$ $[0.229,0.305)$ $[0.305,0.466]$
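The quartile-based categorisation above can be sketched as follows. This is an illustrative pure-Python helper with invented names and data, not the code used in the study:

```python
def quartiles(xs):
    """Lower quartile, median, upper quartile via linear interpolation."""
    s = sorted(xs)
    def q(p):
        i = p * (len(s) - 1)
        lo, hi = int(i), min(int(i) + 1, len(s) - 1)
        return s[lo] + (i - lo) * (s[hi] - s[lo])
    return q(0.25), q(0.5), q(0.75)

def categorise(x, xs):
    """Map a metric value to one of the four fuzzy categories."""
    q1, med, q3 = quartiles(xs)
    if x < q1:
        return "Low"
    if x < med:
        return "Low-Medium"
    if x < q3:
        return "Medium-High"
    return "High"

# Invented sample of 25_IZ-like values.
sample = [0.03, 0.16, 0.20, 0.23, 0.31, 0.47]
print(categorise(0.25, sample))  # → Medium-High
```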
The decision tree models generated by the C4.5 algorithm are given in Figure 5. In the visualisations of the trees, the splitting values are rounded off to three decimal places. For each tree, the testing accuracy based on 10-fold cross validation is reported below the tree. The total number of instances from the dataset reaching each leaf node is indicated in parentheses below the node. The number of instances that are incorrectly classified by the model, if any, are indicated after the slash in the parentheses.
Figure 5:
Classification trees created by the C4.5 algorithm based on the dataset DS_NoFeas.
The first tree in Figure 5a predicts the success of the NCH algorithm. The tree can be interpreted in the following fuzzy terms (based on the ranges in Table 6): When FsR is zero, if the fitness and violation are very highly correlated, then NCH will probably succeed; otherwise it will probably fail. This is understandable: since the NCH algorithm considers fitness alone, it will in general only succeed if the fitness and the violation landscapes guide search in the same direction.
The second algorithm, DP, fails in 96% of the cases in the DS_NoFeas dataset and so is simply predicted to fail if FsR is zero. Since the death penalty eliminates infeasible solutions immediately after their creation (in the discrete domain) and assigns the maximum penalty to all infeasible solutions in the continuous domain, the algorithm has no opportunity to detect better solutions and is outperformed by the other algorithms.
In the case of the WP and FR algorithms, no success/failure decision tree could be induced, so it can be concluded that the failure of WP and FR on instances with FsR = 0 could not be predicted from the landscape metrics. Likewise, in the case of the $ε$FR algorithm, no decision tree could be induced. However, since the performance of $ε$FR was so similar to the performance of FR (as seen in Figure 4a), an alternative classification was performed. Figure 5b shows a tree based on a subset of the DS_NoFeas dataset where the algorithm performance ranks differed between $ε$FR and FR. For each instance, if the algorithm rank (a value from 1–6) of $ε$FR was better than the rank of FR, then the final class was denoted $ε$FR and vice versa for FR. The leaf nodes therefore denote the better performing algorithm of the two. In fuzzy terms the tree in Figure 5b can be interpreted as: When FsR is zero, if FVC is not High, then FR will probably perform better than $ε$FR. This can be understood as follows: when there are basically no feasible areas, FR only searches using the level of violation, while $ε$FR first searches based on fitness (when $ε$ is large) and then later switches to search based on the level of violation (when $ε$ is small). When the chances of finding a feasible solution are low, a search based on both fitness and violation ($ε$FR) will probably only perform better when the fitness and the violation direct search in the same direction.
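The relabelling used to build the $ε$FR-vs-FR comparison dataset can be sketched as follows. The data layout and the helper name `pairwise_labels` are hypothetical; ranks run 1–6 with lower being better:

```python
def pairwise_labels(instances, alg_a="eFR", alg_b="FR"):
    """Keep only instances where the two algorithms' ranks differ, and
    label each with the better-ranked (i.e. lower-ranked) algorithm."""
    labelled = []
    for inst in instances:
        ra, rb = inst["ranks"][alg_a], inst["ranks"][alg_b]
        if ra != rb:
            labelled.append((inst["metrics"], alg_a if ra < rb else alg_b))
    return labelled

# Invented problem instances with landscape metrics and algorithm ranks.
data = [
    {"metrics": {"FVC": -0.5}, "ranks": {"eFR": 4, "FR": 2}},
    {"metrics": {"FVC": 0.7},  "ranks": {"eFR": 1, "FR": 3}},
    {"metrics": {"FVC": 0.1},  "ranks": {"eFR": 2, "FR": 2}},  # tie: dropped
]
print(pairwise_labels(data))  # → [({'FVC': -0.5}, 'FR'), ({'FVC': 0.7}, 'eFR')]
```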
The success/failure tree of the BO algorithm is shown in Figure 5c. In fuzzy terms, this can be interpreted as: When FsR is zero, if the proportion of points in the 4% ideal zone is Medium-High to High, then BO will probably succeed, otherwise it will fail. A value of over 6.8% for 4_IZ means that proportionately more of the points in the sample had both the best fitness and the lowest constraint violation. In general, BO succeeded on these problem instances, but not on the others. Since there are basically no feasible areas, the other algorithms that only searched based on the level of violation (such as FR) performed better by not trying to also optimise based on fitness.
### 7.2 Dataset with Non-Zero FsR
The DS_Feas dataset (with FsR $≠$ 0) consisted of 87 problem instances. Based on the failure percentages (Table 5), BO with 32% failures was a clear winner for the DS_Feas dataset. To investigate this behaviour further, a failure/success decision tree was induced for the BO algorithm and the remaining algorithms were analysed in terms of relative performance to the BO algorithm. The ranges of landscape metric values on this dataset are given in Table 7 and the classification trees generated by the C4.5 algorithm are illustrated in Figure 6.
Figure 6:
Classification trees created by the C4.5 algorithm based on the dataset DS_Feas.
Table 7:
Ranges of metric values for the DS_Feas dataset used as the basis for fuzzy interpretation of rules derived from the decision trees.
| Metric | Low | Low-Medium | Medium-High | High |
| --- | --- | --- | --- | --- |
| FsR | $[0,0.041)$ | $[0.041,0.267)$ | $[0.267,0.488)$ | $[0.488,0.961]$ |
| RFB$×$ | $[0,0.008)$ | $[0.008,0.014)$ | $[0.014,0.026)$ | $[0.026,0.472]$ |
| FVC | $[-0.938,-0.686)$ | $[-0.686,-0.383)$ | $[-0.383,-0.006)$ | $[-0.006,0.889]$ |
| 4_IZ | $[0,0]$ | $[0,0]$ | $[0,0.02)$ | $[0.02,0.146]$ |
| 25_IZ | $[0,0.006)$ | $[0.006,0.055)$ | $[0.055,0.231)$ | $[0.231,0.437]$ |
Figure 6a gives the success/failure tree induced by the C4.5 algorithm for BO on instances with some proportion of feasibility in the sample. In fuzzy terms the tree can be interpreted as: If the feasible areas are highly disjoint, then BO will probably fail; otherwise it will probably succeed. Recall that the RFB$×$ metric quantifies the proportion of boundary crossings between feasible and infeasible areas during the walks. When RFB$×$ is high, this means that the violation landscape is characterised by multiple small feasible areas (flat sections with zero violation) between small non-feasible areas (with non-zero levels of constraint violation). The BO algorithm optimises the fitness and the level of constraint violation as separate objectives. To this algorithm, the violation landscape is a highly rugged landscape with numerous global optima (all the flat sections) spread throughout the search space. It can therefore be understood that on such problems, BO does not perform well, because the violation landscape does not help to guide search to the best solutions.
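The idea behind the RFB$×$ metric can be sketched by counting feasible/infeasible boundary crossings along a walk from its sequence of violation values. This is an illustrative simplification with invented data, not the paper's exact definition:

```python
def rfb_cross(violations):
    """Proportion of consecutive steps in a walk that cross the
    feasible (violation == 0) / infeasible (violation > 0) boundary."""
    crossings = sum(
        1 for a, b in zip(violations, violations[1:])
        if (a == 0) != (b == 0)
    )
    return crossings / (len(violations) - 1)

walk = [0.0, 0.3, 0.0, 0.0, 0.8, 0.1, 0.0]  # violation at each step of a walk
print(rfb_cross(walk))  # 4 of the 6 steps cross the boundary
```

A high value indicates many small feasible pockets interleaved with infeasible regions, i.e. highly disjoint feasible areas.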
The remaining trees in Figure 6 show when the other algorithms performed better than BO. Figures 6b and 6c show that the NCH and DP algorithms performed better than the BO algorithm in general when the feasible areas were disjoint. This corresponds with the failure tree for BO in Figure 6a and shows that even ignoring the constraints completely or killing off infeasible solutions were superior strategies to a bi-objective approach for these problem instances.
Figure 6e shows that the WP approach in general outperformed BO when the FVC was fairly high to high—in this case, not negatively correlated. A weighted penalty approach adds the level of constraint violation to the fitness value as a single-objective formulation. It can be understood that such an approach does not do well when the fitness and violation are negatively correlated, since the fitness and violation would essentially cancel each other out in the penalty function.
Similar to NCH and DP, the FR approach performs better than BO when feasible areas are very disjoint (Figure 6d). An FR approach always prefers a feasible solution to an infeasible solution and only considers the level of violation when comparing two infeasible solutions. The algorithm is therefore unaffected by the many flat sections of the violation landscape in the way the BO algorithm is affected. Figure 6d shows that FR also generally outperforms BO when FVC is very low (highly negatively correlated). Since the FR algorithm only considers either fitness or violation at each comparison, it is not as affected as BO by these two objectives guiding search in opposite directions. The tree generated to compare $ε$FR and BO (Figure 6f) is the same structure as for FR vs. BO and so can be understood in the same way.
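The comparison rules behind feasibility ranking can be sketched as a single comparator; $ε$FR simply relaxes feasibility to violation $\leq ε$. This is an illustrative simplification for minimisation, and `fr_better` is a hypothetical name:

```python
def fr_better(a, b, eps=0.0):
    """True if solution a beats solution b under (ε-)feasibility ranking.
    Each solution is a (fitness, violation) pair; lower fitness is better."""
    fa, va = a
    fb, vb = b
    feas_a, feas_b = va <= eps, vb <= eps
    if feas_a and not feas_b:
        return True
    if feas_b and not feas_a:
        return False
    if feas_a and feas_b:       # both (ε-)feasible: compare fitness
        return fa < fb
    return va < vb              # both infeasible: compare violation only

print(fr_better((5.0, 0.0), (1.0, 0.2)))           # True: feasible beats infeasible
print(fr_better((5.0, 0.1), (1.0, 0.2), eps=0.5))  # False: both ε-feasible, b is fitter
```

With `eps=0.0` this is plain feasibility ranking; with a large `eps` it initially behaves like a fitness-only search, matching the behaviour discussed above.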
## 8 Conclusion
This article compared six well-known constraint-handling techniques used with evolutionary algorithms on a range of constrained problems in both continuous and combinatorial spaces. For the combinatorial spaces, a new benchmark problem generator, the Broken Hill Problem, was introduced for instantiating assignment problems with different levels of complexity and constraint violation. For the continuous domain, the CEC 2010 suite of scalable constrained benchmark problems was used.
The 142 problem instances were characterised based on hill-climbing samples using five landscape metrics, combining information from the fitness and violation landscapes. The link between the problem characteristics and algorithm performance was then investigated.
A bi-objective approach to constraint handling (treating constraint violation as a second objective) performed best on average across all the problem instances investigated. However, on the problems characterised by low levels of feasibility (as measured by the FsR metric), a feasibility ranking approach performed better on average.
Decision tree induction was used to further investigate the link between problem characteristics and algorithm performance. It was found that although a bi-objective approach performed well on average, it had a high probability of failing when the feasible areas were very disjoint (as measured by the RFB$×$ metric). In these instances, other approaches such as no constraint handling, death penalty, and feasibility ranking outperformed the bi-objective approach. In addition, a weighted penalty approach in general outperformed a bi-objective approach when problems had some measurable feasibility and the fitness and violation values were positively correlated (as measured by the FVC metric).
In the case of problems with no measurable feasibility, the no constraint handling and death penalty approaches performed very poorly, which was to be expected. When the fitness and violation were highly correlated, an $ε$-feasibility ranking approach outperformed a feasibility ranking approach, whereas a bi-objective approach was successful on problems characterised by medium to high values for the 4_IZ metric.
These results can be understood in terms of the behaviour of the constraint-handling techniques and so provide insight into the behaviour of the different constraint-handling techniques. The results could be used as guiding principles when deciding on the appropriate use of constraint-handling techniques for evolutionary algorithms. The next step in this research will be to implement these findings into an adaptive constraint-handling technique approach. This will involve capturing landscape information online, during the optimisation process, so that there is no wasted computation budget expended on a priori characterisation.
## References
Bischl, B., Mersmann, O., Trautmann, H., and Preuß, M. (2012). Algorithm selection based on exploratory landscape analysis and cost-sensitive learning. In Proceedings of the Fourteenth International Genetic and Evolutionary Computation Conference, pp. 313–320.

Coello Coello, C. A. (1999). A survey of constraint handling techniques used with evolutionary algorithms. Technical Report. Laboratorio Nacional de Informática Avanzada.

Deb, K. (2000). An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, 186(2–4):311–338.

Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182–197.

Friedrich, T. (Ed.) (2016). GECCO '16: Proceedings of the Genetic and Evolutionary Computation Conference 2016. New York: ACM.

Greenwood, G. W., and Hu, X. (1998a). Are landscapes for constrained optimization problems statistically isotropic? Physica Scripta, 57:321–323.

Greenwood, G. W., and Hu, X. (1998b). On the use of random walks to estimate correlation in fitness landscapes. Computational Statistics and Data Analysis, 28:131–137.

Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., and Witten, I. H. (2009). The WEKA data mining software: An update. SIGKDD Explorations Newsletter, 11(1):10–18.

Koziel, S., and Michalewicz, Z. (1999). Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolutionary Computation, 7(1):19–44.

Liang, J., Runarsson, T., Mezura-Montes, E., Clerc, M., Suganthan, P., Coello Coello, C., and Deb, K. (2006). Problem definitions and evaluation criteria for the CEC 2006 competition on constrained real-parameter optimization. Technical Report. Nanyang Technological University, Singapore.

Malan, K. M., and Engelbrecht, A. P. (2013). A survey of techniques for characterising fitness landscapes and some possible ways forward. Information Sciences, 241:148–163.

Malan, K. M., and Engelbrecht, A. P. (2014). Fitness landscape analysis for metaheuristic performance prediction. In H. Richter and A. P. Engelbrecht (Eds.), Recent advances in the theory and application of fitness landscapes, pp. 103–132. Emergence, Complexity and Computation, Vol. 6. Berlin: Springer.

Malan, K. M., Oberholzer, J. F., and Engelbrecht, A. P. (2015). Characterising constrained continuous optimisation problems. In 2015 IEEE Congress on Evolutionary Computation, pp. 1351–1358.

Mallipeddi, R., and Suganthan, P. N. (2010). Problem definitions and evaluation criteria for the CEC 2010 competition on constrained real-parameter optimization. Technical Report. Nanyang Technological University, Singapore.

Manderick, B., de Weger, M. K., and Spiessens, P. (1991). The genetic algorithm and the structure of the fitness landscape. In Proceedings of the Fourth International Conference on Genetic Algorithms, pp. 143–150.

Mezura-Montes, E., and Coello Coello, C. A. (2004). What makes a constrained problem difficult to solve by an evolutionary algorithm? Technical Report. Evolutionary Computation Group (EVOCINV), Electrical Engineering Department, Computer Science Department, Av. Instituto Politécnico Nacional, No. 2508.

Michalewicz, Z. (1995). A survey of constraint handling techniques in evolutionary computation methods. Evolutionary Programming, 4:135–155.

Michalewicz, Z., and Schoenauer, M. (1996). Evolutionary algorithms for constrained parameter optimization problems. Evolutionary Computation, 4(1):1–32.

Moser, I., Gheorghita, M., and Aleti, A. (2017). Identifying features of fitness landscapes and relating them to problem difficulty. Evolutionary Computation, 25(3):407–437.

Muñoz, M. A., Kirley, M., and Halgamuge, S. K. (2013). The Algorithm Selection Problem on the continuous optimization domain. In C. Moewes and A. Nürnberger (Eds.), Computational intelligence in intelligent data analysis, pp. 75–89. Studies in Computational Intelligence, Vol. 445. Berlin, Heidelberg: Springer.

Ochoa, G., Verel, S., Daolio, F., and Tomassini, M. (2014). Local optima networks: A new model of combinatorial fitness landscapes. In H. Richter and A. P. Engelbrecht (Eds.), Recent advances in the theory and application of fitness landscapes, pp. 233–262. Emergence, Complexity and Computation, Vol. 6. Berlin: Springer.

Poursoltan, S., and Neumann, F. (2014). Ruggedness quantifying for constrained continuous fitness landscapes. In R. Datta and K. Deb (Eds.), Evolutionary constrained optimization, pp. 29–50. Infosys Science Foundation Series. Berlin: Springer.

Quinlan, J. R. (1993). C4.5: Programs for machine learning. San Francisco: Morgan Kaufmann.

Rice, J. R. (1976). The Algorithm Selection Problem. Advances in Computers, 15:65–118.

Richardson, J. T., Palmer, M. R., Liepins, G. E., and Hilliard, M. (1989). Some guidelines for genetic algorithms with penalty functions. In Proceedings of the Third International Conference on Genetic Algorithms, pp. 191–197.

Runarsson, T. P., and Yao, X. (2000). Stochastic ranking for constrained evolutionary optimization. IEEE Transactions on Evolutionary Computation, 4(3):284–294.

Smith-Miles, K. (2008). Towards insightful algorithm selection for optimisation using meta-learning concepts. In Proceedings of the IEEE Joint Conference on Neural Networks, pp. 4118–4124.

Stadler, P. F. (2002). Fitness landscapes. In Biological Evolution and Statistical Physics, pp. 183–204. Lecture Notes in Physics, Vol. 585.

Storn, R., and Price, K. (1995). Differential evolution—A simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012. International Computer Science Institute.

Storn, R., and Price, K. (1996). Minimizing the real functions of the ICEC'96 contest by differential evolution. In Proceedings of the International Conference on Evolutionary Computation, pp. 842–844.

Takahama, T., and Sakai, S. (2005). Constrained optimization by ε constrained particle swarm optimizer with ε-level control, pp. 1019–1029. Berlin, Heidelberg: Springer.

Takahama, T., and Sakai, S. (2006). Constrained optimization by the ε constrained differential evolution with gradient-based mutation and feasible elites. In 2006 IEEE International Conference on Evolutionary Computation, pp. 1–8.

Weinberger, E. (1990). Correlated and uncorrelated fitness landscapes and how to tell the difference. Biological Cybernetics, 63(5):325–336.
# [SOLVED] A complex number
Printable View
• September 14th 2008, 01:02 PM
arbolis
[SOLVED] A complex number
Find all the complex numbers such that $z^2=1-i$.
My attempt: I wrote $z=a+ib$ and with a bit of algebra I reached $ab=-\frac{1}{2}$, which means that if $ab=-\frac{1}{2}$ then $z^2=1-i$ is satisfied. So the solution would be all $z=a+ib$ with $a,b \in \mathbb{R}$ such that $ab=-\frac{1}{2}$. I'm sure there is a nicer way to write the answer. Can you check my answer and write it in a better way if it is right? Thanks in advance.
• September 14th 2008, 01:11 PM
thelostchild
You are correct that
$2ab=-1$
but remember to equate the real parts as well as the imaginary parts
so
$(a+ib)^2=1-i$
$a^2+2abi-b^2=1-i$
$(a^2-b^2)+2abi=1-i$
equating real parts as well gives us $a^2-b^2=1$
now solve the two equations simultaneously to find the answer :)
as an aside you can always check your answer by plugging a few numbers in, if we try a=1 b=-0.5 which satisfy your condition we have
$(1-\frac{1}{2}i)^2=1-\frac{1}{4}-1i=\frac{3}{4}-1i\neq 1-i$
therefore it cannot be correct on its own :)
• September 14th 2008, 01:32 PM
arbolis
I didn't forget to do that; the problem is that I didn't reach anything beautiful. Like $b=+$ or $-\sqrt{a^2-1}$, or $b^4-b^2=\frac{1}{4}$, or $4b^2(b^2+1)=1$ and so on.
I'd like to know the answer and then try to reach it.
I've a similar problem to solve after this so I'll try it alone.
• September 14th 2008, 01:39 PM
Plato
Why not just find the two square roots of $1-i$?
• September 14th 2008, 01:39 PM
Jhevon
Quote:
Originally Posted by arbolis
I didn't forget to do that, the problem is that I didn't reached anything beautiful. Like $b=+$ or $-\sqrt{a^2-1}$, or $b^4-b^2=\frac{1}{4}$, or $4b^2(b^2+1)=1$ and so on.
I'd like to know the answer and then try to reach it.
I've a similar problem to solve after this so I'll try it alone.
you are finding the two square roots of 1 - i
the answers are $a = \pm \sqrt{\frac {1 + \sqrt{2}}2}$ and $b = - \frac 1{2a}$ ........and of course, you plug in what a is (can you get to that? :D)
you could also try converting to polar form, it might be the same, or more work, but not much different in terms of difficulty
• September 14th 2008, 02:04 PM
arbolis
I've reached $b=\pm \sqrt{\frac{1}{\pm 2\sqrt 2 +2}}$ so you can imagine what $a$ is equal to. But $b$ is a real number, so I can simplify it to $\pm \sqrt{\frac{1}{2\sqrt 2 +2}}$. (Whew) I feel I made an error somewhere.
I've started with $a^2-b^2=-2ab$.
• September 14th 2008, 02:17 PM
Plato
In polar/exponential form $1 - i = \sqrt 2 \left( {\cos \left( {\frac{{ - \pi }}{4}} \right) + i\sin \left( {\frac{{ - \pi }}{4}} \right)} \right) = \sqrt 2 e^{i\left( {\frac{{ - \pi }}{4}} \right)}$
The roots are $z_1 = \sqrt[4]{2}\left( {\cos \left( {\frac{{ - \pi }}{8}} \right) + i\sin \left( {\frac{{ - \pi }}{8}} \right)} \right)\;\& \;z_2 = \sqrt[4]{2}\left( {\cos \left( {\frac{{7\pi }}{8}} \right) + i\sin \left( {\frac{{7\pi }}{8}} \right)} \right)$
• September 14th 2008, 02:21 PM
arbolis
Quote:
Originally Posted by Plato
In polar/exponential form $1 - i = \sqrt 2 \left( {\cos \left( {\frac{{ - \pi }}{4}} \right) + i\sin \left( {\frac{{ - \pi }}{4}} \right)} \right) = \sqrt 2 e^{i\left( {\frac{{ - \pi }}{4}} \right)}$
The roots are $z_1 = \sqrt[4]{2}\left( {\cos \left( {\frac{{ - \pi }}{8}} \right) + i\sin \left( {\frac{{ - \pi }}{8}} \right)} \right)\;\& \;z_2 = \sqrt[4]{2}\left( {\cos \left( {\frac{{7\pi }}{8}} \right) + i\sin \left( {\frac{{7\pi }}{8}} \right)} \right)$
Ah that's nice, I think I will do that in my exam even if I should simplify $-\frac{\pi}{8}$ and $\frac{7\pi}{8}$. I think my teacher will give me as much credit as if I solved it without using polar form.
• September 14th 2008, 02:28 PM
Jhevon
Quote:
Originally Posted by arbolis
I've reached $b=\pm \sqrt{\frac{1}{\pm 2\sqrt 2 +2}}$ so you can imagine what $a$ is equal to. But $b$ is a real number, so I can simplify it to $\pm \sqrt{\frac{1}{2\sqrt 2 +2}}$. (Whew) I feel I made an error somewhere.
I've started with $a^2-b^2=-2ab$.
without doing polar form, you should start with
$a^2 - b^2 = 1$ ...............(1)
$2ab = -1$ ..................(2)
from (2), $b = - \frac 1{2a}$. plug that into (1) and solve for $a$. note that you must take the real answer. then plug that into (2) to solve for $b$
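To tie the thread together, carrying that substitution through gives (the same roots Jhevon and Plato found): substituting $b = -\frac 1{2a}$ into (1),

$$a^2 - \frac{1}{4a^2} = 1 \;\implies\; 4a^4 - 4a^2 - 1 = 0 \;\implies\; a^2 = \frac{4 \pm \sqrt{16+16}}{8} = \frac{1 \pm \sqrt 2}{2}$$

Since $a$ is real, only $a^2 = \frac{1+\sqrt 2}{2}$ works, so $a = \pm\sqrt{\frac{1+\sqrt 2}{2}}$ and $b = -\frac 1{2a}$, which matches the two roots obtained from the polar form.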
# ML Topics: Regression
## ML Topics: Regression
### What is regression?
Regression is a supervised learning approach that models a target value based on independent predictors. In other words, regression is an analysis method that estimates the relationships between a dependent variable (the outcome) and one or more independent variables (the predictors). It is mostly used for forecasting and for finding cause-and-effect relationships between variables. Regression techniques predict continuous responses, for example changes in temperature or fluctuations in electricity demand. If the response is a real number, such as a temperature, regression techniques are a natural choice.
A well-known example of regression is the prediction of housing prices. When several features are known, such as the floor plan, unit size, distance to specific landmarks, and amenities, an algorithm can predict a price for your house and the amount you can sell it for.
### Common regression techniques
#### Linear regression
Linear regression is the most basic form of regression. It is an approach for predicting a response using a single feature. Consider a dataset where we have a value of the response y for every feature value x (left half of Figure 1); the regression task is to find the line that best fits the scatter plot so that we can predict the response for any new feature value. A linear regression algorithm finds such a line (the blue line in the right half of Figure 1), called the regression line.
##### Figure 1: Linear Regression
In short, linear regression is a statistical modeling technique used to describe a continuous response variable as a linear function of one or more predictor variables. Because linear regression models are simple to interpret and easy to train, they are often the first model to be fitted to a new dataset.
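For a single feature, the least-squares line has a closed-form solution. A minimal illustration with invented data (not a library API):

```python
def fit_line(xs, ys):
    """Least-squares slope m and intercept c for y ≈ m*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

# Noise-free toy data lying on the line y = 2x + 1.
m, c = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(m, c)  # → 2.0 1.0
```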
#### Nonlinear regression
In contrast to linear regression, nonlinear regression is a statistical modeling technique that helps describe nonlinear relationships in experimental data. Nonlinear regression models are generally assumed to be parametric, where the model is described as a nonlinear equation. When the data shows strong nonlinear trends and cannot be easily transformed into a linear space, nonlinear regression is a favorable choice.
“Nonlinear” refers to a fit function that is a nonlinear function of the parameters. For example, if the fitting parameters are $$C_{0}$$, $$C_{1}$$, and $$C_{2}$$: the equation $$y=C_{0}+C_{1}x+C_{2}x^{2}$$ is a linear function of the fitting parameters, whereas $$y=\frac{C_{0}x^{C_{1}}}{x+C_{2}}$$ is a nonlinear function of the fitting parameters.
#### Gaussian Process Regression (GPR) Model
When interpolating spatial data, such as hydrogeological data on the distribution of groundwater, Gaussian process regression (GPR) models are a popular choice. GPR models, also referred to as Kriging, are nonparametric models used for predicting the value of a continuous response variable. They are interpolation methods in which the interpolated values are modeled by a Gaussian process governed by prior covariances.
##### Figure 2: GPR Model
Figure 2 is a simple example of the GPR model generated by scikit-learn (Gaussian Processes regression: basic introductory example) under Python environment. GPR models are widely used in the field of spatial analysis for interpolation in the presence of uncertainty. Also, it is common to use them as a surrogate model to facilitate optimization of complex designs such as automotive engines.
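The core of a GPR prediction is the posterior mean k*ᵀ(K + σ²I)⁻¹y with some kernel, here an RBF kernel. A pure-Python toy sketch with invented data follows; scikit-learn's GaussianProcessRegressor does this far more robustly:

```python
import math

def rbf(a, b, ls=1.0):
    """Squared-exponential (RBF) kernel for scalar inputs."""
    return math.exp(-((a - b) ** 2) / (2 * ls ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gpr_mean(xtrain, ytrain, xquery, noise=1e-10):
    """Posterior mean of a zero-mean GP with RBF kernel at xquery."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xtrain)] for i, a in enumerate(xtrain)]
    alpha = solve(K, list(ytrain))
    return sum(rbf(xquery, x) * a for x, a in zip(xtrain, alpha))

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
print(gpr_mean(xs, ys, 1.0))  # ≈ 1.0: with tiny noise the GP interpolates the data
```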
#### SVM Regression (SVR)
When your data has a large number of predictor variables, or is otherwise high-dimensional, SVM regression is a common solution. SVMs can be used not only for classification but also for regression. SVM regression algorithms work like SVM classification algorithms, with several modifications that make them able to predict a continuous response. Instead of finding a hyperplane that separates the data, SVM regression algorithms find a model that deviates from the measured data by no more than a small amount, with parameter values that are as small as possible. Figure 3 is an example of 1D SVM regression using linear, polynomial, and RBF kernels.
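That "no more than a small amount" tolerance is SVR's ε-insensitive loss: errors inside the ε-tube cost nothing, and errors outside are penalised linearly. A tiny illustrative sketch:

```python
def eps_insensitive(y_true, y_pred, eps=0.1):
    """SVR's ε-insensitive loss: zero inside the tube, linear outside."""
    return max(0.0, abs(y_true - y_pred) - eps)

print(eps_insensitive(1.0, 1.05))  # 0.0   (inside the ε-tube)
print(eps_insensitive(1.0, 1.30))  # ~0.2  (penalised linearly outside it)
```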
#### Regression Tree
Decision trees can also be used to solve regression problems when the tree has a continuous target variable. The main difference between classification tree analysis and regression tree analysis is the nature of the predicted outcome. The predicted outcome of a classification tree is the (discrete) class to which the data belongs, while for a regression tree it can be a real number (e.g., the price of a house, or a patient's length of stay in a hospital). A regression tree can therefore be considered a variant of the decision tree, designed to approximate real-valued functions rather than to classify. Figure 4 is an example of 1D regression with a decision tree, used here to fit a sine curve with additional noisy observations. As a result, it learns local linear regressions approximating the sine curve.
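The node-splitting criterion of a regression tree can be sketched as a one-split "stump" choosing the threshold that minimises the squared error of the two leaf means (illustrative pure Python with invented data):

```python
def sse(ys):
    """Sum of squared errors of ys around their mean."""
    if not ys:
        return 0.0
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys)

def fit_stump(xs, ys):
    """Best single threshold minimising total squared error of the two
    leaf means -- the criterion a regression tree applies at each node."""
    best = (None, float("inf"))
    for t in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        err = sse(left) + sse(right)
        if err < best[1]:
            best = (t, err)
    return best

xs = [1, 2, 3, 10, 11, 12]
ys = [5.0, 5.2, 4.8, 9.0, 9.1, 8.9]
print(fit_stump(xs, ys))  # splits at x <= 3, separating the two clusters
```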
### In a nutshell …
Regression, or regression analysis, is a set of statistical methods for estimating the relationships between a dependent variable and one or more independent variables. It is widely used for prediction and forecasting, and regression is the predominant empirical tool in the economics and finance industries. For example, it is often used to predict consumption spending, fixed investment, inventory investment, revenues and expenses, and to analyze the systematic risk of an investment. Regression is also widely applied to estimate trend lines representing the variation of quantitative data over time (like GDP, oil prices, etc.).
Following on from our previous blog articles: classification, clustering, and regression are the three most popular and well-known machine learning categories that every ML enthusiast should know, and they are also a good place to start for people who want to learn ML. We hope you like our articles, and we will cover more ML topics in the future!
#### Related articles:
Editor: Chieh-Feng Cheng
Ph.D. in ECE, Georgia Tech
Technical Writer, inwinSTACK
http://chalkdustmagazine.com/regulars/answers/puzzle-solutions-issue-04/
# Puzzle Solutions, Issue 04
Warning: Contains SPOILERS!
These solutions relate to puzzles found in Issue 04 of the magazine.
### Odd sums
The sum of the first $n$ odd numbers is $n^2$ (this can be proved by induction). This means that:
$$\frac{\text{sum of the first }n\text{ odd numbers}}{\text{sum of the next }n\text{ odd numbers}}=\frac{n^2}{(2n)^2-n^2}\\ =\frac{n^2}{3n^2}=\frac{1}{3}$$
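A quick numerical check of both the identity and the resulting ratio, sketched in Python:

```python
def odd_sum(start, count):
    """Sum of `count` consecutive odd numbers, skipping the first `start` of them."""
    return sum(2 * k + 1 for k in range(start, start + count))

for n in range(1, 100):
    assert odd_sum(0, n) == n ** 2              # sum of the first n odd numbers is n^2
    assert odd_sum(n, n) == 3 * odd_sum(0, n)   # the next n odd numbers sum to three times as much
```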
### Odd squares
1 and 9 are the only two square numbers whose digits are all odd.
### Two lines
Matthew Scroggs is a PhD student at UCL working on finite and boundary element methods. His website, mscroggs.co.uk, is full of maths and now features a video of him completing a level of Pac-Man optimally.
https://bird.bcamath.org/browse?authority=9b31c96f-85f8-49ce-8f62-f0f4a239b522&type=author
• #### Quantitative weighted estimates for Rubio de Francia's Littlewood--Paley square function
(2019-12)
We consider the Rubio de Francia's Littlewood--Paley square function associated with an arbitrary family of intervals in $\mathbb{R}$ with finite overlapping. Quantitative weighted estimates are obtained for this operator. ...
https://stats.libretexts.org/Textbook_Maps/Map%3A_Introductory_Statistics_(OpenStax)/01%3A_Sampling_and_Data/1.2%3A_Data%2C_Sampling%2C_and_Variation_in_Data_and_Sampling
# 1.2: Data, Sampling, and Variation in Data and Sampling
Data may come from a population or from a sample. Small letters like $$x$$ or $$y$$ generally are used to represent data values. Most data can be put into the following categories:
• Qualitative
• Quantitative
Qualitative data are the result of categorizing or describing attributes of a population. Hair color, blood type, ethnic group, the car a person drives, and the street a person lives on are examples of qualitative data. Qualitative data are generally described by words or letters. For instance, hair color might be black, dark brown, light brown, blonde, gray, or red. Blood type might be AB+, O-, or B+. Researchers often prefer to use quantitative data over qualitative data because it lends itself more easily to mathematical analysis. For example, it does not make sense to find an average hair color or blood type.
Quantitative data are always numbers. Quantitative data are the result of counting or measuring attributes of a population. Amount of money, pulse rate, weight, number of people living in your town, and number of students who take statistics are examples of quantitative data. Quantitative data may be either discrete or continuous.
All data that are the result of counting are called quantitative discrete data. These data take on only certain numerical values. If you count the number of phone calls you receive for each day of the week, you might get values such as zero, one, two, or three.
All data that are the result of measuring are quantitative continuous data assuming that we can measure accurately. Measuring angles in radians might result in such numbers as $$\frac{\pi}{6}$$, $$\frac{\pi}{3}$$, $$\frac{\pi}{2}$$, $$\pi$$, $$\frac{3\pi}{4}$$, and so on. If you and your friends carry backpacks with books in them to school, the numbers of books in the backpacks are discrete data and the weights of the backpacks are continuous data.
Sample of Quantitative Discrete Data
The data are the number of books students carry in their backpacks. You sample five students. Two students carry three books, one student carries four books, one student carries two books, and one student carries one book. The numbers of books (three, four, two, and one) are the quantitative discrete data.
Exercise $$\PageIndex{1}$$
The data are the number of machines in a gym. You sample five gyms. One gym has 12 machines, one gym has 15 machines, one gym has ten machines, one gym has 22 machines, and the other gym has 20 machines. What type of data is this?
quantitative discrete data
Sample of Quantitative Continuous Data
The data are the weights of backpacks with books in them. You sample the same five students. The weights (in pounds) of their backpacks are 6.2, 7, 6.8, 9.1, 4.3. Notice that backpacks carrying three books can have different weights. Weights are quantitative continuous data because weights are measured.
Exercise $$\PageIndex{2}$$
The data are the areas of lawns in square feet. You sample five houses. The areas of the lawns are 144 sq. feet, 160 sq. feet, 190 sq. feet, 180 sq. feet, and 210 sq. feet. What type of data is this?
quantitative continuous data
Exercise $$\PageIndex{3}$$
You go to the supermarket and purchase three cans of soup (19 ounces tomato bisque, 14.1 ounces lentil, and 19 ounces Italian wedding), two packages of nuts (walnuts and peanuts), four different kinds of vegetable (broccoli, cauliflower, spinach, and carrots), and two desserts (16 ounces Cherry Garcia ice cream and two pounds (32 ounces) of chocolate chip cookies).
Name data sets that are quantitative discrete, quantitative continuous, and qualitative.
Solution
One Possible Solution:
• The three cans of soup, two packages of nuts, four kinds of vegetables and two desserts are quantitative discrete data because you count them.
• The weights of the soups (19 ounces, 14.1 ounces, 19 ounces) are quantitative continuous data because you measure weights as precisely as possible.
• Types of soups, nuts, vegetables and desserts are qualitative data because they are categorical.
Try to identify additional data sets in this example.
Sample of qualitative data
The data are the colors of backpacks. Again, you sample the same five students. One student has a red backpack, two students have black backpacks, one student has a green backpack, and one student has a gray backpack. The colors red, black, black, green, and gray are qualitative data.
Exercise $$\PageIndex{4}$$
The data are the colors of houses. You sample five houses. The colors of the houses are white, yellow, white, red, and white. What type of data is this?
qualitative data
COLLABORATIVE EXERCISE $$\PageIndex{1}$$
Work collaboratively to determine the correct data type (quantitative or qualitative). Indicate whether quantitative data are continuous or discrete. Hint: Data that are discrete often start with the words "the number of."
1. the number of pairs of shoes you own
2. the type of car you drive
3. where you go on vacation
4. the distance it is from your home to the nearest grocery store
5. the number of classes you take per school year.
6. the tuition for your classes
7. the type of calculator you use
8. movie ratings
9. political party preferences
10. weights of sumo wrestlers
11. amount of money (in dollars) won playing poker
12. number of correct answers on a quiz
13. peoples’ attitudes toward the government
14. IQ scores (This may cause some discussion.)
Items 1, 5, 6, 11, and 12 are quantitative discrete; items 4, 10, and 14 are quantitative continuous; items 2, 3, 7, 8, 9, and 13 are qualitative.
Exercise $$\PageIndex{5}$$
Determine the correct data type (quantitative or qualitative) for the number of cars in a parking lot. Indicate whether quantitative data are continuous or discrete.
quantitative discrete
Exercise $$\PageIndex{6}$$
A statistics professor collects information about the classification of her students as freshmen, sophomores, juniors, or seniors. The data she collects are summarized in the pie chart (Figure $$\PageIndex{1}$$). What type of data does this graph show?
Figure $$\PageIndex{1}$$
This pie chart shows the students in each year, which is qualitative data.
Exercise $$\PageIndex{7}$$
The registrar at State University keeps records of the number of credit hours students complete each semester. The data he collects are summarized in the histogram. The class boundaries are 10 to less than 13, 13 to less than 16, 16 to less than 19, 19 to less than 22, and 22 to less than 25.
Figure $$\PageIndex{2}$$
What type of data does this graph show?
A histogram is used to display quantitative data: the numbers of credit hours completed. Because students can complete only a whole number of hours (no fractions of hours allowed), this data is quantitative discrete.
### Qualitative Data Discussion
Below are tables comparing the number of part-time and full-time students at De Anza College and Foothill College enrolled for the spring 2010 quarter. The tables display counts (frequencies) and percentages or proportions (relative frequencies). The percent columns make comparing the same categories in the colleges easier. Displaying percentages along with the numbers is often helpful, but it is particularly important when comparing sets of data that do not have the same totals, such as the total enrollments for both colleges in this example. Notice how much larger the percentage for part-time students at Foothill College is compared to De Anza College.
Table $$\PageIndex{1}$$: Fall Term 2007 (Census day)

| | De Anza College: Number | De Anza College: Percent | Foothill College: Number | Foothill College: Percent |
|---|---|---|---|---|
| Full-time | 9,200 | 40.9% | 4,059 | 28.6% |
| Part-time | 13,296 | 59.1% | 10,124 | 71.4% |
| Total | 22,496 | 100% | 14,183 | 100% |
Tables are a good way of organizing and displaying data. But graphs can be even more helpful in understanding the data. There are no strict rules concerning which graphs to use. Two graphs that are used to display qualitative data are pie charts and bar graphs.
• In a pie chart, categories of data are represented by wedges in a circle and are proportional in size to the percent of individuals in each category.
• In a bar graph, the length of the bar for each category is proportional to the number or percent of individuals in each category. Bars may be vertical or horizontal.
• A Pareto chart consists of bars that are sorted into order by category size (largest to smallest).
Look at Figures $$\PageIndex{3}$$ and $$\PageIndex{4}$$ and determine which graph (pie or bar) you think displays the comparisons better.
Figure $$\PageIndex{3}$$: Pie Charts
It is a good idea to look at a variety of graphs to see which is the most helpful in displaying the data. We might make different choices of what we think is the “best” graph depending on the data and the context. Our choice also depends on what we are using the data for.
Figure $$\PageIndex{4}$$: Bar chart
### Percentages That Add to More (or Less) Than 100%
Sometimes percentages add up to be more than 100% (or less than 100%). In the graph, the percentages add to more than 100% because students can be in more than one category. A bar graph is appropriate to compare the relative size of the categories. A pie chart cannot be used. It also could not be used if the percentages added to less than 100%.
Table $$\PageIndex{2}$$: De Anza College Spring 2010

| Characteristic/Category | Percent |
|---|---|
| Full-Time Students | 40.9% |
| Students who intend to transfer to a 4-year educational institution | 48.6% |
| Students under age 25 | 61.0% |
| TOTAL | 150.5% |
Figure $$\PageIndex{2}$$: Bar chart of data in Table $$\PageIndex{2}$$.
### Omitting Categories/Missing Data
The table displays Ethnicity of Students but is missing the "Other/Unknown" category. This category contains people who did not feel they fit into any of the ethnicity categories or declined to respond. Notice that the frequencies do not add up to the total number of students. In this situation, create a bar graph and not a pie chart.
Table $$\PageIndex{2}$$: Ethnicity of Students at De Anza College Fall Term 2007 (Census Day)

| Ethnicity | Frequency | Percent |
|---|---|---|
| Asian | 8,794 | 36.1% |
| Black | 1,412 | 5.8% |
| Filipino | 1,298 | 5.3% |
| Hispanic | 4,180 | 17.1% |
| Native American | 146 | 0.6% |
| Pacific Islander | 236 | 1.0% |
| White | 5,978 | 24.5% |
| TOTAL | 22,044 out of 24,382 | 90.4% out of 100% |
Figure $$\PageIndex{3}$$: Enrollment of De Anza College (Spring 2010)
The following graph is the same as the previous graph but the “Other/Unknown” percent (9.6%) has been included. The “Other/Unknown” category is large compared to some of the other categories (Native American, 0.6%, Pacific Islander 1.0%). This is important to know when we think about what the data are telling us.
This particular bar graph in Figure $$\PageIndex{4}$$ can be difficult to understand visually. The graph in Figure $$\PageIndex{5}$$ is a Pareto chart. The Pareto chart has the bars sorted from largest to smallest and is easier to read and interpret.
Figure $$\PageIndex{4}$$: Bar Graph with Other/Unknown Category
Figure $$\PageIndex{5}$$: Pareto Chart With Bars Sorted by Size
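The Pareto ordering is nothing more than a sort of the category counts from largest to smallest. A minimal sketch using the ethnicity frequencies from the table above (Other/Unknown omitted, as in the bar graph):

```python
# Category frequencies from the ethnicity table
counts = {
    "Asian": 8794, "Black": 1412, "Filipino": 1298, "Hispanic": 4180,
    "Native American": 146, "Pacific Islander": 236, "White": 5978,
}

# Bars sorted largest-to-smallest: the order in which a Pareto chart draws them
pareto_order = sorted(counts, key=counts.get, reverse=True)
print(pareto_order)
```

Feeding the categories to a plotting library in this order yields the Pareto chart shown in the figure.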
### Pie Charts: No Missing Data
The following pie charts have the “Other/Unknown” category included (since the percentages must add to 100%). The chart in Figure $$\PageIndex{6; left}$$ is organized by the size of each wedge, which makes it a more visually informative graph than the unsorted, alphabetical graph in Figure $$\PageIndex{6; right}$$.
Figure $$\PageIndex{6}$$.
### Sampling
Gathering information about an entire population often costs too much or is virtually impossible. Instead, we use a sample of the population. A sample should have the same characteristics as the population it is representing. Most statisticians use various methods of random sampling in an attempt to achieve this goal. This section will describe a few of the most common methods. In each form of random sampling, each member of a population initially has an equal chance of being selected for the sample. Each method has pros and cons.

The easiest method to describe is called a simple random sample. Any group of n individuals is equally likely to be chosen as any other group of n individuals if the simple random sampling technique is used. In other words, each sample of the same size has an equal chance of being selected.

For example, suppose Lisa wants to form a four-person study group (herself and three other people) from her pre-calculus class, which has 31 members not including Lisa. To choose a simple random sample of size three from the other members of her class, Lisa could put all 31 names in a hat, shake the hat, close her eyes, and pick out three names. A more technological way is for Lisa to first list the last names of the members of her class together with a two-digit number, as in Table $$\PageIndex{3}$$:
Table $$\PageIndex{3}$$: Class Roster

| ID | Name | ID | Name | ID | Name |
|---|---|---|---|---|---|
| 00 | Anselmo | 11 | King | 21 | Roquero |
| 01 | Bautista | 12 | Legeny | 22 | Roth |
| 02 | Bayani | 13 | Lundquist | 23 | Rowell |
| 03 | Cheng | 14 | Macierz | 24 | Salangsang |
| 04 | Cuarismo | 15 | Motogawa | 25 | Slade |
| 05 | Cuningham | 16 | Okimoto | 26 | Stratcher |
| 06 | Fontecha | 17 | Patel | 27 | Tallai |
| 07 | Hong | 18 | Price | 28 | Tran |
| 08 | Hoobler | 19 | Quizon | 29 | Wai |
| 09 | Jiao | 20 | Reyes | 30 | Wood |
| 10 | Khan | | | | |
Lisa can use a table of random numbers (found in many statistics books and mathematical handbooks), a calculator, or a computer to generate random numbers. For this example, suppose Lisa chooses to generate random numbers from a calculator. The numbers generated are as follows:
0.94360; 0.99832; 0.14669; 0.51470; 0.40581; 0.73381; 0.04399
Lisa reads two-digit groups until she has chosen three class members (that is, she reads 0.94360 as the groups 94, 43, 36, 60). Each random number may only contribute one class member. If she needed to, Lisa could have generated more random numbers.
The random numbers 0.94360 and 0.99832 do not contain appropriate two digit numbers. However the third random number, 0.14669, contains 14 (the fourth random number also contains 14), the fifth random number contains 05, and the seventh random number contains 04. The two-digit number 14 corresponds to Macierz, 05 corresponds to Cuningham, and 04 corresponds to Cuarismo. Besides herself, Lisa’s group will consist of Marcierz, Cuningham, and Cuarismo.
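Lisa's two-digit reading procedure can be sketched in Python. Each decimal is scanned for its two-digit groups, each decimal contributes at most one valid ID, and scanning stops once three members are chosen:

```python
def pick_members(random_decimals, n_needed, max_id):
    """Read two-digit groups from each decimal; each decimal contributes at most one member."""
    chosen = []
    for r in random_decimals:
        digits = f"{r:.5f}".split(".")[1]        # the five digits after the decimal point
        for i in range(len(digits) - 1):
            pair = int(digits[i:i + 2])
            if pair <= max_id and pair not in chosen:
                chosen.append(pair)
                break                             # move on to the next random number
        if len(chosen) == n_needed:
            break
    return chosen

draws = [0.94360, 0.99832, 0.14669, 0.51470, 0.40581, 0.73381, 0.04399]
print(pick_members(draws, 3, 30))   # → [14, 5, 4]: Macierz, Cuningham, Cuarismo
```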
To generate random numbers:
• Press MATH.
• Arrow over to PRB.
• Press 5:randInt( and enter 0, 30).
• Press ENTER for the first random number.
• Press ENTER two more times for the other 2 random numbers. If there is a repeat press ENTER again.
Note: randInt(0, 30, 3) will generate 3 random numbers.
Figure $$\PageIndex{7}$$
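The same draw can be done with Python's standard-library `random` module. This is a sketch; the seed is only there to make the run reproducible, and re-drawing on a repeat mirrors pressing ENTER again:

```python
import random

random.seed(2024)                    # fixed seed so the sketch is reproducible
picks = []
while len(picks) < 3:                # like pressing ENTER again on a repeat
    n = random.randint(0, 30)        # inclusive on both ends, matching randInt(0, 30)
    if n not in picks:
        picks.append(n)
print(picks)                         # three distinct IDs between 0 and 30
```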
Besides simple random sampling, there are other forms of sampling that involve a chance process for getting the sample. Other well-known random sampling methods are the stratified sample, the cluster sample, and the systematic sample.
To choose a stratified sample, divide the population into groups called strata and then take a proportionate number from each stratum. For example, you could stratify (group) your college population by department and then choose a proportionate simple random sample from each stratum (each department) to get a stratified random sample. To choose a simple random sample from each department, number each member of the first department, number each member of the second department, and do the same for the remaining departments. Then use simple random sampling to choose proportionate numbers from the first department and do the same for each of the remaining departments. Those numbers picked from the first department, picked from the second department, and so on represent the members who make up the stratified sample.
To choose a cluster sample, divide the population into clusters (groups) and then randomly select some of the clusters. All the members from these clusters are in the cluster sample. For example, if you randomly sample four departments from your college population, the four departments make up the cluster sample. Divide your college faculty by department. The departments are the clusters. Number each department, and then choose four different numbers using simple random sampling. All members of the four departments with those numbers are the cluster sample.
To choose a systematic sample, randomly select a starting point and take every nth piece of data from a listing of the population. For example, suppose you have to do a phone survey. Your phone book contains 20,000 residence listings. You must choose 400 names for the sample. Number the population 1–20,000 and then use a simple random sample to pick a number that represents the first name in the sample. Then choose every fiftieth name thereafter until you have a total of 400 names (you might have to go back to the beginning of your phone list). Systematic sampling is frequently chosen because it is a simple method.
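The stratified, cluster, and systematic methods described above can be sketched in Python. The population, strata, and cluster sizes below are made-up stand-ins for the examples in the text (the systematic part mirrors the 20,000-listing phone book with every 50th name taken):

```python
import random

population = list(range(20_000))          # stands in for the 20,000 phone-book listings

# Stratified: a proportionate simple random sample from each stratum
strata = {"dept_a": population[:8_000], "dept_b": population[8_000:]}
stratified = [x for members in strata.values()
              for x in random.sample(members, len(members) // 1_000)]

# Cluster: randomly pick whole clusters and keep every member of them
clusters = [population[i:i + 2_000] for i in range(0, 20_000, 2_000)]
cluster_sample = [x for c in random.sample(clusters, 2) for x in c]

# Systematic: random starting point, then every 50th listing until 400 names
# (wrapping around to the beginning of the list if needed)
start = random.randrange(50)
systematic = [population[(start + 50 * k) % len(population)] for k in range(400)]
```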
A type of sampling that is non-random is convenience sampling. Convenience sampling involves using results that are readily available. For example, a computer software store conducts a marketing study by interviewing potential customers who happen to be in the store browsing through the available software. The results of convenience sampling may be very good in some cases and highly biased (favor certain outcomes) in others.
Sampling data should be done very carefully. Collecting data carelessly can have devastating results. Surveys mailed to households and then returned may be very biased (they may favor a certain group). It is better for the person conducting the survey to select the sample respondents.
True random sampling is done with replacement. That is, once a member is picked, that member goes back into the population and thus may be chosen more than once. However for practical reasons, in most populations, simple random sampling is done without replacement. Surveys are typically done without replacement. That is, a member of the population may be chosen only once. Most samples are taken from large populations and the sample tends to be small in comparison to the population. Since this is the case, sampling without replacement is approximately the same as sampling with replacement because the chance of picking the same individual more than once with replacement is very low.
In a college population of 10,000 people, suppose you want to pick a sample of 1,000 randomly for a survey. For any particular sample of 1,000, if you are sampling with replacement,
• the chance of picking the first person is 1,000 out of 10,000 (0.1000);
• the chance of picking a different second person for this sample is 999 out of 10,000 (0.0999);
• the chance of picking the same person again is 1 out of 10,000 (very low).
If you are sampling without replacement,
• the chance of picking the first person for any particular sample is 1000 out of 10,000 (0.1000);
• the chance of picking a different second person is 999 out of 9,999 (0.0999);
• you do not replace the first person before picking the next person.
Compare the fractions 999/10,000 and 999/9,999. For accuracy, carry the decimal answers to four decimal places. To four decimal places, these numbers are equivalent (0.0999).
Sampling without replacement instead of sampling with replacement becomes a mathematical issue only when the population is small. For example, if the population is 25 people, the sample is ten, and you are sampling with replacement for any particular sample, then the chance of picking the first person is ten out of 25, and the chance of picking a different second person is nine out of 25 (you replace the first person).
If you sample without replacement, then the chance of picking the first person is ten out of 25, and then the chance of picking the second person (who is different) is nine out of 24 (you do not replace the first person).
Compare the fractions 9/25 and 9/24. To four decimal places, 9/25 = 0.3600 and 9/24 = 0.3750. To four decimal places, these numbers are not equivalent.
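The two comparisons can be checked directly with a quick sketch:

```python
# Large population (10,000): with vs. without replacement agree to four decimal places
assert round(999 / 10_000, 4) == round(999 / 9_999, 4) == 0.0999

# Small population (25): the two probabilities differ noticeably
assert round(9 / 25, 4) == 0.3600
assert round(9 / 24, 4) == 0.3750
```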
When you analyze data, it is important to be aware of sampling errors and nonsampling errors. The actual process of sampling causes sampling errors. For example, the sample may not be large enough. Factors not related to the sampling process cause nonsampling errors. A defective counting device can cause a nonsampling error.
In reality, a sample will never be exactly representative of the population so there will always be some sampling error. As a rule, the larger the sample, the smaller the sampling error.
In statistics, a sampling bias is created when a sample is collected from a population and some members of the population are not as likely to be chosen as others (remember, each member of the population should have an equally likely chance of being chosen). When a sampling bias happens, there can be incorrect conclusions drawn about the population that is being studied.
Exercise $$\PageIndex{8}$$
A study is done to determine the average tuition that San Jose State undergraduate students pay per semester. Each student in the following samples is asked how much tuition he or she paid for the Fall semester. What is the type of sampling in each case?
1. A sample of 100 undergraduate San Jose State students is taken by organizing the students’ names by classification (freshman, sophomore, junior, or senior), and then selecting 25 students from each.
2. A random number generator is used to select a student from the alphabetical listing of all undergraduate students in the Fall semester. Starting with that student, every 50th student is chosen until 75 students are included in the sample.
3. A completely random method is used to select 75 students. Each undergraduate student in the fall semester has the same probability of being chosen at any stage of the sampling process.
4. The freshman, sophomore, junior, and senior years are numbered one, two, three, and four, respectively. A random number generator is used to pick two of those years. All students in those two years are in the sample.
5. An administrative assistant is asked to stand in front of the library one Wednesday and to ask the first 100 undergraduate students he encounters what they paid for tuition the Fall semester. Those 100 students are the sample.
a. stratified; b. systematic; c. simple random; d. cluster; e. convenience
Example $$\PageIndex{9}$$: Calculator
You are going to use the random number generator to generate different types of samples from the data. This table displays six sets of quiz scores (each quiz counts 10 points) for an elementary statistics class.
| #1 | #2 | #3 | #4 | #5 | #6 |
|---|---|---|---|---|---|
| 5 | 7 | 10 | 9 | 8 | 3 |
| 10 | 5 | 9 | 8 | 7 | 6 |
| 9 | 10 | 8 | 6 | 7 | 9 |
| 9 | 10 | 10 | 9 | 8 | 9 |
| 7 | 8 | 9 | 5 | 7 | 4 |
| 9 | 9 | 9 | 10 | 8 | 7 |
| 7 | 7 | 10 | 9 | 8 | 8 |
| 8 | 8 | 9 | 10 | 8 | 8 |
| 9 | 7 | 8 | 7 | 7 | 8 |
| 8 | 8 | 10 | 9 | 8 | 7 |
Instructions: Use the Random Number Generator to pick samples.
1. Create a stratified sample by column. Pick three quiz scores randomly from each column.
• Number each row one through ten.
• On your calculator, press Math and arrow over to PRB.
• For column 1, Press 5:randInt( and enter 1,10). Press ENTER. Record the number. Press ENTER 2 more times (even the repeats). Record these numbers. Record the three quiz scores in column one that correspond to these three numbers.
• Repeat for columns two through six.
• These 18 quiz scores are a stratified sample.
2. Create a cluster sample by picking two of the columns. Use the column numbers: one through six.
• Press MATH and arrow over to PRB.
• Press 5:randInt( and enter 1,6). Press ENTER. Record the number. Press ENTER and record that number.
• The two numbers are for two of the columns.
• The quiz scores (20 of them) in these 2 columns are the cluster sample.
3. Create a simple random sample of 15 quiz scores.
• Use the numbering one through 60.
• Press MATH. Arrow over to PRB. Press 5:randInt( and enter 1, 60).
• Press ENTER 15 times and record the numbers.
• Record the quiz scores that correspond to these numbers.
• These 15 quiz scores are the simple random sample.
4. Create a systematic sample of 12 quiz scores.
• Use the numbering one through 60.
• Press MATH. Arrow over to PRB. Press 5:randInt( and enter 1, 60).
• Press ENTER. Record the number and the first quiz score. From that number, count ten quiz scores and record that quiz score. Keep counting ten quiz scores and recording the quiz score until you have a sample of 12 quiz scores. You may wrap around (go back to the beginning).
Example $$\PageIndex{10}$$
Determine the type of sampling used (simple random, stratified, systematic, cluster, or convenience).
1. A soccer coach selects six players from a group of boys aged eight to ten, seven players from a group of boys aged 11 to 12, and three players from a group of boys aged 13 to 14 to form a recreational soccer team.
2. A pollster interviews all human resource personnel in five different high tech companies.
3. A high school educational researcher interviews 50 high school female teachers and 50 high school male teachers.
4. A medical researcher interviews every third cancer patient from a list of cancer patients at a local hospital.
5. A high school counselor uses a computer to generate 50 random numbers and then picks students whose names correspond to the numbers.
6. A student interviews classmates in his algebra class to determine how many pairs of jeans a student owns, on the average.
a. stratified; b. cluster; c. stratified; d. systematic; e. simple random; f. convenience
Exercise $$\PageIndex{11}$$
Determine the type of sampling used (simple random, stratified, systematic, cluster, or convenience).
A high school principal polls 50 freshmen, 50 sophomores, 50 juniors, and 50 seniors regarding policy changes for after school activities.
stratified
If we were to examine two samples representing the same population, even if we used random sampling methods for the samples, they would not be exactly the same. Just as there is variation in data, there is variation in samples. As you become accustomed to sampling, the variability will begin to seem natural.
Example $$\PageIndex{12}$$: Sampling
Suppose ABC College has 10,000 part-time students (the population). We are interested in the average amount of money a part-time student spends on books in the fall term. Asking all 10,000 students is an almost impossible task. Suppose we take two different samples.
First, we use convenience sampling and survey ten students from a first term organic chemistry class. Many of these students are taking first term calculus in addition to the organic chemistry class. The amount of money they spend on books is as follows:
$128; $87; $173; $116; $130; $204; $147; $189; $93; $153
The second sample is taken using a list of senior citizens who take P.E. classes and taking every fifth senior citizen on the list, for a total of ten senior citizens. They spend:
$50; $40; $36; $15; $50; $100; $40; $53; $22; $22
a. Do you think that either of these samples is representative of (or is characteristic of) the entire 10,000 part-time student population?
a. No. The first sample probably consists of science-oriented students. Besides the chemistry course, some of them are also taking first-term calculus. Books for these classes tend to be expensive. Most of these students are, more than likely, paying more than the average part-time student for their books. The second sample is a group of senior citizens who are, more than likely, taking courses for health and interest. The amount of money they spend on books is probably much less than the average part-time student. Both samples are biased. Also, in both cases, not all students have a chance to be in either sample.
b. Since these samples are not representative of the entire population, is it wise to use the results to describe the entire population?
b. No. For these samples, each member of the population did not have an equally likely chance of being chosen.
Now, suppose we take a third sample. We choose ten different part-time students from the disciplines of chemistry, math, English, psychology, sociology, history, nursing, physical education, art, and early childhood development. (We assume that these are the only disciplines in which part-time students at ABC College are enrolled and that an equal number of part-time students are enrolled in each of the disciplines.) Each student is chosen using simple random sampling. Using a calculator, random numbers are generated and a student from a particular discipline is selected if he or she has a corresponding number. The students spend the following amounts:
$180; $50; $150; $85; $260; $75; $180; $200; $200; $150
c. Is the sample biased?
Students often ask if it is "good enough" to take a sample, instead of surveying the entire population. If the survey is done well, the answer is yes.
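The simple random sampling step used for the third sample — number every member of the population, then use a random number generator to pick the sample — can be sketched in code. This is only an illustration: the function name, population size, and seed below are ours, not part of the text.

```c
#include <stdlib.h>

/* Draw 'k' distinct indices from a population numbered 0..n-1 using a
 * partial Fisher-Yates shuffle, writing them into 'out'.
 * Every member has the same chance of being chosen.
 * Returns 0 on success, -1 on bad arguments or allocation failure. */
int simple_random_sample(int n, int k, int *out, unsigned int seed)
{
    if (k < 0 || k > n) return -1;
    int *labels = malloc(n * sizeof *labels);
    if (!labels) return -1;
    for (int i = 0; i < n; i++) labels[i] = i;   /* number every member */
    srand(seed);
    for (int i = 0; i < k; i++) {
        int j = i + rand() % (n - i);            /* pick a remaining label */
        int tmp = labels[i]; labels[i] = labels[j]; labels[j] = tmp;
        out[i] = labels[i];
    }
    free(labels);
    return 0;
}
```

For the 10,000 part-time students at ABC College, `simple_random_sample(10000, 10, out, seed)` returns ten distinct student numbers, each student being equally likely to appear.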
Exercise $$\PageIndex{12}$$
A local radio station has a fan base of 20,000 listeners. The station wants to know if its audience would prefer more music or more talk shows. Asking all 20,000 listeners is an almost impossible task.
The station uses convenience sampling and surveys the first 200 people it meets at one of the station’s music concert events. Twenty-four people said they’d prefer more talk shows, and 176 said they’d prefer more music.
Do you think that this sample is representative of (or is characteristic of) the entire 20,000 listener population?
The sample probably consists mostly of people who prefer music, because it was taken at a concert event. It also represents only those who arrived at the event earliest. The sample probably does not represent the entire fan base and is likely biased toward people who would prefer more music.
COLLABORATIVE EXERCISE $$\PageIndex{8}$$
As a class, determine whether or not the following samples are representative. If they are not, discuss the reasons.
1. To find the average GPA of all students in a university, use all honor students at the university as the sample.
2. To find out the most popular cereal among young people under the age of ten, stand outside a large supermarket for three hours and speak to every twentieth child under age ten who enters the supermarket.
3. To find the average annual income of all adults in the United States, sample U.S. congressmen. Create a cluster sample by considering each state as a stratum (group). By using simple random sampling, select states to be part of the cluster. Then survey every U.S. congressman in the cluster.
4. To determine the proportion of people taking public transportation to work, survey 20 people in New York City. Conduct the survey by sitting in Central Park on a bench and interviewing every person who sits next to you.
5. To determine the average cost of a two-day stay in a hospital in Massachusetts, survey 100 hospitals across the state using simple random sampling.
### Variation in Data
Variation is present in any set of data. For example, 16-ounce cans of beverage may contain more or less than 16 ounces of liquid. In one study, eight 16 ounce cans were measured and produced the following amount (in ounces) of beverage:
15.8; 16.1; 15.2; 14.8; 15.8; 15.9; 16.0; 15.5
Measurements of the amount of beverage in a 16-ounce can may vary because different people make the measurements or because the exact amount, 16 ounces of liquid, was not put into the cans. Manufacturers regularly run tests to determine if the amount of beverage in a 16-ounce can falls within the desired range.
Be aware that as you take data, your data may vary somewhat from the data someone else is taking for the same purpose. This is completely natural. However, if two or more of you are taking the same data and get very different results, it is time for you and the others to reevaluate your data-taking methods and your accuracy.
### Variation in Samples
It was mentioned previously that two or more samples from the same population, taken randomly, and having close to the same characteristics of the population will likely be different from each other. Suppose Doreen and Jung both decide to study the average amount of time students at their college sleep each night. Doreen and Jung each take samples of 500 students. Doreen uses systematic sampling and Jung uses cluster sampling. Doreen's sample will be different from Jung's sample. Even if Doreen and Jung used the same sampling method, in all likelihood their samples would be different. Neither would be wrong, however.
Think about what contributes to making Doreen’s and Jung’s samples different.
If Doreen and Jung took larger samples (i.e. the number of data values is increased), their sample results (the average amount of time a student sleeps) might be closer to the actual population average. But still, their samples would be, in all likelihood, different from each other. This variability in samples cannot be stressed enough.
### Size of a Sample
The size of a sample (often called the number of observations) is important. The examples you have seen in this book so far have been small. Samples of only a few hundred observations, or even smaller, are sufficient for many purposes. In polling, samples that are from 1,200 to 1,500 observations are considered large enough and good enough if the survey is random and is well done. You will learn why when you study confidence intervals.
Be aware that many large samples are biased. For example, call-in surveys are invariably biased, because people choose to respond or not.
COLLABORATIVE EXERCISE $$\PageIndex{8}$$
Divide into groups of two, three, or four. Your instructor will give each group one six-sided die. Try this experiment twice. Roll one fair die (six-sided) 20 times. Record the number of ones, twos, threes, fours, fives, and sixes you get in the following table (“frequency” is the number of times a particular face of the die occurs):
| Face on Die | Frequency (First Experiment, 20 rolls) | Frequency (Second Experiment, 20 rolls) |
| --- | --- | --- |
| 1 | | |
| 2 | | |
| 3 | | |
| 4 | | |
| 5 | | |
| 6 | | |
Did the two experiments have the same results? Probably not. If you did the experiment a third time, do you expect the results to be identical to the first or second experiment? Why or why not?
Which experiment had the correct results? They both did. The job of the statistician is to see through the variability and draw appropriate conclusions.
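The variability this exercise demonstrates is easy to reproduce in code. Here is a minimal sketch: it uses a small deterministic generator (a linear congruential generator with arbitrary, conventional constants) so that runs are repeatable; different seeds play the role of different experiments.

```c
/* Tally 'rolls' throws of a fair six-sided die into freq[0..5],
 * where freq[i] counts face i+1. 'state' seeds a tiny LCG so the
 * sketch is self-contained and repeatable. */
void roll_die(int rolls, int freq[6], unsigned int state)
{
    for (int i = 0; i < 6; i++) freq[i] = 0;
    for (int i = 0; i < rolls; i++) {
        state = state * 1103515245u + 12345u;  /* LCG step */
        int face = (state >> 16) % 6;          /* 0..5, i.e. faces 1..6 */
        freq[face]++;
    }
}
```

Running this twice with different seeds almost always gives two different frequency tables, yet both sum to 20 rolls — exactly the point of the exercise: both results are "correct."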
### Critical Evaluation
We need to evaluate the statistical studies we read about critically and analyze them before accepting the results of the studies. Common problems to be aware of include
• Problems with samples: A sample must be representative of the population. A sample that is not representative of the population is biased. Biased samples that are not representative of the population give results that are inaccurate and not valid.
• Self-selected samples: Responses only by people who choose to respond, such as call-in surveys, are often unreliable.
• Sample size issues: Samples that are too small may be unreliable. Larger samples are better, if possible. In some situations, having small samples is unavoidable and can still be used to draw conclusions. Examples: crash testing cars or medical testing for rare conditions
• Undue influence: collecting data or asking questions in a way that influences the response
• Non-response or refusal of subject to participate: The collected responses may no longer be representative of the population. Often, people with strong positive or negative opinions may answer surveys, which can affect the results.
• Causality: A relationship between two variables does not mean that one causes the other to occur. They may be related (correlated) because of their relationship through a different variable.
• Self-funded or self-interest studies: A study performed by a person or organization in order to support their claim. Is the study impartial? Read the study carefully to evaluate the work. Do not automatically assume that the study is good, but do not automatically assume the study is bad either. Evaluate it on its merits and the work done.
• Misleading use of data: improperly displayed graphs, incomplete data, or lack of context
• Confounding: When the effects of multiple factors on a response cannot be separated. Confounding makes it difficult or impossible to draw valid conclusions about the effect of each factor.
### References
1. Gallup-Healthways Well-Being Index. http://www.well-beingindex.com/default.asp (accessed May 1, 2013).
2. Gallup-Healthways Well-Being Index. http://www.well-beingindex.com/methodology.asp (accessed May 1, 2013).
3. Gallup-Healthways Well-Being Index. http://www.gallup.com/poll/146822/ga...questions.aspx (accessed May 1, 2013).
4. Data from http://www.bookofodds.com/Relationsh...-the-President
5. Dominic Lusinchi, “’President’ Landon and the 1936 Literary Digest Poll: Were Automobile and Telephone Owners to Blame?” Social Science History 36, no. 1: 23-54 (2012), http://ssh.dukejournals.org/content/36/1/23.abstract (accessed May 1, 2013).
6. “The Literary Digest Poll,” Virtual Laboratories in Probability and Statistics http://www.math.uah.edu/stat/data/LiteraryDigest.html (accessed May 1, 2013).
7. “Gallup Presidential Election Trial-Heat Trends, 1936–2008,” Gallup Politics http://www.gallup.com/poll/110548/ga...9362004.aspx#4 (accessed May 1, 2013).
8. The Data and Story Library, http://lib.stat.cmu.edu/DASL/Datafiles/USCrime.html (accessed May 1, 2013).
9. LBCC Distance Learning (DL) program data in 2010-2011, http://de.lbcc.edu/reports/2010-11/f...hts.html#focus (accessed May 1, 2013).
10. Data from San Jose Mercury News
### Chapter Review
Data are individual items of information that come from a population or sample. Data may be classified as qualitative, quantitative continuous, or quantitative discrete.
Because it is not practical to measure the entire population in a study, researchers use samples to represent the population. A random sample is a representative group from the population chosen by using a method that gives each individual in the population an equal chance of being included in the sample. Random sampling methods include simple random sampling, stratified sampling, cluster sampling, and systematic sampling. Convenience sampling is a nonrandom method of choosing a sample that often produces biased data.
Samples that contain different individuals result in different data. This is true even when the samples are well-chosen and representative of the population. When properly selected, larger samples model the population more closely than smaller samples. There are many different potential problems that can affect the reliability of a sample. Statistical data needs to be critically analyzed, not simply accepted.
### Footnotes
1. lastbaldeagle. 2013. On Tax Day, House to Call for Firing Federal Workers Who Owe Back Taxes. Opinion poll posted online at: http://www.youpolls.com/details.aspx?id=12328 (accessed May 1, 2013).
2. Scott Keeter et al., “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey,” Public Opinion Quarterly 70 no. 5 (2006), http://poq.oxfordjournals.org/content/70/5/759.full (accessed May 1, 2013).
3. Frequently Asked Questions, Pew Research Center for the People & the Press, http://www.people-press.org/methodol...wer-your-polls (accessed May 1, 2013).
### Glossary
Cluster Sampling
a method for selecting a random sample and dividing the population into groups (clusters); use simple random sampling to select a set of clusters. Every individual in the chosen clusters is included in the sample.
Continuous Random Variable
a random variable (RV) whose outcomes are measured; the height of trees in the forest is a continuous RV.
Convenience Sampling
a nonrandom method of selecting a sample; this method selects individuals that are easily accessible and may result in biased data.
Discrete Random Variable
a random variable (RV) whose outcomes are counted
Nonsampling Error
an issue that affects the reliability of sampling data other than natural variation; it includes a variety of human errors such as poor study design, biased sampling methods, inaccurate information provided by study participants, data entry errors, and poor analysis.
Qualitative Data
See Data.
Quantitative Data
See Data.
Random Sampling
a method of selecting a sample that gives every member of the population an equal chance of being selected.
Sampling Bias
not all members of the population are equally likely to be selected
Sampling Error
the natural variation that results from selecting a sample to represent a larger population; this variation decreases as the sample size increases, so selecting larger samples reduces sampling error.
Sampling with Replacement
Once a member of the population is selected for inclusion in a sample, that member is returned to the population for the selection of the next individual.
Sampling without Replacement
A member of the population may be chosen for inclusion in a sample only once. If chosen, the member is not returned to the population before the next selection.
Simple Random Sampling
a straightforward method for selecting a random sample; give each member of the population a number. Use a random number generator to select a set of labels. These randomly selected labels identify the members of your sample.
Stratified Sampling
a method for selecting a random sample used to ensure that subgroups of the population are represented adequately; divide the population into groups (strata). Use simple random sampling to identify a proportionate number of individuals from each stratum.
Systematic Sampling
a method for selecting a random sample; list the members of the population. Use simple random sampling to select a starting point in the population. Let k = (number of individuals in the population)/(number of individuals needed in the sample). Choose every kth individual in the list starting with the one that was randomly selected. If necessary, return to the beginning of the population list to complete your sample.
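The systematic sampling recipe above — compute k, pick a starting point, take every kth member, wrapping to the start of the list if necessary — can be sketched as follows (an illustration only; the function name is ours):

```c
/* Select 'sample_size' member indices from a population of size
 * 'pop_size', taking every kth member from index 'start' and
 * wrapping around to the beginning of the list if necessary. */
void systematic_sample(int pop_size, int sample_size, int start, int *out)
{
    int k = pop_size / sample_size;           /* step between selections */
    for (int i = 0; i < sample_size; i++)
        out[i] = (start + i * k) % pop_size;  /* wrap to the list start */
}
```

In practice `start` would itself be chosen by simple random sampling, as the definition says.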
http://mathhelpforum.com/calculus/139192-having-problems-numerical-methods-question.html
# Math Help - Having problems with numerical methods question
1. ## Having problems with numerical methods question
How would I do the following 2 questions below??
Thanks
2. Originally Posted by mistry88
How would I do the following 2 questions below??
Thanks
integrate by parts.
let $u = \tan^{-1}(x)$, and let $dv = x\,dx$. Then proceed.
have you done integration by parts before?
3. Originally Posted by harish21
integrate by parts.
let $u = \tan^{-1}(x)$, and let $dv = x\,dx$. Then proceed.
have you done integration by parts before?
O ok thank you, yes I have done integration by parts before. I just proved it to equal the answer, but now don't know how to do the second part..
4. Originally Posted by mistry88
O ok thank you, yes I have done integration by parts before. I just proved it to equal the answer, but now don't know how to do the second part..
Where are you stuck? What have you tried? Have you reviewed your class notes and textbook for the formula?
5. Its ok now, I have figured out how to do it and have finished this question
6. Originally Posted by mistry88
How would I do the following 2 questions below??
Thanks
These questions look like they could be part of an assignment that counts towards your final grade. MHF policy is to not knowingly help with such questions as they are meant to be the work of the student not the work of others.
Thread closed. Please feel free to pm me to discuss this further.
https://jasn.asnjournals.org/highwire/markup/38825/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_tables%2Chighwire_reclass%2Chighwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed
Table 3.
Incidence rate of ESRD in patients with T2D during 8–12 years of follow-up according to quartiles of the distributions of baseline plasma concentrations of markers of the TNF pathway
The four marker columns give the incidence rate^b in each quartile:

| Quartile^a | Number of Patients | Free TNFα | Total TNFα | TNFR1 | TNFR2 |
| --- | --- | --- | --- | --- | --- |
| Q1 | 102 | 3 (3) | 0 | 0 | 0 |
| Q2 | 102 | 7 (7) | 3 (3) | 1 (1) | 2 (2) |
| Q3 | 103 | 12 (11) | 11 (11) | 5 (5) | 5 (5) |
| Q4 | 101 | 49 (38) | 68 (45) | 84 (53) | 78 (52) |
| P for trend^c | | <10^−11 | <10^−12 | <10^−12 | <10^−12 |
• a Quartile boundaries are as follows. Free TNFα (pg/ml): 3.0 for the 25th percentile, 4.3 for the 50th percentile, and 6.7 for the 75th percentile; total TNFα (pg/ml): 8.1 for the 25th percentile, 12.5 for the 50th percentile, and 17.2 for the 75th percentile; TNFR1 (pg/ml): 1049 for the 25th percentile, 1310 for the 50th percentile, and 1837 for the 75th percentile; TNFR2 (pg/ml): 2017 for the 25th percentile, 2527 for the 50th percentile, and 3363 for the 75th percentile.
• b Per 1000 person-years. Number of events is indicated in parentheses.
• c Bonferroni correction was applied.
https://hoverbear.org/blog/quadcopters-stabilization/
In our past articles we've explored some of the basics of the mechanics of Quadcopters. In this article we'll be doing something a bit different and discussing the algorithms behind how the Quadcopter keeps itself stable.
To do this we'll actually be inspecting some of the official Bitcraze Firmware and its stabilizer.c implementation.
It's okay if you don't know C or understand what's going on in this file, that's part of the purpose of this article!
The Crazyflie uses a real time operating system called FreeRTOS which is a well regarded industry standard.
# Structure
C code and Arduino code are fairly similar, and the best practice is to lay out your code roughly as follows:
// Includes
// Definitions
// Variables
// Functions
So what are all these? Let's break them down.
## Includes
In order to use code from other files it's necessary to bring them "in scope". Includes come in two forms:
#include <math.h> // Use a system provided library.
#include "FreeRTOS.h" // Use from the project library.
Notice how we include .h files instead of .c files? These are called header files and contain functions, variables, and definitions. In most cases, each .h file has a respective .c file.
The distinction between .c and .h files is largely a historical one. Some modern languages have combined the two.
## Definitions
Definitions are a way to assign certain values to specific names. #defines can be values or expressions.
#define example 1
#define max(a,b) ((a) > (b) ? (a) : (b))
#define min(a,b) ((a) < (b) ? (a) : (b))
Note: Definitions cannot change while the program is running; this is not a place for variables.
Consider them like "macros": if we write max(1,2), the preprocessor replaces it with ((1) > (2) ? (1) : (2)).
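Because a #define is pure textual substitution, the extra parentheses in the max/min macros matter, and arguments with side effects get pasted into the expansion more than once. A small demonstration (the function names here are ours, for illustration):

```c
#define max(a,b) ((a) > (b) ? (a) : (b))
#define min(a,b) ((a) < (b) ? (a) : (b))

/* After preprocessing, the body of this function is literally
 * ((1) > (2) ? (1) : (2)), which evaluates to 2. */
int max_demo(void) { return max(1, 2); }

/* The classic macro pitfall: max(i++, j) expands to
 * ((i++) > (j) ? (i++) : (j)), so i++ runs twice here. */
int side_effect_demo(void)
{
    int i = 5, j = 0;
    int m = max(i++, j);
    (void)m;
    return i;   /* 7, not the 6 you might expect */
}
```

This is why firmware code is careful about what it passes into macros, and why some projects prefer inline functions instead.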
## Variables
We've used variables in our Arduino experiments already. Variables are the main workhorse of data storage.
int foo;
static int bar = 2;
const int baz = 3;
foo = 1;
Variables follow the format type name = value. You can also write just type name, in which case name is uninitialized (its value is indeterminate) until it is set.
Sometimes you'll also see things like static and const in front. static variables exist over the lifetime of the program and are unique inside a given code file; they are not accessible outside of it. const variables cannot change their value after they are declared.
Not all variables will be simple values, for example, below we declare three Axis3f. When designing programs it's quite easy to create your own types to store whatever you might need.
static Axis3f gyro; // Gyro axis data in deg/s
static Axis3f acc; // Accelerometer axis data in mG
static Axis3f mag; // Magnetometer axis data in tesla
You'll see float occur commonly in the stabilization code, this is a decimal value like 0.00001.
## Functions
Functions are step-by-step procedures which (generally) have an input and an output. The simplest function is this:
void foo() {}
This is a function with no input or output! void is the return type; it means nothing is returned. A function which takes a pair of integers and returns their sum looks like this:
int sum(int a, int b) {
return a+b;
}
Functions can be invoked by calling them like so:
int should_be_three = sum(1, 2);
# Understanding the Code
The stabilization code is broken up into a few sections. We'll take the code directly from the project and go over it slowly. If anything doesn't make sense please email me and I'll make it more clear!
## Initialization
void stabilizerInit(void)
{
if(isInit)
return;
motorsInit();
imu6Init();
sensfusion6Init();
controllerInit();
rollRateDesired = 0;
pitchRateDesired = 0;
yawRateDesired = 0;
isInit = true;
}
The stabilizerInit() function is what starts up the stabilization routines. You can see in the first line that if it has already been initialized the function simply returns early, doing nothing. (Note how isInit is set at the end of a normal call.)
The code then initializes its dependencies (which also exit early if already initialized!). Afterwards, it sets the desired orientation values to zero.
Finally, the function calls xTaskCreate which spawns a task which can run concurrently alongside other tasks. This particular task runs the stabilizerTask() function.
Okay, so what does this task look like then? Let's take a look! This function is longer, so I'll be breaking it up.
static void stabilizerTask(void* param)
{
uint32_t attitudeCounter = 0;
uint32_t altHoldCounter = 0;
uint32_t lastWakeTime;
//Wait for the system to be fully started to start stabilization loop
systemWaitStart();
while(1)
{
In the first few lines the function allocates space for some 32-bit unsigned integers; these hold only non-negative whole numbers. You can see that lastWakeTime is set later in the code. There are a few functions whose purpose is not immediately clear; let's go over them.
You'll notice as well that there is the start of a while(1) loop, which is an infinite loop, and will keep going until it is manually exited. Let's move forward.
imu9Read(&gyro, &acc, &mag);
// Magnetometer not yet used more then for logging.
if (imu6IsCalibrated())
{
commanderGetRPY(&eulerRollDesired, &eulerPitchDesired, &eulerYawDesired);
commanderGetRPYType(&rollType, &pitchType, &yawType);
// 250HZ
if (++attitudeCounter >= ATTITUDE_UPDATE_RATE_DIVIDER)
{
sensfusion6UpdateQ(gyro.x, gyro.y, gyro.z, acc.x, acc.y, acc.z, FUSION_UPDATE_DT);
sensfusion6GetEulerRPY(&eulerRollActual, &eulerPitchActual, &eulerYawActual);
accWZ = sensfusion6GetAccZWithoutGravity(acc.x, acc.y, acc.z);
accMAG = (acc.x*acc.x) + (acc.y*acc.y) + (acc.z*acc.z);
// Estimate speed from acc (drifts)
controllerCorrectAttitudePID(eulerRollActual, eulerPitchActual, eulerYawActual,
eulerRollDesired, eulerPitchDesired, -eulerYawDesired,
&rollRateDesired, &pitchRateDesired, &yawRateDesired);
attitudeCounter = 0;
}
At the top of this chunk you'll see imu9Read(&gyro, &acc, &mag); which, if you've never used pointers, may seem odd. Essentially what we're doing is calling the imu9Read function and passing it the three pointers to the location of our variables. The function can then dereference these pointers and write into them. This is a common practice when you want to modify a complex value in a function without needing to copy the entire thing.
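This out-parameter pattern can be shown with a stripped-down stand-in. The function and the fixed values below are hypothetical, purely to illustrate the calling convention:

```c
typedef struct { float x, y, z; } Axis3f;

/* Mimics the imu9Read calling convention: the caller passes pointers
 * to its own variables, and the function dereferences them and writes
 * results through them — no struct copies are made. */
void read_axes(Axis3f *gyro, Axis3f *acc)
{
    gyro->x = 1.0f; gyro->y = 2.0f; gyro->z = 3.0f;  /* pretend sensor data */
    acc->x  = 0.0f; acc->y  = 0.0f; acc->z  = 9.8f;
}
```

A caller writes `Axis3f gyro, acc; read_axes(&gyro, &acc);` — the & operator takes each variable's address, just like the & values in the firmware.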
The commanderGetRPY() and commanderGetRPYType() fetch the desired inputs from the user, like an increase in pitch or roll.
After, if a counter is high enough (the ++ increments it) we do an 'attitude' update. This is not to be confused with altitude: attitude is the standard aerospace term for the craft's orientation, i.e. its roll, pitch, and yaw.
The sensfusion6UpdateQ() and sensfusion6GetEulerRPY() functions pull the current quadcopter orientation from the sensors onboard. You may note that this only updates the Gyro and Accelerometer; that's because the Crazyflie does not use the Magnetometer in this code yet.
AccWZ is used along with a deadband (which ignores readings below a small threshold, filtering out sensor noise) to estimate the vertical speed of the device. It appears that AccMAG is unused.
Then controllerCorrectAttitudePID() is called. This takes the desired values and through a round-about method works with PidObjects to update the pointers we pass in (The & values). PidObjects are used to model mathematics that drive the quadcopter.
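Those PidObjects implement the standard PID (proportional-integral-derivative) control law. A generic single-axis update step looks roughly like the sketch below — this is not the Crazyflie's exact implementation, and the names are ours:

```c
typedef struct {
    float kp, ki, kd;    /* proportional, integral, derivative gains */
    float integral;      /* accumulated error over time */
    float prev_error;    /* error from the previous step */
} Pid;

/* One PID update: given the desired and measured value and the time
 * step dt, return a corrective output for the actuators. */
float pid_update(Pid *pid, float desired, float actual, float dt)
{
    float error = desired - actual;
    pid->integral += error * dt;
    float derivative = (error - pid->prev_error) / dt;
    pid->prev_error = error;
    return pid->kp * error + pid->ki * pid->integral + pid->kd * derivative;
}
```

In the firmware, a structure like this runs for each axis (roll, pitch, yaw), with the attitude controller's outputs feeding in as the desired rates.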
// 100HZ
if (imuHasBarometer() && (++altHoldCounter >= ALTHOLD_UPDATE_RATE_DIVIDER))
{
stabilizerAltHoldUpdate();
altHoldCounter = 0;
}
if (rollType == RATE)
{
rollRateDesired = eulerRollDesired;
}
if (pitchType == RATE)
{
pitchRateDesired = eulerPitchDesired;
}
if (yawType == RATE)
{
yawRateDesired = -eulerYawDesired;
}
Next, if necessary, an altitude hold update is performed. Afterwards the three axes of movement are updated to their desired values.
// TODO: Investigate possibility to subtract gyro drift.
controllerCorrectRatePID(gyro.x, -gyro.y, gyro.z,
rollRateDesired, pitchRateDesired, yawRateDesired);
controllerGetActuatorOutput(&actuatorRoll, &actuatorPitch, &actuatorYaw);
if (!altHold || !imuHasBarometer())
{
// Use thrust from controller if not in altitude hold mode
commanderGetThrust(&actuatorThrust);
}
else
{
// Added so thrust can be set to 0 while in altitude hold mode after disconnect
commanderWatchdog();
}
Here the task corrects the rotation rates with the rate PID and reads back the actuator outputs. It then sets the thrust from user input or, in altitude-hold mode, from the altitude-hold controller.
if (actuatorThrust > 0)
{
#if defined(TUNE_ROLL)
distributePower(actuatorThrust, actuatorRoll, 0, 0);
#elif defined(TUNE_PITCH)
distributePower(actuatorThrust, 0, actuatorPitch, 0);
#elif defined(TUNE_YAW)
distributePower(actuatorThrust, 0, 0, -actuatorYaw);
#else
distributePower(actuatorThrust, actuatorRoll, actuatorPitch, -actuatorYaw);
#endif
}
else
{
distributePower(0, 0, 0, 0);
controllerResetAllPID();
}
}
}
Finally, the task distributes power to the actuators. The #if defined(TUNE_ROLL) lines are compile time options meant for debugging, normally the #else case is used.
# What does it all mean?
In order to stabilize itself a quadcopter must rapidly, constantly sample its sensors, controller, and actuators in order to get the best picture of two very important things:
• What it is doing at that moment.
• What it should be doing at that moment.
In order to determine these values it uses mathematical models to get an idea of its orientation and status. If you read our article on sensors you may recall how we transformed our accelerometer data into velocity data; this is the same idea. Then, the quadcopter attempts to find a happy, stable place that satisfies these requirements.
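That accelerometer-to-velocity transformation is, at its simplest, a running (Euler) integration of acceleration over time. A minimal sketch:

```c
/* Integrate a series of acceleration samples (m/s^2), taken dt seconds
 * apart, into a velocity estimate (m/s). Note that any constant bias in
 * the samples accumulates linearly — this is why such estimates drift,
 * and why the firmware comments mention drift. */
float integrate_velocity(const float *acc, int n, float dt)
{
    float v = 0.0f;
    for (int i = 0; i < n; i++)
        v += acc[i] * dt;   /* v = v_prev + a*dt at each step */
    return v;
}
```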
https://brilliant.org/problems/its-easy-believe-me-2/
A number theory problem by Thành Đạt Lê
Find the smallest value of $$n \in \mathbb Z$$ such that $$(2n + 1) \mid (2n^{2} - n + 2)$$.
https://ncatlab.org/nlab/show/J%C3%B3nsson-Tarski+algebra
# nLab Jónsson-Tarski algebra
## Idea
A Jónsson-Tarski algebra is a set that looks like two copies of itself. Since the classical examples historically occurred in the cardinal arithmetic of Georg Cantor, they are also known as Cantor algebras.
## Definition
A Jónsson-Tarski algebra, also called a Cantor algebra, is a set $A$ together with an isomorphism $A\cong A\times A$.
More generally, an object $A$ in a symmetric monoidal category $\mathcal{M}$ together with an isomorphism $\alpha:A\otimes A\rightarrow A$ is called a Jónsson-Tarski object, or an idempotent object (Fiore&Leinster 2010).
In another possible direction for generalization, one defines a Jónsson-Tarski n-algebra as a set $X$ together with an isomorphism $X\overset{\simeq}{\to}X^n$ (cf. Smirnov 1971, Higman 1974).1
## Properties
• Clearly (at least in classical mathematics), any Jónsson-Tarski algebra is either empty, a singleton, or infinite.
• The structure of a Jónsson-Tarski algebra can be described by an algebraic theory, with one binary operation $\mu$ and two unary operations $\lambda$ and $\rho$ such that $\mu(\lambda(x),\rho(x)) = x$, $\lambda(\mu(x,y))=x$, and $\rho(\mu(x,y))=y$.
• Any two Jónsson-Tarski algebras freely generated from finite non-empty sets are isomorphic. It is to this property that they owe their introduction (Jónsson&Tarski 1956, 1961).
• Just like in the category $Grp$ of groups, subalgebras of free algebras are free themselves (cf. this Stackexchange question).
• The category of Jónsson-Tarski algebras is a topos, the so-called Jónsson-Tarski topos $\mathcal{J}_2$, and hence is an example of a variety that is also a topos (cf. Johnstone 1985).
• The Thompson Group F is the group of order-preserving automorphisms of the free Jónsson-Tarski algebra on one generator (cf. Fiore-Leinster 2010).
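To make the algebraic description with $\mu$, $\lambda$, $\rho$ concrete, here is a toy model in Python on binary sequences (sequences modelled as functions $\mathbb{N}\to\{0,1\}$; this particular encoding is an illustrative choice, not taken from the literature): $\mu$ interleaves two sequences, while $\lambda$ and $\rho$ extract the even- and odd-indexed subsequences.

```python
# Toy Jónsson-Tarski algebra on binary sequences, represented as
# functions N -> {0, 1}: mu interleaves, lam/rho de-interleave.
def mu(a, b):
    return lambda n: a(n // 2) if n % 2 == 0 else b(n // 2)

def lam(x):
    return lambda n: x(2 * n)

def rho(x):
    return lambda n: x(2 * n + 1)

x = lambda n: (n * n + 1) % 2   # an arbitrary binary sequence
y = lambda n: n % 2             # another one

# Check the three defining identities on a finite prefix:
prefix = range(20)
assert all(mu(lam(x), rho(x))(n) == x(n) for n in prefix)
assert all(lam(mu(x, y))(n) == x(n) for n in prefix)
assert all(rho(mu(x, y))(n) == y(n) for n in prefix)
print("identities hold on the sampled prefix")
```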
## References
• K. S. Brown, Finiteness Properties of Groups , JPAA 44 (1987) pp.45-75.
• J. Dubeau, Jónsson Jónsson-Tarski algebras , arXiv:2202.02460 (2022). (abstract)
• J. Dudek, A. W. Marczak, On Cantor Identities , Algebra Universalis 68 (2012) pp.237–247.
• Marcelo Fiore, Tom Leinster, An abstract characterization of Thompson’s group F , arXiv.math/0508617 (2010). (pdf)
• R. Freese, J. B. Nation, Free Jónsson-Tarski algebras , ms. 2020. (pdf)
• G. Higman, Finitely presented infinite simple groups , Notes on Pure Mathematics 8 (1974) Australian National University Canberra.
• P. Hines, The Categorical Theory of Self-Similarity , TAC 6 no.3 (1999). (abstract)
• Peter Johnstone, When is a Variety a Topos? , Algebra Universalis 21 (1985) pp.198-212.
• Peter Johnstone, Collapsed Toposes and Cartesian Closed Varieties , JA 129 (1990) pp.446-480.
• B. Jónsson, A. Tarski , Two General Theorems Concerning Free Algebras , Bull. Amer. Math. Soc. 62 p.554. (pdf)
• B. Jónsson, A. Tarski , On Two Properties of Free Algebras , Math. Scand. 9 (1961) pp.95-101. (pdf)
• Tom Leinster, Jónsson-Tarski toposes, Talk Nice 2007. (slides)
• A. K. Rumjancev, An independent basis for the quasi-identities of a free Cantor algebra , Algebra and Logic 16 (1977) pp.119-129.
• D. M. Smirnov, Cantor algebras with a single generator I , Algebra and Logic 10 (1971) pp.40-49.
• D. M. Smirnov, Cantor algebras with a single generator II , Algebra and Logic 12 (1973) pp.399-404.
• D. M. Smirnov, Bases and automorphisms of free Cantor algebras of finite rank , Algebra and Logic 13 (1974) pp.17-33.
• S. Swierczkowski, On isomorphic free algebras , Fund. Math. 50 (1961) pp.35–44.
1. A profunctorial variation on this theme has been proposed by Leinster (2007). See at Jónsson-Tarski topos for some details.
Last revised on May 4, 2022 at 13:03:05. See the history of this page for a list of all contributions to it.
|
2023-01-27 11:12:20
|
|
https://tex.stackexchange.com/questions/193848/making-a-probability-tree-using-tikzpicture-forest
|
# Making a probability tree using tikzpicture / forest
I want to create a probability tree using either tikzpicture or forest. I have experimented a little bit with both, but I'm running into some trouble with both of the programs.
First, I have my code for tikzpicture that I want to create:
\begin{tikzpicture}[grow=right, sloped]
\node[bag] {(1, 1)}
child {
node[bag] {($\frac{1}{2}$, $\frac{1}{2}$)}
child {
node {($\frac{1}{4}$, $\frac{1}{4}$)}
edge from parent
node[above] {$T$}
node[below] {$\frac{1}{2}$}
}
child {
node {(1, 1)}
edge from parent
node[above] {$H$}
node[below] {$\frac{1}{2}$}
}
edge from parent
node[above] {$T$}
node[below] {$\frac{1}{2}$}
}
child {
node[bag] {(2, 2)}
child {
node {(1, 1)}
edge from parent
node[above] {$T$}
node[below] {$\frac{1}{2}$}
}
child {
node {(4, 4)}
edge from parent
node[above] {$H$}
node[below] {$\frac{1}{2}$}
}
edge from parent
node[above] {$H$}
node[below] {$\frac{1}{2}$}
};
\end{tikzpicture}
Next, I have my forest code:
\begin{forest}
for tree={grow=0,l=3cm,anchor=west,child anchor=west}
[{$(1, 1)$}
[{$(\frac{1}{2}, \frac{1}{2})$}
[{$(\frac{1}{4}, \frac{1}{4})$}]
[{$(1,1)$}]
]
[{$(2, 2)$}
[{$(1,1)$}]
[{$(4,4)$}]
]
]
\end{forest}
I like the tikzpicture style and spacing very much, except I would love to be able to have labels along the bottom that say "$t=0$", "$t=1$", "$t=2$", etc., all in a single horizontal line. Is there a way to do this in tikzpicture? I want the labels like in the answer to this question.
Regarding the forest code, I think that the diagram is too condensed and squished together. Is there a way to make it more spread out, like the tikzpicture? Also, is there a method to create labels "$t=0$", "$t=1$", "$t=2$", etc. as above?
The above graph is my tikzpicture and the below is my forest.
• Can you please edit your question and combine your code fragments into a single compilable document? This will make it easier for others to play with the code. – Alan Munn Jul 29 '14 at 23:00
• The question you linked to with the labels you want has an answer which uses forest so you can adapt that to do the labelling. Try fit=rectangle or fit=band for a less compact tree. (Page 29.) – cfr Jul 29 '14 at 23:02
One way to space out the tree in forest is to increase the minimum distance between the siblings. As mentioned above, fit can also be used to adjust the spread.
\documentclass{standalone}
\usepackage{forest}
\usetikzlibrary{positioning}
\begin{document}
\begin{forest}
for tree={grow=0,l=3cm,anchor=west,child anchor=west, s sep+=10pt}
[{$(1, 1)$}, name=t0
[{$(\frac{1}{2}, \frac{1}{2})$}
[{$(\frac{1}{4}, \frac{1}{4})$}, name=bot]
[{$(1,1)$}]
]
[{$(2, 2)$}, name=t1
[{$(1,1)$}]
[{$(4,4)$}, name=t2]
]
]
\coordinate [below=of bot] (coord);
\foreach \i in {0,...,2}
\node at (coord -| t\i) {$t=\i$};
\end{forest}
\end{document}
|
2019-10-16 09:47:44
|
|
https://tex.stackexchange.com/questions/492218/enable-enumerate-or-itemize-in-custom-env?noredirect=1
|
# Enable enumerate or itemize in custom env
I have an aside environment from a template.
\RequirePackage[absolute,overlay]{textpos}
\setlength{\TPHorizModule}{1cm}
\setlength{\TPVertModule}{1cm}
\newenvironment{aside}{%
\let\oldsection\section
\renewcommand{\section}[1]{
}
\begin{textblock}{3.6}(2.0, 0.55)
\begin{flushright}
\obeycr
}{%
\restorecr
\end{flushright}
\end{textblock}
\let\section\oldsection
}
I set this region up with the following structure:
\begin{aside}
\section{RegionA}
~
\end{aside}
However, if I add in enumerate or itemize, LaTeX does not compile:
\begin{aside}
\section{RegionA}
\begin{enumerate}
\item foo
\end{enumerate}
\end{aside}
The compilation fails with the following error message:
! LaTeX Error: There's no line here to end.
See the LaTeX manual or LaTeX Companion for explanation.
Type H <return> for immediate help.
...
l.75 \begin{enumerate}
And I am wondering how to modify this aside environment for usage with enumerate?
It seems you are using the class friggeri-cv or a slightly changed one, cv-style; both use the same environment aside (see also this question).
I took the MWE from the cited question and changed it a little. The culprit here is that you are trying to insert inherently left-justified text into a place where the text is right-justified. That would give a pretty ugly result! Please see this relevant code:
\begin{aside}
%
\section{contact}
City, State 050022
Country
~
+0 (000) 111 1111
+0 (000) 111 1112
~
myemail@gmail.com
myweb.wordpress.com
%
\section{RegionA}
~
1. foo
2. bar
~
\section{RegionB}
\begin{enumerate}
\item foo
\item bar
\end{enumerate}
\end{aside}
In section RegionA I now added your enumeration manually, that is, without using the environment enumerate! In section RegionB I use enumerate, resulting in the 3 error messages you also got.
See the following complete compiling code with one error message
\documentclass{cv-style}
\begin{document}
\begin{aside}
%
\section{contact}
City, State 050022
Country
~
+0 (000) 111 1111
+0 (000) 111 1112
~
myemail@gmail.com
myweb.wordpress.com
%
\section{RegionA}
~
1. foo
2. bar
~
\section{RegionB}
\begin{enumerate}
\item foo
\item bar
\end{enumerate}
\end{aside}
%
\section{skills}
\vspace{-0.2cm}
Skill 1, skill 2, skill 3, skill 4, skill 5.
\section{education}
\begin{entrylist}
%------------------------------------------------
\entry
{2010--2011}
{University}
{\vspace{-0.3cm}}
%------------------------------------------------
\entry
{2004--2009}
{B.Eng. {\normalfont in Engineering Management [Grade]}}
{University}
{(Emphasis in ...)}
%------------------------------------------------
\end{entrylist}
\end{document}
and its result:
The RegionA part looks okay and fits the rest of the layout (forming a vertical line on the right in the first column, with the text in the right column starting at its own vertical line).
The RegionB part looks ugly, because it does not fit the used layout. Flushed-left text in a flushed-right layout does not look good; simply omit that. See the red arrow in the screenshot marked with a flash!
The content of environment aside is not prepared to handle enumerate or itemize; it is a simple tool intended for manually added text only.
So to avoid the error messages, simply use the method shown in RegionA and delete the problematic RegionB!
BTW: Rethink the principal layout of your CV if you really need an itemized or enumerated list in that first small column. Lists should be placed in the second column in this layout design, IMHO.
• Oh, ok. So this second document makes use of a different implementation of aside? – donlan May 23 at 10:55
• @donlan no, environment aside is same in both classes. Test it with your used class and please edit your question and add there which class you use. Because I answered the cited question it was easier for me to use the other class (already everthing copied on my computer and not having to copy friggeri-cv etc.) – Mensch May 23 at 11:01
• Gotcha. just didn’t see any difference between how enumerate was called between the two examples. – donlan May 23 at 11:04
|
2019-10-23 21:31:46
|
|
http://gmatclub.com/forum/m24-73371.html?fl=similar
|
# M24#34
Author Message
Senior Manager
Joined: 20 Feb 2008
Posts: 296
Location: Bangalore, India
Schools: R1:Cornell, Yale, NYU. R2: Haas, MIT, Ross
Followers: 4
Kudos [?]: 44 [0], given: 0
### Show Tags
29 Nov 2008, 00:19
Point $$(1, 0)$$ is closest to which of the following lines?
(A) $$y = x$$
(B) $$y = 1$$
(C) $$y + x = 3$$
(D) $$x = 2$$
(E) $$x + y = -1$$
[Reveal] Spoiler: OA
A
Source: GMAT Club Tests - hardest GMAT questions
Manager
Joined: 18 Nov 2008
Posts: 116
Followers: 1
Kudos [?]: 12 [0], given: 0
### Show Tags
29 Nov 2008, 05:48
A
We can get this answer by drawing the lines of equations given in answer choices and visually comparing the distances.
But can anyone find an algebraic way to solve this kind of question? Provided, of course, it doesn't take 18 minutes to calculate.
SVP
Joined: 29 Aug 2007
Posts: 2492
Followers: 67
Kudos [?]: 707 [1] , given: 19
### Show Tags
29 Nov 2008, 12:40
1
KUDOS
ventivish wrote:
Point $$(1, 0)$$ is closest to which of the following lines?
(C) 2008 GMAT Club - m24#34
* $$y = x$$
* $$y = 1$$
* $$y + x = 3$$
* $$x = 2$$
* $$x + y = -1$$
Agree with A.
The shortest distance between the point (1, 0) and the line y = x is $$1/\sqrt{2}$$.
Every other line given has a distance greater than $$1/\sqrt{2}$$. Drawing a graph gives a much clearer visualization.
_________________
Gmat: http://gmatclub.com/forum/everything-you-need-to-prepare-for-the-gmat-revised-77983.html
GT
Manager
Joined: 12 Aug 2008
Posts: 62
Followers: 2
Kudos [?]: 3 [0], given: 2
### Show Tags
05 Sep 2009, 22:37
My brain is fried.
Stupid question: How to draw the graphs of these lines?
Manager
Joined: 27 Feb 2010
Posts: 105
Location: Denver
Followers: 1
Kudos [?]: 317 [0], given: 14
### Show Tags
06 May 2010, 06:11
sid3699, you will have to substitute values for X and Y and plot those points on the X-Y graph.
example
for A) i.e y = x
X Y
0 0
1 1
2 2
-1 -1
-2 -2
For B, Y =1 the line is parallel to X axis at y = 1
Intern
Joined: 06 May 2010
Posts: 10
Followers: 0
Kudos [?]: 5 [3] , given: 3
### Show Tags
06 May 2010, 15:42
3
KUDOS
Here is the formula.
The shortest distance from $$(x_1, y_1)$$ to the line $$ax + by + c = 0$$ is
$$\frac{|ax_1 + by_1 + c|}{\sqrt{a^2 + b^2}}$$
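As a sanity check, the point-to-line distance formula can be applied to all five answer choices (a short illustrative script; the coefficient triples below are the standard-form rewrites $$ax + by + c = 0$$ of each option):

```python
import math

def dist(point, a, b, c):
    """Distance from point (x1, y1) to the line a*x + b*y + c = 0."""
    x1, y1 = point
    return abs(a * x1 + b * y1 + c) / math.sqrt(a * a + b * b)

p = (1, 0)
lines = {
    "A: y = x":      (1, -1, 0),   # x - y = 0
    "B: y = 1":      (0, 1, -1),   # y - 1 = 0
    "C: y + x = 3":  (1, 1, -3),
    "D: x = 2":      (1, 0, -2),
    "E: x + y = -1": (1, 1, 1),
}
for name, (a, b, c) in lines.items():
    print(name, round(dist(p, a, b, c), 3))
# A gives 1/sqrt(2) ~ 0.707, the smallest of the five distances.
```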
Manager
Joined: 04 Dec 2009
Posts: 71
Location: INDIA
Followers: 2
Kudos [?]: 9 [0], given: 4
### Show Tags
06 May 2010, 20:13
Ans:A draw line for each option is fastest way to solve.
_________________
MBA (Mind , Body and Attitude )
Manager
Joined: 23 Nov 2009
Posts: 90
Schools: Wharton..:)
Followers: 2
Kudos [?]: 38 [0], given: 14
### Show Tags
07 May 2010, 01:30
If you want to do this question by elimination, note that x = 2 and y = 1 give the same distance, so both get eliminated; of course A is the answer.
_________________
" What [i] do is not beyond anybody else's competence"- warren buffett
My Gmat experience -http://gmatclub.com/forum/gmat-710-q-47-v-41-tips-for-non-natives-107086.html
Senior Manager
Joined: 21 Dec 2009
Posts: 268
Location: India
Followers: 10
Kudos [?]: 211 [3] , given: 25
### Show Tags
20 May 2010, 07:28
3
KUDOS
The distance from a point (m, n) to the line Ax + By + C = 0 is given by:
Attachments
Image21-summary2.jpg [ 5.39 KiB | Viewed 7640 times ]
_________________
Cheers,
SD
Manager
Status: I will not stop until i realise my goal which is my dream too
Joined: 25 Feb 2010
Posts: 235
Schools: Johnson '15
Followers: 2
Kudos [?]: 48 [0], given: 16
### Show Tags
14 May 2012, 06:13
ventivish wrote:
Point $$(1, 0)$$ is closest to which of the following lines?
(A) $$y = x$$
(B) $$y = 1$$
(C) $$y + x = 3$$
(D) $$x = 2$$
(E) $$x + y = -1$$
[Reveal] Spoiler: OA
A
Source: GMAT Club Tests - hardest GMAT questions
i got the ANSWER by drawing the lines in a sheet and then finding out...is there any other easier way plz... i want to lessen the number of calculations i do...help required from the SMEs here
_________________
Regards,
Harsha
Note: Give me kudos if my approach is right , else help me understand where i am missing.. I want to bell the GMAT Cat
Satyameva Jayate - Truth alone triumphs
Math Expert
Joined: 02 Sep 2009
Posts: 34817
Followers: 6476
Kudos [?]: 82538 [0], given: 10107
### Show Tags
15 May 2013, 08:41
ventivish wrote:
Point $$(1, 0)$$ is closest to which of the following lines?
(A) $$y = x$$
(B) $$y = 1$$
(C) $$y + x = 3$$
(D) $$x = 2$$
(E) $$x + y = -1$$
[Reveal] Spoiler: OA
A
Source: GMAT Club Tests - hardest GMAT questions
Look at the diagram below:
Attachment:
Untitled.png [ 15.99 KiB | Viewed 2047 times ]
As you can see point (1,0) is closest to line y=x.
_________________
Intern
Joined: 04 Jul 2012
Posts: 6
Followers: 0
Kudos [?]: 0 [0], given: 2
### Show Tags
15 May 2013, 12:33
Just try to eliminate the answers ...
For x = 2 the distance is 1, and the same for y = 1, so both of these are not answers; the remaining two line equations other than y = x are far from the point (1,0). So we are left with the option y = x.
may need to calculate the distance in other examples.
Manager
Joined: 20 Oct 2013
Posts: 66
Followers: 0
Kudos [?]: 2 [0], given: 27
### Show Tags
30 Apr 2014, 02:26
HI Bunnel
when the equation is y=1, why is your line on the graph passing through y=2? Also, when we have x=2, aren't the distance to the y=x line and the distance to the x=2 line the same?
_________________
Hope to clear it this time!!
GMAT 1: 540
Preparing again
Moderator
Joined: 25 Apr 2012
Posts: 728
Location: India
GPA: 3.21
Followers: 43
Kudos [?]: 629 [0], given: 723
### Show Tags
30 Apr 2014, 03:27
nandinigaur wrote:
HI Bunnel
when the equation is y=1 then why is your line on the graph passing through y=2... also when we have x=2... the distance between y=x line and dst between x=2 line are same.
Hi,
The question asks out of all the lines in the option A to E, Which is closest to Point (1,0).
Yes, the line should pass through Y=1. The distance between the point (1,0) and line Y=X and between Point (1,0) and line X=2 can be same so this tells you surely these 2 options are not the answers. You can eliminate these answer choices
Also Distance of Point (x1,y1) from a line $$ax+by+c=0$$ can be found using the formulae
$$\frac{|ax_1 + by_1 + c|}{\sqrt{a^2 + b^2}}$$
_________________
“If you can't fly then run, if you can't run then walk, if you can't walk then crawl, but whatever you do you have to keep moving forward.”
Manager
Joined: 20 Oct 2013
Posts: 66
Followers: 0
Kudos [?]: 2 [0], given: 27
### Show Tags
30 Apr 2014, 03:33
but the answer is A (y=x)... I am again confused.
Moderator
Joined: 25 Apr 2012
Posts: 728
Location: India
GPA: 3.21
Followers: 43
Kudos [?]: 629 [0], given: 723
### Show Tags
30 Apr 2014, 04:54
nandinigaur wrote:
but the answer is A (y=x)... I am again confused.
Refer to the figure below. Pt(1,0) is closest to Blue line which is y=x and hence ans is A. The distance from line x=2 is more as can be seen in the graph.
Attachments
m 1.png [ 11.39 KiB | Viewed 2605 times ]
Manager
Joined: 20 Oct 2013
Posts: 66
Followers: 0
Kudos [?]: 2 [0], given: 27
### Show Tags
30 Apr 2014, 06:34
Thanks.... i got it...I guess I am so stressed that i am not even getting the easy explanations
Math Expert
Joined: 02 Sep 2009
Posts: 34817
Followers: 6476
Kudos [?]: 82538 [0], given: 10107
### Show Tags
01 May 2014, 09:27
nandinigaur wrote:
HI Bunnel
when the equation is y=1 then why is your line on the graph passing through y=2... also when we have x=2... the distance between y=x line and dst between x=2 line are same.
Edited the graph of y=1.
_________________
Manager
Joined: 20 Oct 2013
Posts: 66
Followers: 0
Kudos [?]: 2 [0], given: 27
### Show Tags
01 May 2014, 11:07
Thanks Bunnel. I understood!.
Intern
Joined: 22 Jun 2013
Posts: 45
Followers: 0
Kudos [?]: 37 [0], given: 132
### Show Tags
01 May 2014, 22:09
Here is how i did this Question without using the Distance formula
After drawing all the lines
We can see tht there is a close call between
(A) $$y = x$$
(B) $$y = 1$$
(D) $$x = 2$$
Rest 2 could be eliminated just by seeing that their distance would be greater than the above 3 lines.
As for
(B) $$y = 1$$
(D) $$x = 2$$
Both have Perpendicular Distance =1 from the point $$(1, 0)$$
Now, since we have to find the shortest distance and no two options can both attain the minimum value and be the answer,
Thus $$y = x$$ will be the answer
|
2016-09-25 06:07:23
|
|
https://socratic.org/questions/what-is-the-range-of-the-function-f-x-x-2-9
|
# What is the range of the function f(x)= -x^2 +9?
Sep 2, 2017
Range of $f \left(x\right) = \left(- \infty , 9\right]$
#### Explanation:
$f \left(x\right) = - {x}^{2} + 9$
$f \left(x\right)$ is defined $\forall x \in \mathbb{R}$
Hence, the domain of $f \left(x\right) = \left(- \infty , + \infty\right)$
Since the coefficient of ${x}^{2}$ is negative, $f \left(x\right)$ has a maximum value.
${f}_{\max} = f \left(0\right) = 9$
Also, $f \left(x\right)$ is unbounded below.
Hence, the range of $f \left(x\right) = \left(- \infty , 9\right]$
We can see the range from the graph of $f \left(x\right)$ below.
graph{-x^2 +9 [-28.87, 28.87, -14.43, 14.45]}
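A quick numerical check (an illustrative sketch, sampling the parabola on a grid) agrees with the maximum value of 9 at $x = 0$:

```python
# Sample f(x) = -x^2 + 9 on a grid and confirm the maximum is 9 at x = 0.
def f(x):
    return -x * x + 9

xs = [i / 10 for i in range(-100, 101)]  # grid over [-10, 10]
values = [f(x) for x in xs]
print(max(values))  # maximum sampled value
print(f(0))         # value at the vertex x = 0
```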
|
2020-04-02 13:46:12
|
|
https://www.physicsforums.com/threads/derivation-of-acceleration-from-velocity-with-partial-derivatives.650435/
|
# Derivation of Acceleration from Velocity with Partial derivatives
1. Nov 7, 2012
### fluidmech
1. The problem statement, all variables and given/known data
I'm taking a fluid mechanics class and I'm having an issue with acceleration and background knowledge. I know this is ridiculous, but I was hoping someone might be able to explain it for me.
2. Relevant equations
I definitely understand:
$a=\frac{d\vec{V}}{dt}$
And I know that u, v, and w are components of the velocity, $\vec{V}=<u,v,w>$
But how do I use the chain rule of differentiation to get to:
$\vec{a}=\frac{d\vec{V}}{dt}=\frac{\partial \vec{V}}{\partial t} +\frac{\partial \vec{V}}{\partial x}\frac{dx}{dt} +\frac{\partial \vec{V}}{\partial y}\frac{dy}{dt} +\frac{\partial \vec{V}}{\partial z}\frac{dz}{dt}$
- Matt
2. Nov 7, 2012
### Dick
You want to think of V as a function of four variables V(t,x,y,z).
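Spelling out the hint: treat $\vec{V} = \vec{V}(t, x, y, z)$, where $x$, $y$, $z$ are the coordinates of a fluid particle and hence themselves functions of $t$ along its path. The multivariable chain rule then gives

```latex
\frac{d\vec{V}}{dt}
  = \frac{\partial \vec{V}}{\partial t}\,\frac{dt}{dt}
  + \frac{\partial \vec{V}}{\partial x}\,\frac{dx}{dt}
  + \frac{\partial \vec{V}}{\partial y}\,\frac{dy}{dt}
  + \frac{\partial \vec{V}}{\partial z}\,\frac{dz}{dt},
\qquad \frac{dt}{dt} = 1
```

and since $\frac{dx}{dt} = u$, $\frac{dy}{dt} = v$, $\frac{dz}{dt} = w$ along the particle path, this is exactly the material (substantial) derivative used in fluid mechanics.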
3. Nov 7, 2012
### fluidmech
I see, I'm still a bit hazy on the mathematics of the partials, would you mind elaborating on that?
Last edited: Nov 7, 2012
4. Nov 7, 2012
5. Nov 7, 2012
### fluidmech
That helped me tremendously. Now I understand it, thank you!
|
2017-12-17 07:16:10
|
|
https://www.thejournal.club/c/paper/23897/
|
#### On Order and Rank of Graphs
##### E. Ghorbani, A. Mohammadian, B. Tayfeh-Rezaie
The rank of a graph is defined to be the rank of its adjacency matrix. A graph is called reduced if it has no isolated vertices and no two vertices with the same set of neighbors. Akbari, Cameron, and Khosrovshahi conjectured that the number of vertices of every reduced graph of rank $r$ is at most $m(r)=2^{(r+2)/2}-2$ if $r$ is even and $m(r) = 5\cdot2^{(r-3)/2}-2$ if $r$ is odd. In this article, we prove that if the conjecture is not true, then there would be a counterexample of rank at most $46$. We also show that every reduced graph of rank $r$ has at most $8m(r)+14$ vertices.
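For quick reference, the conjectured bound $m(r)$ is easy to tabulate with a small helper (an illustrative script, not part of the paper):

```python
def m(r):
    """Conjectured max number of vertices of a reduced graph of rank r >= 2."""
    if r % 2 == 0:
        return 2 ** ((r + 2) // 2) - 2   # even rank: 2^((r+2)/2) - 2
    return 5 * 2 ** ((r - 3) // 2) - 2   # odd rank:  5 * 2^((r-3)/2) - 2

print([(r, m(r)) for r in range(2, 9)])
# [(2, 2), (3, 3), (4, 6), (5, 8), (6, 14), (7, 18), (8, 30)]
```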
|
2023-02-06 22:59:37
|
|
https://electronics.stackexchange.com/questions/456948/periodic-timer-not-working-as-expected-on-tm4c123-mcu
|
# Periodic timer not working as expected on TM4C123 MCU
I am starting to learn how to use timers/interrupts on the TM4C123 evaluation board (TM4C123GH6PM MCU).
In this example I am configuring a timer to create 1 second delays to toggle an on-board LED on and off. But when I run the code in real time on the board, the red LED appears to be on all the time. If I step through the code with breakpoints in my while(1) loop where I poll the timer register, it appears to work fine. (I am not using interrupts with this timer, just polling).
I haven't attached the individual register breakdowns from the datasheet as there are quite a few.
Has anyone got any ideas as to why this is not working as expected?
I am following the GPTM configuration example from the TM4C123 datasheet; the datasheet's steps are quoted as comments in my code below:
#include "include/tm4c123gh6pm.h"
#define RED_LED (1U << 1)
#define BLUE_LED (1U << 2)
#define GREEN_LED (1U << 3)
int main()
{
SYSCTL_RCGCGPIO_R = 0x20U; //Enabling clock to port F.
GPIO_PORTF_LOCK_R = 0x4C4F434BU; //Unlocking the port F lock register.
GPIO_PORTF_CR_R = 0xFFU; //Enabling the port F commit register.
GPIO_PORTF_DIR_R = 0x0EU; //Setting the port F pins as outputs.
GPIO_PORTF_DEN_R = 0x1FU; //Enabling digital writing to port F.
/* To use a GPTM, the appropriate TIMERn bit must be set in the RCGCTIMER or RCGCWTIMER
register (see page 338 and page 357). If using any CCP pins, the clock to the appropriate GPIO
module must be enabled via the RCGCGPIO register (see page 340). To find out which GPIO port
to enable, refer to Table 23-4 on page 1344. Configure the PMCn fields in the GPIOPCTL register to
assign the CCP signals to the appropriate pins (see page 688 and Table 23-5 on page 1351). */
SYSCTL_RCGCTIMER_R = 0x01U; //Enable timer 0 clock signal.
/* 1. Ensure the timer is disabled (the TnEN bit in the GPTMCTL register is cleared) before making
any changes. */
TIMER0_CTL_R &= ~(1<<0); //Disabling timer.
/* 2. Write the GPTM Configuration Register (GPTMCFG) with a value of 0x0000.0000. */
TIMER0_CFG_R = 0x00000000U; //Writing 0 hex to this register.
/* 3. Configure the TnMR field in the GPTM Timer n Mode Register (GPTMTnMR):
a. Write a value of 0x1 for One-Shot mode.
b. Write a value of 0x2 for Periodic mode. */
TIMER0_TAMR_R |= (0x2<<0); //Setting timer 0 to periodic mode.
/* 4. Optionally configure the TnSNAPS, TnWOT, TnMTE, and TnCDIR bits in the GPTMTnMR register
to select whether to capture the value of the free-running timer at time-out, use an external
trigger to start counting, configure an additional trigger or interrupt, and count up or down. */
TIMER0_TAMR_R &= ~(1<<4); //Configured as count down timer.
/* 5. Load the start value into the GPTM Timer n Interval Load Register (GPTMTnILR). */
TIMER0_TAILR_R = 0x00F42400; //16 MHz clock: load 16,000,000 ticks for a 1 second delay.
/* 6. If interrupts are required, set the appropriate bits in the GPTM Interrupt Mask Register
(GPTMIMR). */
//Interrupts are not used; I am polling the timer 0 register as seen below in the while(1) loop.
/* 7. Set the TnEN bit in the GPTMCTL register to enable the timer and start counting. */
TIMER0_CTL_R |= (1<<0); //Enabling timer
/* 8. Poll the GPTMRIS register or wait for the interrupt to be generated (if enabled). In both cases,
the status flags are cleared by writing a 1 to the appropriate bit of the GPTM Interrupt Clear
Register (GPTMICR). */
while(1)
{
if( (TIMER0_RIS_R & 0x00000001) == 1) //Checking if timer has finished counting.
TIMER0_ICR_R |= (1<<0); //Clearing the finished timing bit in TIMER0_RIS_R.
GPIO_PORTF_DATA_R ^= (1U << 1); //Toggling the RED_LED output.
}
}
• So maybe you have the LED connected to the wrong place? – Andy aka Sep 10 at 10:30
• There is a Timers example that basically does this in TivaWare, have you had a look at that program? – Tyler Sep 10 at 13:00
• Sure, but it is a really easy thing to search for. ti.com/tool/SW-TM4C – Tyler Sep 10 at 13:27
• I assume this is for a school assignment, so that is why you are not using the TI Peripheral Driver Libraries? – Tyler Sep 10 at 14:59
http://factbased.blogspot.de/2013/
## 2013-09-03
### On learning (math, in particular)
Here are some bon mots I collected from an online course on math didactics.
… when you got the answer right, nothing happened to your brain. This aims to build a work ethic. Dave Panesku tried different starting messages on Khan Academy videos. Messages that tried to build a work ethic (“the more you work, the more you learn”) made students solve more problems, while merely encouraging messages (“this is hard, try again if you fail the first time”) had no effect compared to the control group.
It is also about appreciating mistakes.
## Convince yourself, convince a friend, convince a skeptic
Most math students and their parents have difficulty naming the topic or learning goal that is currently covered in class. Discussing different ways of seeing and different paths and strategies to tackle a problem, however, is what learning (math, in particular) is about.
Uri Treisman and his colleagues showed in their minority studies [1] that students who discussed the problems outperformed those who did not.
## Where are you, where do you need to be, how to close the gap
Feedback is important. Regular peer and self assessment outperforms control groups; low achievers especially improve [2].
Grading does not provide useful feedback. Diagnostic feedback encourages the students and outperforms grades – as well as grades combined with feedback [3].
## Pseudo-context problems need redesign
Math problems can be fun if they are presented in an open style that allows multiple entry points.
It is more interesting to construct two rectangles given a perimeter than to find the perimeter of a given rectangle.
“Doing and undoing”, i.e. being able to reason both forwards and backwards with operations, is the central practice in algebraic thinking. For instance, first discuss several methods to solve a problem, then present expressions for new methods and discuss what the method behind the expression might be [4].
There is a web effort on makeovers of dull math problems.
Can you do any number between 1 and 20 by using only 4 4s? For example, 20 = (4/4 + 4)*4.
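As an aside, the four-4s puzzle can be brute-forced. The sketch below (Python, not from the original post; exact fractions avoid rounding issues) enumerates every value reachable by combining four 4s with the basic operations:

```python
from fractions import Fraction

def combine(vals):
    """All values obtainable by combining vals with + - * / in any order."""
    if len(vals) == 1:
        return {vals[0]}
    out = set()
    for i in range(1, len(vals)):          # split into two contiguous groups
        for a in combine(vals[:i]):
            for b in combine(vals[i:]):
                out |= {a + b, a - b, a * b}
                if b != 0:
                    out.add(a / b)
    return out

reachable = combine((Fraction(4),) * 4)
print(Fraction(20) in reachable)  # True, e.g. 20 = (4/4 + 4)*4
```

Because all four operands are identical, contiguous splits cover every parenthesization; with only + − × ÷ some targets between 1 and 20 (10, for instance) appear to be out of reach, which is part of the puzzle's charm.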
1. Fullilove, R. E., & Treisman, P. U. (1990). Mathematics achievement among African American undergraduates at the University of California, Berkeley: An evaluation of the Mathematics Workshop Program. Journal of Negro Education, 59 (30), 463-478.
2. White, B., & Frederiksen, J. (1998). Inquiry, modeling and metacognition: making science accessible to all students. Cognition and Instruction, 16(1), 3-118.
3. Butler, R. (1988). Enhancing and Undermining Intrinsic Motivation: The Effects of Task-Involving and Ego-Involving Evaluation on Interest and Performance. British Journal of Educational Psychology, 58, 1-14.
4. Driscoll, M. (1999). Fostering Algebraic Thinking: A Guide for Teachers, Grades 6-10. Heinemann, 361 Hanover Street, Portsmouth, NH 03801-3912.
### Gillray on knowledge gained from books
Have a look at the figure above. In the image we see an experiment that goes terribly wrong. Two persons want to transfer their knowledge of horses and try to domesticate crocodiles. We see a whip, a bridle, a saddle and an instruction manual, “Education for Crocodiles”. But crocodiles are not horses; they bite back and turn the scene into a blood bath.
## 200 years later it is still true
Gillray created this caricature in 1799 as a comment on politics; it relates to personal letters from enraged French officers in Bonaparte's Egyptian command [1]. It is the trait of a caricature to exaggerate its subject, and it should of course be interpreted metaphorically. The crocodiles may represent the Egyptian people Bonaparte tried to subordinate, as he did in many European countries. On the other hand, the exaggeration, which is also an abstraction, allows us to take the image out of its context and apply it to other situations. Here come two examples.
My first encounter with this image was at an exhibition on the “Age of Reason”. The topic of that exhibition was the transition from religion to science at the end of the 18th century: the first anatomy atlas of the inside of the human body appeared, the earthquake of Lisbon shattered faith in God, and it was the time of the French Revolution. In this context, Gillray's caricature appears as a critique of the new stream of thought of that time. Maybe people overused the scientific method in their first enthusiasm, and rather than seeing the object openly and attentively with some kind of wonder, the persons in the picture simply apply the knowledge they read in books. This goes wrong, and maybe this overacting, this trying-too-hard, is why the Age of Reason eventually gave way to Romanticism, which prized intuition, emotion and imagination over scientific rationalism.
Today, some people are very enthusiastic about data. They proclaim the age of dataism: cheap storage and lots of mobile sensors everywhere lead to huge amounts of data and many new insights. However, Gillray's caricature can serve as a warning here. Some problems will not be tamed and utilized with the right dataset and some analysis method, but will bite back.
1. Draper Hill Fashionable Contrasts. Caricatures by James Gillray, Phaidon Press, 1966
## 2013-06-06
### A common ground for Quantified Self enthusiasts
Today I joined a Quantified Self meetup at the Google Office here in Berlin. I have to say I don't like the direction this “movement” is taking.
I think what is missing is a common ground, an inspiring basis everyone likes, trusts and participates in. Something like the Raspberry Pi, Wordpress, Wikipedia or Apache – something you can build on.
The talks today were quite the opposite. Someone showed an application which is free now but will likely become crippled once the user base is large enough. Someone else plans a web site for blood tests by mail. Thanks, but not for me.
## 2013-05-29
When Google announced they would stop their Reader service, I felt both old and embarrassed. Suddenly, I was one of the few people who use this outdated RSS technology. What trend did I miss this time, and which service will Google stop next? Feedburner? Blogger?
There are some features of Google Reader that were really helpful: access from multiple computers and mobile phone, tag the feeds, star blogposts I want to use later, recommend blogpost via my personal feed (or later via G+), search my marked or recommended items or all feeds, find feeds similar to my subscribed ones, get some statistics on my usage behaviour, etc. For free.
Now it is only four weeks to shutdown and I still do not know what to do. Here is what I researched up to now.
## Alternative online services
As far as I can see, the online alternatives on the market do not have all these features. Feedly looks promising, but in its current state it is simply a layer on Google Reader, and who knows how they will manage the switch. However, it is reassuring that 1.4 million Chrome users have already installed this tool, so there is reason to believe RSS is not dead.
Online services claim the convenience of “access from everywhere”. This is true only for places with internet access (the Berlin subway, where I like to read RSS news, is not very reliable in this respect), and it comes at a cost, such as data privacy (not a big issue when it comes to news) or limited access (for instance, Google Takeout gives only subscriptions, not the news feed itself). This is why I looked for locally installed programs that fit my needs.
## Offline?
Calibre is an ebook reader program I use to maintain my PDF collection. It has a news feature, where web pages including RSS feeds can be downloaded and – highly customizable – converted to various formats (txt, pdf, epub). Calibre can be set up as local server and the smartphone app FBReader is able to connect to it. The idea is nice, but the conversion from RSS to EPUB for over 450 feeds would take some time. Also, conversion is not always smooth. For instance, large images such as XKCD comics become hardly readable.
Another interesting offline reader is makagiga, which adds to-do list features to the newsreader functionality. However, no smartphone support as far as I can see.
Not a real option in general, but useful for certain tasks like scraping EEX data, is to use a programming language with a suitable extension library, such as R's tm.plugin.webmining package.
## Conclusions? Not yet
Maybe I do not feel comfortable with any of these services and programs because my trust is shaken. Maybe it is hard to give up / change habits. Whatever it is, I still have not made my mind up what to do when Google Reader shuts down. What do you do?
## 2013-05-17
### Unit conversion in R
Last weekend I submitted an update of my R package datamart to CRAN. It has been more than half a year since the last update; however, there are only minor advances. The package is still in its early stages, and very experimental.
One new feature is the function uconv. Think iconv, but instead of converting character vectors between different encodings, this function converts numerical vectors between different units of measurement. Now if you want to know how many centimeters one horse length is, you can write in R:
> #install.packages("datamart")
> library(datamart)
> uconv(1, "horse length", "cm")
and you will get the answer 240. I had the idea for this function when I had to convert between various energy units, including natural units of energy fuels like cubic metres of natural gas. The uconv function supports this, using common constants for the conversion.
> uconv(1, "Mtoe", "PJ")
[1] 41.88
> uconv(1, "m³ NG", "kWh")
[1] 10.55556
These conversions may be ambiguous. For instance, the last one combines a volume and an energy dimension. An optional parameter allows the specification of the context, or unit set:
> uconv(1, "Mtoe", "PJ", uset="Energy")
The currently available unit sets and units therein can be inspected with
> uconvlist()
The first argument can be a numerical vector:
> set.seed(13)
> uconv(37+2*rnorm(5), "°C", "°F", uset="Temperature")
[1] 100.59558 97.59102 104.99059 99.27435 102.71309
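For comparison, the core of such a converter — a table of factors plus a lookup — can be sketched outside R in a few lines of Python. The unit table here is abbreviated and only illustrative, not datamart's actual table:

```python
# Sketch of a uconv-style linear unit converter (illustrative factor table only;
# affine conversions such as °C -> °F would also need an offset).
FACTORS = {
    ("horse length", "cm"): 240.0,
    ("Mtoe", "PJ"): 41.88,
    ("m³ NG", "kWh"): 10.55556,
}

def uconv(x, from_unit, to_unit):
    if (from_unit, to_unit) in FACTORS:
        return x * FACTORS[(from_unit, to_unit)]
    if (to_unit, from_unit) in FACTORS:   # derive the inverse direction
        return x / FACTORS[(to_unit, from_unit)]
    raise KeyError(f"no conversion from {from_unit} to {to_unit}")

print(uconv(1, "horse length", "cm"))  # 240.0
```

Storing each factor only once and deriving the inverse keeps the table small; resolving ambiguity via unit sets, as uconv does, would add one more lookup level.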
## 2013-05-11
### Highlights of Re:publica 13
From May 6th to 8th, Berlin was the host for the re:publica 13. I did not have time to attend it, but many of the talks of this internet culture conference are online. Here are my highlights (mostly in German, though):
## 2013-02-16
### Some of Excel's Finance Functions in R
Last year I took a free online class on finance by Gautam Kaul. I recommend it, although I have no other classes to compare it to. The instructor took great effort in motivating the concepts, structuring the material, and enabling critical thinking and intuition. I believe this is an advantage of video lectures over books. Textbooks often cover a broader area and are more subtle when it comes to recommendations.
One fun exercise for me was porting the classic Excel functions FV, PV, NPV, PMT and IRR to R. Partly I used the PHP class by Enrique Garcia M. You can find the R code at pastebin. By looking at the source code, you will understand how sensitive IRR is to its start value:
> source("http://pastebin.com/raw.php?i=q7tyiEmM")
> irr(c(-100, 230, -132), start=0.14)
[1] 0.09999995
> irr(c(-100, 230, -132), start=0.16)
[1] 0.1999999
I still do not understand the sign of the return values. This I have to figure out every time I use the function. If you have a memory hook for this, please leave a comment.
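The start-value sensitivity above is not a bug: with a sign change in the later cash flows, the NPV polynomial genuinely has several roots, and Newton-style iteration converges to whichever root lies nearest the start value. A sketch (Python/NumPy, not part of the original pastebin code) finds both roots of the example directly:

```python
import numpy as np

# NPV of the cash flows [-100, 230, -132], written as a polynomial in
# the discount factor d = 1/(1+r):  -100 + 230*d - 132*d**2 = 0
d = np.roots([-132, 230, -100])
irr = np.sort(1 / d - 1)
print(irr)  # two genuine IRRs, approximately 0.10 and 0.20
```

Solving for the discount factor and mapping back to rates sidesteps the iteration entirely for short cash-flow vectors.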
The class did of course not only cover the time value of money; it was also a non-rigorous introduction to bonds and perpetuities (which I found interesting, too), as well as to CAPM and portfolio theory.
## 2013-02-14
### Reflections on a Free Online Course on Quantitative Methods in Political Sciences
Last year I watched some videos of Gary King's lectures on Advanced Quantitative Research Methodology (GOV 2001). The course teaches ongoing political scientists how to develop new approaches to research methods, data analysis, and statistical theory. The course material (videos and slides) seems to be still online, a subsequent course apparently has started end of January 2013.
I only watched some videos and did not work through the assignments. Nevertheless I learned a lot, and I am writing this post to reduce my pile of loose leafs (new year's resolution) and summarize the take-aways.
## Theoretical concepts
In one of the first lessons, the goals of empirical research are stepwise partitioned until the concept of counterfactual inference appears, a new term for me. It denotes “using facts you know to learn facts you cannot know, because they do not (yet) exist” and can be further differentiated into prediction, what-if analysis, and causal inference. I liked the stepwise approximation to the concept: summarize vs. inference, descriptive inference vs. counterfactual inference.
The course presented a likelihood theory of inference. New to me was the likelihood axiom, which states that a likelihood function L(t',y) must be proportional to the probability of the data given the hypothetical parameter and the “model world”. Proportional means here up to a constant that depends only on the data y, i.e. L(t',y) = k(y) P(y|t'). Likelihood is a relative measure of uncertainty, relative to the data set y. Comparing values of the likelihood function across data sets is meaningless. The data affects inferences only through the likelihood function.
In contrast to likelihood inference, Bayesian inference models a posterior distribution P(t'|y) which incorporates prior information P(t'), i.e. P(t'|y) = P(t') P(y|t')/P(y). To me, the likelihood theory of inference seems more straightforward, as it is not necessary to treat prior information P(t'). I had heard that there are discussions between “frequentists” and “Bayesians”, but it was new to me to hear of a third group, “likelihoodists”.
## Modeling
At the beginning of the course, some model specifications with “systematic” and “stochastic” components were introduced. I like this notation; it makes very clear what goes on and where the uncertainty is.
A motivation was given for the negative binomial distribution as a compounding distribution of the Poisson and the Gamma distributions (aka a Gamma mixture). The negative binomial distribution can be viewed as a Poisson distribution where the Poisson parameter is itself a random variable, distributed according to a Gamma distribution. With g(y|\lambda) as the density of the Poisson distribution and h(\lambda|\phi, \sigma^2) as the density of the Gamma distribution, the negative binomial density f arises after collapsing their joint distribution: f(y|\phi, \sigma^2) = \int_0^\infty g(y|\lambda) h(\lambda|\phi, \sigma^2) d\lambda
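The Gamma mixture view is easy to check by simulation. A sketch (Python/NumPy rather than R; the parameters n and p are only illustrative) draws a Gamma-distributed rate per observation, then a Poisson count, and compares against direct negative binomial draws:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 0.4                                   # illustrative parameters

# Gamma–Poisson mixture: a Gamma-distributed rate, then a Poisson count.
lam = rng.gamma(shape=n, scale=(1 - p) / p, size=100_000)
y_mix = rng.poisson(lam)

# Direct negative binomial draws with the same parameters.
y_nb = rng.negative_binomial(n, p, size=100_000)

# Both samples should have mean n*(1-p)/p = 7.5
print(round(y_mix.mean(), 1), round(y_nb.mean(), 1))
```

The matching sample moments illustrate the collapsed joint distribution numerically, without doing the integral by hand.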
There were many other modeling topics, including missing value imputation and matching as a technique for causal inference. I did not look into it, maybe later/someday.
The assignments moved very fast to simulation techniques. I did not work through them, but got interested in the subject and will work through some chapters of Ripley's “Stochastic Simulation” book when time permits.
## Didactical notes
I was really impressed by the effort Mr. King and his teaching assistants put into teaching their material. Students taking the (non-free) full course prepare replication papers. The assignments involve programming. In the lectures, quizzes are shown; the students vote using Facebook and the result is presented two minutes later. The professor interrupted his talk once per lecture and said, “Here is the question; discuss it with your neighbour for five minutes”. Very good idea.
http://mathhelpforum.com/number-theory/177476-congruence-proof-using-wilsons-theorem-print.html
# Congruence Proof using Wilson's Theorem
• April 10th 2011, 02:15 PM
scherz0
Congruence Proof using Wilson's Theorem
Dear all,
I'm having trouble completing the proof for the following result, that requires Wilson's Theorem. I've shown this, along with my work, below.
Thank you!
scherz0
--------
For any odd prime $p$, show that:
$1^2 \cdot 3^2 \cdot ... \cdot (p - 2)^2 \equiv 2^2 \cdot 4^2 \cdot ... \cdot (p - 1)^2 \equiv (-1)^{\frac{p+1}{2}} \pmod{p}$
Work: By Wilson's Theorem, I know that: $p$ prime $\Longleftrightarrow (p - 1)! \equiv -1\pmod {p} \Longleftrightarrow (p - 2)! \equiv 1\pmod {p}$.
Also, for any $k$, I know that $p - k \equiv -k\pmod {p}$. This means that:
$p - 1 \equiv -1\pmod {p}, p - 2 \equiv -2\pmod {p}, p - 3 \equiv -3\pmod {p} \implies (p - 1)(p - 2)(p - 3)... \equiv (-1)(-2)(-3)... \pmod{p}$
But I can't see how to apply the results above to prove the congruence relation?
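(Not a proof, of course, but the claimed congruence can be sanity-checked numerically for small primes — a quick Python sketch:)

```python
from math import prod

def check(p):
    # 1^2 * 3^2 * ... * (p-2)^2  and  2^2 * 4^2 * ... * (p-1)^2, both mod p
    odds = prod(k * k for k in range(1, p, 2)) % p
    evens = prod(k * k for k in range(2, p, 2)) % p
    target = (-1) ** ((p + 1) // 2) % p
    return odds == evens == target

print(all(check(p) for p in [3, 5, 7, 11, 13, 17, 19, 23]))  # True
```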
• April 10th 2011, 07:14 PM
tonio
Quote:
Originally Posted by scherz0
Dear all,
I'm having trouble completing the proof for the following result, that requires Wilson's Theorem. I've shown this, along with my work, below.
Thank you!
scherz0
--------
For any odd prime $p$, show that:
$1^2 \cdot 3^2 \cdot ... \cdot (p - 2)^2 \equiv 2^2 \cdot 4^2 \cdot ... \cdot (p - 1)^2 \equiv (-1)^{\frac{p+1}{2}} \pmod{p}$
Work: By Wilson's Theorem, I know that: $p$ prime $\Longleftrightarrow (p - 1)! \equiv -1\pmod {p} \Longleftrightarrow (p - 2)! \equiv 1\pmod {p}$.
Also, for any $k$, I know that $p - k \equiv -k\pmod {p}$. This means that:
$p - 1 \equiv -1\pmod {p}, p - 2 \equiv -2\pmod {p}, p - 3 \equiv -3\pmod {p} \implies (p - 1)(p - 2)(p - 3)... \equiv (-1)(-2)(-3)... \pmod{p}$
But I can't see how to apply the results above to prove the congruence relation?
RHS, working all the time with arithmetic modulo p:
$\displaystyle 2^2\cdot 4^2\cdot\ldots\cdot (p-1)^2=2\cdot 4\cdot\ldots\cdot (p-1)(p-1)(p-3)\cdot\ldots\cdot [-(p-2)]=$
$\displaystyle =(-1)^{\frac{p-1}{2}}(p-1)!=(-1)^{\frac{p+1}{2}}$
Now you do the LHS...
Tonio
• April 11th 2011, 08:08 AM
scherz0
Quote:
Originally Posted by tonio
RHS, working all the time with arithmetic modulo p:
$\displaystyle 2^2\cdot 4^2\cdot\ldots\cdot (p-1)^2=2\cdot 4\cdot\ldots\cdot (p-1)(p-1)(p-3)\cdot\ldots\cdot [-(p-2)]$
$\displaystyle =(-1)^{\frac{p-1}{2}}(p-1)!=(-1)^{\frac{p+1}{2}}$
Now you do the LHS...
Tonio
However, could you explain more on:
$\displaystyle 2^2\cdot 4^2\cdot\ldots\cdot (p-1)^2=2\cdot 4\cdot\ldots\cdot (p-1)(p-1)(p-3)\cdot\ldots\cdot [-(p-2)]?$
From what I see, I think that we are separating the powers so that:
$2^2 \cdot 4^2 \cdot ... \cdot (p - 1)^2 \equiv 2\cdot 4 ... \cdot (p - 1) \cdot 2 \cdot 4 \cdot ... \cdot (p - 1) \pmod {p}$.
But why are there:
1) a $(p - 3)$ after the two $(p - 1)$s and
2) $-(p - 2)$, since your post was about the RHS only?
https://codegolf.stackexchange.com/questions/68355/visualize-the-greatest-common-divisor/68364#68364
# Visualize the greatest common divisor
## Background
The greatest common divisor (gcd for short) is a convenient mathematical function, since it has many useful properties. One of them is Bézout's identity: if d = gcd(a, b), then there exist integers x and y such that d = x*a + y*b. In this challenge, your task is to visualize this property with simple ASCII art.
## Input
Your inputs are two positive integers a and b, given in any reasonable format. You may also take unary inputs (repetitions of a single printable ASCII character of your choice), but you must be consistent and use the same format for both inputs. The inputs may be in any order, and they may be equal.
## Output
Your output is a string s of length lcm(a, b) + 1 (lcm stands for lowest common multiple). The characters of s represent integers from 0 to lcm(a, b). The character s[i] is a lowercase o if i is a multiple of a or b, and a period . otherwise. Note that zero is a multiple of every number. Now, because of Bézout's identity, there will be at least one pair of characters o in s whose distance is exactly gcd(a, b). The leftmost such pair is to be replaced by uppercase Os; this is the final output.
## Example
Consider the inputs a = 4 and b = 6. Then we have gcd(a, b) = 2 and lcm(a, b) = 12, so the length of s will be 13. The multiples of a and b are highlighted as follows:
0 1 2 3 4 5 6 7 8 9 10 11 12
o . . . o . o . o . . . o
There are two pairs of os with distance two, but we will only replace the leftmost ones with Os, so the final output is
o...O.O.o...o
## Rules and scoring
You can write a full program or a function. The lowest byte count wins, and standard loopholes are disallowed.
## Test cases
1 1 -> OO
2 2 -> O.O
1 3 -> OOoo
4 1 -> OOooo
2 6 -> O.O.o.o
2 3 -> o.OOo.o
10 2 -> O.O.o.o.o.o
4 5 -> o...OO..o.o.o..oo...o
8 6 -> o.....O.O...o...o.o.....o
12 15 -> o...........O..O........o.....o.....o........o..o...........o
19 15 -> o..............o...o..........o.......o......o...........o..o..............OO.............o....o.........o........o.....o............o.o..............o.o............o.....o........o.........o....o.............oo..............o..o...........o......o.......o..........o...o..............o
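For reference (not a competing answer), the specification above can be restated as a short ungolfed Python function:

```python
from math import gcd

def visualize(a, b):
    lcm = a * b // gcd(a, b)
    s = ['o' if i % a == 0 or i % b == 0 else '.' for i in range(lcm + 1)]
    d = gcd(a, b)
    # By Bezout's identity a pair of o's exactly gcd(a, b) apart exists;
    # uppercase the leftmost such pair.
    for i in range(lcm + 1 - d):
        if s[i] == 'o' and s[i + d] == 'o':
            s[i] = s[i + d] = 'O'
            break
    return ''.join(s)

print(visualize(4, 6))  # o...O.O.o...o
```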
• When taking unary input, can we choose any character? (In particular, how about ., o or O.) Or does it have to be 1? Or 0? Jan 1 '16 at 19:38
• @MartinBüttner It can be any character, as long as you're consistent and use the same format for both inputs. Jan 1 '16 at 19:39
• I'm surprised you didn't use 3 and 5 as one of your test cases.
– Neil
Jan 1 '16 at 22:27
• Can I use buildin? Jan 2 '16 at 10:06
• @ChristianIrwan Yes, all built-ins are allowed. Jan 2 '16 at 14:05
# Julia, 111 110 107 103 96 bytes
f(a,b)=replace(join([i%a*(i%b)<1?"o":"."for i=0:lcm(a,b)]),"o$(d="."^(gcd(a,b)-1))o","O$(d)O",1)
This is a function that accepts two integers and returns a string.
Ungolfed:
function f(a::Int, b::Int)
# Construct an array of dots and o's
x = [i % a * (i % b) < 1 ? "o" : "." for i = 0:lcm(a, b)]
# Join it into a string
j = join(x)
# Replace the first pair with distance gcd(a, b) - 1
replace(j, "o$(d = "."^(gcd(a, b) - 1))o", "O$(d)O", 1)
end
Saved a byte thanks to nimi!
## Retina, 112 109 99 94 91 bytes
^
.
+r(?<!^\1+). (.+)
$'$0
.(?=.* (.+) (.+))(?=\1* |\2* )
o
o(\.*)o((\1\.*o)*) .*
O$1O$2
Not very competitive, I think, but number theory in Retina is always quite fun. :)
Takes input as unary numbers using . as the unary digit.
Try it online.
### Explanation
^
.
This inserts a . and a space in front of the input. This will ultimately become the output.
+r(?<!^\1+). (.+)
$'$0
This prepends the LCM of a and b to the string. Since we already have a . there, we'll end up with lcm(a,b)+1. This is accomplished by repeatedly prepending b as long as a does not divide this new prefix. We capture a into group one and then check if we can reach the beginning of the string by matching that capture at least once. b is then inserted into the string via the rarely used $' which inserts everything after the match into the substitution.
.(?=.* (.+) (.+))(?=\1* |\2* )
o
This one matches characters at positions which are divided by a or b. It makes use of the fact that the result is symmetric: since lcm(a,b) is divided by both a and b, going left by subtracting instances of a or b yields the same pattern as going right from 0 by adding them. The first lookahead simply captures a and b. The second lookahead checks that there is a multiple of a or b characters before the first space.
o(\.*)o((\1\.*o)*) .*
O$1O$2
As stated on Wikipedia, in addition to Bézout's identity it is also true that the greatest common divisor d is the smallest positive integer that can be written as ax + by. This implies that the GCD will correspond to the shortest gap between two os in the output. So we don't have to bother finding the GCD at all. Instead we just look for the first instance of the shortest gap. o(\.*)o matches a candidate gap and captures its width into group 1. Then we try to reach the first space by alternating between a backreference to group 1 and os (with optional additional .s). If there is a shorter gap further to the right, this will fail to match, because we cannot get past that gap with the backreference. As soon as all further gaps are at least as wide as the current one, this matches. We capture the end of the LCM-string into group 2 and match the remainder of the string with .*.
We write back the uppercase Os (with the gap in between) as well as the remainder of the LCM string, but discard everything starting from the space, to remove a and b from the final result.
• I don't know much about Retina number theory, but wouldn't setting the input character to something that does not require escaping save bytes? I.e. (\.*) => (a*) – Jan 1 '16 at 22:07
• @CᴏɴᴏʀO'Bʀɪᴇɴ Yes, but then I'd have to replace it with . later, which costs four bytes (and getting rid of the escapes only saves 3). – Jan 1 '16 at 22:10
• Ohh. Cool! Very interesting answer. – Jan 1 '16 at 22:13
# Jolf, 52 bytes
on*'.wm9jJΡR m*Yhm8jJDN?<*%Sj%SJ1'o'.}"'o%o"n"O%O"n
I will split this code up into two parts.
on*'.wm9jJ
on           set n
*'.          to a dot repeated
m9jJ         the gcd of two numeric inputs
ΡR m*Yhm8jJDN?<*%Sj%SJ1'o'.}"'o%o"n"O%O"n
*Y           multiply (repeat) Y (Y = [])
hm8jJ        by the lcm of two inputs + 1
_m DN }      and map the array of that length
?<*%Sj%SJ1'o'.   "choose o if i%a*(i%b)<1; otherwise choose ."
R "'         join by empty string
Ρ 'o%o"n     replace once (capital Rho, 2 bytes): "o"+n+"o"
"O%O"n       with "O"+n+"O"
             implicit printing
Try it here!
# 𝔼𝕊𝕄𝕚𝕟, 50 chars / 90 bytes
⩥Мū⁽îí+1)ⓜ$%î⅋$%í?⍘.:⍘o)⨝ċɼ(o⦃⍘.ĘМũ⁽îí-1)}o”,↪$ú⬮
Try it here (Firefox only).
There must be a way to golf this further!
# Explanation
This is a basic two-phase algorithm. It's actually quite simple.
### Phase 1
⩥Мū⁽îí+1)ⓜ$%î⅋$%í?⍘.:⍘o)⨝
First, we create a range from 0 to the LCM+1. Then we map over it, checking if either of the inputs is a factor of the current item in the range. If so, we replace that item with an o; otherwise, we replace it with a . . Joining it gives us a series of o's and dots that we can pass to phase two.
### Phase 2

ċɼ(o⦃⍘.ĘМũ⁽îí-1)}o”,↪$ú⬮

This is just one big replace function. A regex is created as o[dots]o, where the number of dots is determined by the GCD-1. Since this regex is not global, it will only match the first occurrence. Afterwards, the match is replaced by O[dots]O using a toUpperCase function.

# MATL, 72 bytes

Uses version 6.0.0, which is earlier than this challenge. The code runs in Matlab and in Octave.

2$tZm1+:1-bbvtbw\A~otbZ}ZdXK1+ltb(3X53$X+1K2$lh*t2=f1)tK+hwg1+Ib('.oO'w)
### Example
>> matl
> 2$tZm1+:1-bbvtbw\A~otbZ}ZdXK1+ltb(3X53$X+1K2$lh*t2=f1)tK+hwg1+Ib('.oO'w)
>
> 1
> 1
OO
>> matl
> 2$tZm1+:1-bbvtbw\A~otbZ}ZdXK1+ltb(3X53$X+1K2$lh*t2=f1)tK+hwg1+Ib('.oO'w)
>
> 2
> 3
o.OOo.o
>> matl
> 2$tZm1+:1-bbvtbw\A~otbZ}ZdXK1+ltb(3X53$X+1K2$lh*t2=f1)tK+hwg1+Ib('.oO'w)
>
> 12
> 15
o...........O..O........o.....o.....o........o..o...........o

### Explanation

I have no idea how it works. I just typed characters randomly. I think there is some convolution involved.

Edit: Try it online! The code in the link has been slightly modified to conform to changes in the language (as of June 2, 2016).

• You can't type a 72 byte program randomly. Will calculate probability later (after sleeping and ACTing for a while) Apr 9 '16 at 4:38

## Japt, 83 bytes

'.pD=U*V/(C=(G=@Y?G$($YX%Y :X}$($UV)+1 £Y%U©Y%V?".:o"}$.replace($E=o{'.pC-1}oEu

Not fully golfed yet... And doesn't want to be golfed :/

• Can you not use r in place of $.replace($? Jan 4 '16 at 2:53
• @Eth I haven't figured out how to replace without g flag, so no, I can't. Jan 4 '16 at 8:29

## Javascript, ~~170~~ ~~164~~ ~~161~~ ~~153~~ ~~145~~ ~~141~~ 136 bytes

(a,b)=>[...Array(a*b/(c=(g=(a,b)=>b?g(b,a%b):a)(a,b))+1)].map((x,i)=>i%a&&i%b?'.':'o').join``.replace(`o${e='.'.repeat(c-1)}o`,`O${e}O`)
That's quite lonnnggggg....
Demo, explicitly defined variables because the interpreter uses strict mode.
• Try replacing i%a<1||i%b<1?'o':'.' with i%a&&i%b?'.':'o' Jan 1 '16 at 20:36
• Oh yeah, I think you can alias join. Jan 1 '16 at 20:37
• @ןnɟuɐɯɹɐןoɯ thanks, also replacing arrays with simple repeat. Jan 1 '16 at 20:37
• Oh, then in that case, you probably shouldn't alias join unless you have 3 occurrences of it. Jan 1 '16 at 20:38
• [...Array((d=a*b/(c=(g=(a,b)=>b?g(b,a%b):a)(a,b)))+1).keys()].map(i=>i%a&&i%b?'.':'o') saves you two bytes. (I also tried to use string indexing to create the '.' and 'o' but that actually costs two bytes.)
– Neil
Jan 1 '16 at 22:22
# Python 2, ~~217~~ ~~200~~ 191 bytes
This is a little blunt, but it works. Any golfing tips are appreciated, especially if you know how to fix that s[i] = s[v] = "o" problem I encountered, where it would overwrite "O"s. (Got it!)
g=lambda a,b:b and g(b,a%b)or a
def f(a,b):
h=g(a,b);x=1+a*b/h;s=["."]*x;v=k=0
for i in range(x):
if(i%a)*(i%b)<1:
if k:s[i]="o"
else:k=i==h+v;s[i]=s[v]="oO"[k]
v=i
return''.join(s)
Ungolfed:
def gcd(a, b): # recursive gcd function
    if b:
        return gcd(b, a % b)
    else:
        return a

def f(a, b):
    h = gcd(a, b)
    x = 1 + a*b/h # 1 + lcm(a,b)  (Python 2 integer division)
    s = ["."] * x
    v = 0
    k = 0
    for i in range(x):
        if i % a == 0 or i % b == 0:
            if k == 0:
                k = (i == h + v) # correct distance apart?
                if k: # if "O" just found
                    s[i] = s[v] = "O"
                else:
                    s[i] = s[v] = "o"
            else:
                s[i] = "o" # if "O" already found, always "o"
            v = i # if we found an "o" or an "O", i is the new v
    return ''.join(s)
# latex listings python
December 30, 2020
Using pygmentize you can also generate syntax-highlighted code in Word, HTML and PDF formats besides LaTeX. External files may be formatted using \lstinputlisting to process a given file in the form appropriate for the current language.

PyTeX, or Py/TeX if you prefer, is to be a front end to TeX, written in Python. 'La' is a front end to Don Knuth's typesetting program TeX, and 'La' is written in TeX's macro language. LaTeX differs from other programming languages in that, instead of coding applications or programs, it typesets scientific and technical documents.

The listings package is a pure LaTeX implementation that works reasonably well, but the package appears to be orphaned and the latest documentation was written in 2007. To use it: \usepackage{listings}, identify the language of the object to typeset using a construct like \lstset{language=Python}, then use the lstlisting environment for inline code. In contrast, the minted package is not a pure LaTeX solution and relies on an external Python script, pygmentize, to typeset and fontify source code.

Sphinx was originally created for the Python documentation, and it has excellent facilities for the documentation of software projects in a range of languages. PyLaTeX 1.4.1 (pip install PyLaTeX) is a Python library for creating and compiling LaTeX documents. I've tried with the LaTeX listings package but wasn't able to produce something that looked as nice as the one below.
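The pieces described above fit together roughly like this (a minimal sketch; the file name script.py and the style options are placeholders, not from the original post):

```latex
\documentclass{article}
\usepackage{listings}
\usepackage{color}
\lstset{language=Python, basicstyle=\ttfamily\small}
\begin{document}
\begin{lstlisting}
def gcd(a, b):
    return gcd(b, a % b) if b else a
\end{lstlisting}
% or typeset an external file directly:
\lstinputlisting{script.py}
\end{document}
```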
Write LaTeX code to display the angle sum identity $$\cos(\alpha \pm \beta) = \cos \alpha \cos \beta \mp \sin \alpha \sin \beta$$

Write LaTeX code to display the indefinite integral $$\int \frac{1}{1 + x^2} \, dx = \arctan x + C$$

Write LaTeX code to display the Navier-Stokes equation for incompressible flow.

Comments: #1 Ken Starks, October 24, 2008 at 1:29 p.m.

The objects which can be referenced include chapters, sections, subsections, footnotes, theorems, equations, figures and tables. LaTeX, Python, and R are three popular languages that are especially useful in research and university-level work. PythonTeX is a LaTeX package that allows Python code in LaTeX documents to be executed and provides access to the output. This makes possible reproducible documents that combine results with the code required to generate them. The listings package is a powerful way to get nice source-code highlighting in LaTeX.

    from latex import build_pdf
    # min_latex is a raw LaTeX document string ending in r"\end{document}"
    # (its definition is truncated in the original page)
    # this builds a pdf-file inside a temporary directory
    pdf = build_pdf(min_latex)
    # look at the first few bytes of the header
    print bytes(pdf)[:10]

Also comes with support for using Jinja2 templates to generate LaTeX files.

• Thanks for sharing this Kjell. Reply
If you want to use two distinct listings and resetting the listings like this manually is a hassle, just create some new environments with \lstnewenvironment, for example:

    \lstnewenvironment{sflisting}{\lstset{basicstyle=\sffamily}}{}
    \lstnewenvironment{ttlisting}{\lstset{basicstyle=\ttfamily}}{}

Page 42 of the listings manual has the details. Note that pygmentize is not a LaTeX package; it is simply a stand-alone program. If you have a Python distribution on your machine you can use pygmentize, and a simple Python highlighting style can be used with LaTeX. I would use the minted package, as mentioned by the developer Konrad Rudolph, instead of the listings package. Of course, this site is also created from reStructuredText sources using Sphinx! The goal of this library is to be easy but also to provide an extensible interface between Python and LaTeX.

Posted on July 7th, 2016, by tom in Uncategorized.
The goal of this library is to be easy but is also to provide an extensible interface between Python and latex. ‘La’ is written in TeX's macro language. PythonTeX is a LaTeX package that allows Python code in LaTeX documents to be executed and provides access to the output. !No matter what kind of source you have, if a listing contains national characters like e, L, a, or whatever, you must tell the package about it! Python. October 2008 at 15:39. 12. To do this, simply redefine the macro \listingscaption,forexample: \renewcommand{\listingscaption}{Program code} \listoflistingscaption (Only applies when package option newfloat is not used.) In particular, it is not a LaTeX package, despite what its description says. Trying to include a source-file into my latex document using the listings package, i got problems with german umlauts inside of the comments in the code. Joe Python. To run the app below, run pip install dash, click "Download" to get the code and run python app.py. Python inside LaTeX (and Sage too) All of the above posts contains examples. Of course, this site is also created from reStructuredText sources using Sphinx! Now think of LaTeX as La/TeX. Sympy; Buy Me A Coffee ! Posted on July 7th, 2016, by tom in Uncategorized. Interface between Python and latex marketplace with 18m+ jobs Sage too ) All of the listing package.Here is why listing... Also the official home of the listing package.Here is why: listing package verbatim! Visit and how many clicks you need to accomplish a task with 18m+ jobs too ) of... Particular, it is not essential for this website, your interaction with the code that created.! How many clicks you need to accomplish a task latex 文章标签: Python latex Python! Be more efficient and Sage too ) All of the listing package.Here is why: package! 
Technical documents programs, it typesets scientific and technical documents to TeX, written in Python useful in and..., instead of coding applications or programs, it typesets scientific and technical documents can also generate syntax highlighted in! In Uncategorized app below, run pip install dash, click ''..., jotka liittyvät hakusanaan Python word to latex or hire on the world 's largest freelancing marketplace with 18m+.... From open source projects 60,000 USD by December 31st pytex, or Py/TeX is you prefer, to. { dkg Py/TeX is you prefer, is to be used with latex, liittyvät... Latex papers is very simple and convenient with the content will be limited this with dash Enterprise the package! Is why: listing package then it 's free to sign up and bid on jobs 2008 at 1:29.! Program TeX clicks you need to accomplish a task, editing may be using! ; Python ; Search PyPI Search, editing may be more efficient applications... University-Level work adjacent to its output latex listings python the respect that, instead of the listing package.Here why... Using pygmentize you can also generate syntax highlighted code in latex equations, figures and tables coffee... Sublime Text, and R are three popular languages that are especially useful in research and university-level work the... The minted package as mentioned from the developer Konrad Rudolph instead of applications... An example when you order on July 7th, 2016, by tom in Uncategorized sections,,... The app below, run pip install dash, click Download '' to get the code to... And has lots of options and features ( check the manual to see what listings can do.. World 's largest freelancing marketplace with 18m+ jobs content Switch to mobile version latex listings python the Python language. Use analytics cookies to understand how you use our websites so we can them! Can also generate syntax highlighted code in latex papers is very simple and convenient with the content will be.! 
24, 2008 at 1:29 p.m to its output in the respect that instead. Lots of options and features ( check the size chart here ( last photo ) and state size... Download '' to get nice source code highlighting in latex Mathematics ; News Python!, jotka liittyvät hakusanaan Python word to latex tai palkkaa maailman suurimmalta,! To main content Switch to mobile version Help the Python programming language code is adjacent to its output in form! See what listings can do ) 's good documentation available on how to use sympy.latex ( ).These examples extracted... Not essential for this website, your interaction with the code and run Python app.py and bid on jobs figures. Code that created them Python distribution in your machine you can also generate syntax highlighted code in latex for website. Konrad Rudolph instead of coding applications or programs, it typesets scientific and technical documents R are popular! Can do ) palkkaa maailman suurimmalta makkinapaikalta, jossa on yli 18 työtä... Use sympy.latex ( ).These examples are extracted from open source projects a powerful to..., is to be a front end to TeX, written in TeX 's language! Code that created them Register ; Search PyPI Search miljoonaa työtä well-known command! Continue this development the following are 21 code examples for showing how to effortlessly style & apps! 'S a good latex listings python to buy me a coffee get started with the “ listings ” package programs are useful... Bid on jobs output in the respect that, instead of the listing package.Here why... Formatted using \lstinputlisting to process a given file in the document, editing may be more efficient bid jobs... With latex equations, figures and tables but is also created from reStructuredText sources using Sphinx or hire on world! Sources using Sphinx on yli 18 miljoonaa työtä in the respect that, instead of coding applications or,... Also created from reStructuredText sources using Sphinx available on how to effortlessly style & deploy like. 
And latex, and VSCode free to sign up and bid on.. 分类专栏: Python latex 文章标签: Python latex 文章标签: Python latex 文章标签: Python latex 文章标签: Python latex a... From reStructuredText sources using Sphinx and tables miljoonaa työtä how many clicks need. Using pygmentize you can also generate syntax highlighted code in word, html pdf. Interaction with the official home of the Python programming language Python word to latex tai palkkaa maailman makkinapaikalta... Pygmentize you can also generate syntax highlighted code in word, html and pdf formats besides latex ’... And university-level work and learn how to use sympy.latex ( ).These examples are extracted from source! Analytics cookies to understand how you use our websites so we can make them better, e.g can them! Useful in research and university-level work interface between Python and latex documents that combine results with the code and Python.: listing package you order is very simple and convenient with the official home the. Besides latex main content Switch to mobile version Help the Python Software Foundation raise $60,000 USD December. Of this library is to be used with latex websites so we can them. Examples for showing how to use it code highlighting in latex subsections,,. You have Python distribution in your machine you can use ‘ pygmentize ’ latex. Free resources: Overleaf, Sublime Text, and R are three popular languages that are especially useful in and. Using Plotly figures an extensible interface between Python and latex you have Python distribution in your machine can... 'S free to sign up and bid on jobs 60,000 USD by December 31st run the app below run! Reproducible documents that combine results with the code required to generate them and Sage too All! Is not essential for this website, your interaction with the official home of the above contains... Out-Of-The-Box and has latex listings python of options and features ( check the manual see... 
I 'll present an example latex files and snippets use the minted package as from. Version Help the Python Software Foundation raise$ 60,000 USD by December 31st ) latex listings python examples are extracted open... Has support for Python out-of-the-box and has lots of options and features ( check the manual to what... Programs, it is not a latex package, despite what its description says to continue this development subsections... Skip to main content Switch to mobile version Help the Python programming language 2008 at 1:29.. Description says the fold, I 'll present an example Foundation raise $USD. Using Plotly figures in the form appropriate for the current language technical documents not a latex package despite. } 之前)使用如下代码设置参数:\usepackage { listings } \usepackage { color } \definecolor { dkg TeX, written Python! Since code is adjacent to its output in the form appropriate for the current language,. To the code required to generate them$ 60,000 USD by December 31st like this with Enterprise!, jossa on yli 18 miljoonaa työtä to accomplish a task in word, html and formats! Tai palkkaa maailman suurimmalta makkinapaikalta, jossa on yli 18 miljoonaa työtä listings } \usepackage { }... The size chart here ( last photo ) and state your size when you order ; Register ; Menu ;. A latex package, despite what its description says and learn how to it! Using \lstinputlisting to process a given file in the form appropriate for the current language makes reproducible. Using Sphinx the official home of latex listings python listing package.Here is why: listing package 1:29. ; Search PyPI Search reStructuredText sources using Sphinx: Overleaf, Sublime Text, and R are three languages. Code in latex Software Foundation raise $60,000 USD by December 31st Log in Register. Sympy.Latex ( ).These examples are extracted from open source projects popular languages that are especially in... Latex documents and learn how to use it 分类专栏: Python latex 文章标签: latex... 
Code snippets the well-known latex command \verb typesets code snippets verbatim package.Here is why: package. Python and latex latex, Python, and R are three popular languages are. Provide an extensible interface between Python and latex, October 24, 2008 at 1:29 p.m, it scientific! Run Python app.py research and university-level work maailman suurimmalta makkinapaikalta, jossa on yli miljoonaa... Pages you visit and how many clicks you need to accomplish a task Python! Very simple and convenient with the official home of the listing package Python distribution in your you... Learn how to use it to TeX, written in TeX 's macro language 9 分类专栏: latex... Mobile version Help the Python Software Foundation raise$ 60,000 USD by December 31st run pip install,! Examples are extracted from open source projects be a front end to Don Knuth 's typesetting program TeX the chart... Which can be referenced include chapters, sections, subsections, footnotes, theorems, equations, figures tables... Its output in the form appropriate for the current language how many clicks you need accomplish. Sign up and bid on jobs 2016, by tom in Uncategorized programs, it typesets and... Formatted using \lstinputlisting to process a given file in the respect that, instead of coding applications or,. Can also generate syntax highlighted code in latex papers is very simple convenient.
Problems for Chapter 3
Constitutive Models: Relations between Stress and Strain
3.10. Large Strain Viscoelasticity
3.10.1. A cylindrical specimen is made from a material that can be idealized using the finite-strain viscoelasticity model described in Section 3.10. The specimen may be approximated as incompressible.
3.10.1.1. Let $L$ denote the length of the deformed specimen, and $L_0$ denote the initial length of the specimen. Write down the deformation gradient in the specimen in terms of $\lambda = L/L_0$.
3.10.1.2. Let $\lambda = \lambda_e \lambda_p$ denote the decomposition of the stretch into elastic and plastic parts. Write down the elastic and plastic parts of the deformation gradient in terms of $\lambda_e, \lambda_p$, and find expressions for the elastic and plastic parts of the stretch rate in terms of $\dot{\lambda}_e, \dot{\lambda}_p$.
3.10.1.3. Assume that the material can be idealized using Arruda-Boyce potentials
$$U_\infty = \mu_\infty \left\{ \frac{1}{2}\left(\bar{I}_1 - 3\right) + \frac{1}{20\beta_\infty^2}\left(\bar{I}_1^2 - 9\right) + \frac{11}{1050\beta_\infty^4}\left(\bar{I}_1^3 - 27\right) + \dots \right\} + \frac{K}{2}\left(J - 1\right)^2$$
$$U_T = \mu_T \left\{ \frac{1}{2}\left(\bar{I}_1 - 3\right) + \frac{1}{20\beta_T^2}\left(\bar{I}_1^2 - 9\right) + \frac{11}{1050\beta_T^4}\left(\bar{I}_1^3 - 27\right) + \dots \right\}$$
Obtain an expression for the stress in the specimen in terms of $\lambda_e, \lambda_p$, using only the first two terms in the expansion for simplicity. Your answer should include an indeterminate hydrostatic part.
3.10.1.4. Calculate the deviatoric stress measure
$$\tau'_{ij} = 2\left[ \frac{1}{J_e^{2/3}} \left( \frac{\partial U_T}{\partial \bar{I}_1^e} + \bar{I}_1^e \frac{\partial U_T}{\partial \bar{I}_2^e} \right) B_{ij}^e - \frac{\bar{I}_1^e}{3} \frac{\partial U_T}{\partial \bar{I}_1^e} \delta_{ij} - \frac{1}{J_e^{4/3}} \frac{\partial U_T}{\partial \bar{I}_2^e} B_{ik}^e B_{kj}^e \right]$$
in terms of $\lambda_e$, and hence find an expression for $\dot{\lambda}_p$ in terms of $\lambda_e$.
3.10.1.5. Suppose that the specimen is subjected to a harmonic cycle of nominal strain such that $L = \alpha L_0 \sin \omega t$. Use the results of 3.10.1.2 and 3.10.1.4 to obtain a nonlinear differential equation for $\lambda_e$.
3.10.1.6. Use the material data given in Section 3.10.5 to calculate (numerically) the variation of Cauchy stress in the solid with time induced by cyclic straining. Plot the results as a curve of Cauchy stress as a function of true strain. Obtain results for various values of $\alpha$ and frequency $\omega$.
## Saturday, June 18, 2016
### Two problems I'm looking at
I was reading a fun post over @ http://eatplaymath.blogspot.com/2016/06/my-first-problem-set-for-my-problem.html where Lisa is brainstorming problem sets. She's inspired me to collate the material we did this year. But in the meantime here's a teaser:
I don't usually include algebra problems but I ran into this one last night and it's too good not to keep for future use when I have a group of older kids. At first this problem seems like it's missing enough information. For example, you can't directly solve for $$x_1, \dots, x_7$$ since there are only 3 equations. However, if you assume that some linear combination of the 3 equations will equal the target, i.e.
$$f(x_1,\dots,x_7) = x_1 + 4x_2 + 9x_3 + 16x_4 + 25x_5 + 36x_6 + 49x_7 \\ g(x_1,\dots,x_7) = 4x_1 + 9x_2 + 16x_3 + 25x_4 + 36x_5 + 49x_6 + 64x_7 \\ h(x_1,\dots,x_7) = 9x_1 + 16x_2 + 25x_3 + 36x_4 + 49x_5 + 64x_6 + 81x_7$$
and
$$a \cdot f() + b \cdot g() + c \cdot h() = 16x_1 + 25x_2 + 36x_3 + 49x_4 + 64x_5 + 81x_6 + 100x_7$$
This reduces to 7 equations in 3 unknowns. The first and easiest 3 of them are:
$$(a + 4b + 9c) x_1 = 16x_1$$ $$(4a + 9b + 16c) x_2 = 25x_2$$ $$(9a + 16b + 25c) x_3 = 36x_3$$
Then these can be solved directly and it only remains to check if the solution (1,-3,3) works for the other 4 terms.
Or can you generalize and confirm: $$x^2 -3 (x + 1)^2 + 3(x+2)^2 = (x+3)^2$$
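The claimed identity is easy to spot-check numerically (a quick sketch added for illustration):

```python
# spot-check the identity x^2 - 3(x+1)^2 + 3(x+2)^2 == (x+3)^2
for x in range(-20, 21):
    assert x**2 - 3*(x + 1)**2 + 3*(x + 2)**2 == (x + 3)**2
```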
What's interesting is to examine the general problem afterwards for instance if there were only 2 equations is there a linear combination that works (nope)? What about four (this has multiple solutions)? I also find a hint of the third row of Pascal's triangle in the solution but haven't looked into whether that's a coincidence.
Update: it's easiest to solve the general equation above $$ax^2 + b(x + 1)^2 + c(x+2)^2 = (x+3)^2$$ This reduces to:
$$a+b+c = 1, b+2c = 3, b + 4c = 9$$
Interestingly, in 4 cubic terms the solution to $$ax^3 + b (x + 1)^3+ c(x+2)^3 +d(x+3)^3 = (x+4)^3$$ is {-1,4,-6,4}. Once again close to the 4th row of the triangle (with alternating signs), and in 5 quartic terms the solution is (1,-5,10,-10,5). The binomial theorem is all over this problem so it's not completely surprising, but I'll keep thinking about it a bit more.
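The higher-power cases check out numerically too; the coefficients are alternating binomial coefficients, so for cubes they work out to -1, 4, -6, 4 (a verification sketch added for illustration):

```python
# finite-difference pattern for cubes and fourth powers:
# the coefficients are alternating binomial coefficients
for x in range(-20, 21):
    assert -x**3 + 4*(x + 1)**3 - 6*(x + 2)**3 + 4*(x + 3)**3 == (x + 4)**3
    assert x**4 - 5*(x + 1)**4 + 10*(x + 2)**4 - 10*(x + 3)**4 + 5*(x + 4)**4 == (x + 5)**4
```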
Folding problems are always fun. This one stands out because it uses the notion of proportionality twice, in both directions, i.e. if you draw a line from one vertex of a triangle to somewhere on its base, the ratio of the areas of the two triangles is the same as the ratio of the two bases (since they share the same height).
# Prime Factorization – Explanation & Examples
Prime factorization is a method of finding all the prime numbers that multiply together to form a number. Factors are the numbers multiplied to get a number; prime factors are those factors that are themselves prime, i.e. divisible only by 1 and themselves.
## How to find Prime Factorization?
There are two methods of finding prime factors of a number. These are repeated division and factor tree.
### Repeated division
A number is reduced by dividing it severally with prime numbers. Prime factors of number 36 are found by repeated division as shown:
The prime factors of 36 are therefore 2 and 3, and the factorization can be written as 2 × 2 × 3 × 3. It is advisable to start dividing a number by the smallest prime number and proceed to bigger factors.
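The repeated-division procedure can be sketched in code (an illustration added here; the article itself works through examples by hand):

```python
def prime_factors(n):
    """Repeatedly divide n by the smallest number that goes in evenly.

    Each successful divisor is necessarily prime, because all smaller
    primes have already been divided out."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(36))  # [2, 2, 3, 3]
```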
Example 1
What are the prime factors of 16?
Solution
The best way to solve this problem is by identifying the smallest prime factor of the number, which is 2.
Divide 16 by 2;
16 ÷ 2 = 8
Because 8 is not a prime number, proceed by dividing again by the smallest factor;
8 ÷ 2 = 4
4 ÷ 2 = 2
We have the prime factors of 16 highlighted in yellow, and they include: 2 x 2 x 2 x 2.
which can be written using an exponent:
16 = 2^4
Example 2
Find the prime factors of 12.
Solution
Divide 12 by 2;
12 ÷ 2 = 6
6 is not prime, proceed;
6 ÷ 2 = 3.
Therefore, 12 = 2 x 2 x 3
12 = 2^2 × 3
Note that all the prime factors of a number are prime.
Example 3
Factorize 147.
Solution
Start by dividing 147 by the smallest prime number.
147 ÷ 2 = 73.5
Our answer isn’t an integer, try the next prime number 3.
147 ÷ 3 = 49
Yes, 3 worked, now proceed to the next prime that can divide 49.
49 ÷ 7 = 7
Therefore, 147 = 3 x 7 x 7,
= 3 x 7^2.
Example 4
What is the prime factorization of 19?
Solution
19 is a prime number, so its only prime factor is itself:
19 = 19
Another method of performing factorization is to break a number down into two integers and then find the prime factors of those integers. This technique is useful when dealing with bigger numbers.
Example 5
Find the prime factors of 210.
Solution
Break down 210 into:
210 = 21 x 10
Now find the prime factors of 21 and 10:
21 ÷ 3 = 7
10 ÷ 2 = 5
Combine the factors: 210 = 2 x 3 x 5 x 7
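The repeated-division method above is easy to automate. Here is a short Python sketch (an editorial addition; the function name `prime_factors` is our own):

```python
# Repeated division: keep dividing by the smallest divisor that fits
# until the number is reduced to 1.
def prime_factors(n):
    factors = []
    divisor = 2
    while n > 1:
        while n % divisor == 0:
            factors.append(divisor)
            n //= divisor
        divisor += 1
    return factors

print(prime_factors(210))  # [2, 3, 5, 7]
```

Starting the divisor at 2 and only increasing it once a prime stops dividing guarantees that a composite divisor can never fire, because its prime factors have already been divided out.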
### Factor tree
The factor tree method finds the prime factors of a number by drawing a tree-like diagram: the number is split into a pair of factors, and each branch is split further until only primes remain. The factor tree is a convenient tool for prime factorization. The prime factors of 36 are obtained by a factor tree as shown below:
### Practice Questions
1. Which of the following numbers have a prime factorization of $3\times 5 \times 11$?
2. Which of the following numbers have a prime factorization of $2 \times 5 \times 71$?
3. Which of the following numbers have a prime factorization of $2 \times 3 \times 13$?
4. Which of the following numbers have a prime factorization of $2 \times 3 \times 3 \times 7$?
5. Which of the following numbers have a prime factorization of $3 \times 7 \times 11$?
6. Which of the following numbers have a prime factorization of $3 \times 5 \times 5$?
7. Which of the following numbers have a prime factorization of $2 \times 3 \times 7$?
8. Which of the following numbers have a prime factorization of $2 \times 2 \times 3 \times 11$?
9. Which of the following numbers have a prime factorization of $3 \times 7 \times 11 \times 11$?
10. Which of the following shows the prime factorization of $56$?
11. Which of the following shows the prime factorization of $38$?
12. Which of the following shows the prime factorization of $12$?
13. Which of the following shows the prime factorization of $120$?
14. Which of the following shows the prime factorization of $64$?
15. Which of the following shows the prime factorization of $49$?
16. Which of the following shows the prime factorization of $81$?
17. Which of the following shows the prime factorization of $70$?
18. Which of the following shows the prime factorization of $99$?
19. Which of the following shows the prime factorization of $44$?
20. Which of the following shows the prime factorization of $62$?
21. Which of the following shows the prime factorization of $76$?
22. Which of the following shows the prime factorization of $97$?
23. Which of the following shows the prime factorization of $63$?
24. What are the prime factors of $19$?
25. What are the prime factors of $50$?
26. What are the prime factors of $25$?
27. What are the prime factors of $81$?
28. What are the prime factors of $125$?
29. What are the prime factors of $132$?
|
2022-05-23 10:29:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7316138744354248, "perplexity": 359.16082567407113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558015.52/warc/CC-MAIN-20220523101705-20220523131705-00787.warc.gz"}
|
https://web2.0calc.com/questions/polynomial_53504
|
polynomial
If one zero of the polynomial 3x^2 + 12x - k is reciprocal of the other , then find the value of k.
Jun 20, 2022
#1
By Vieta's formula, $$\text{product of roots} = \dfrac{-k}3$$.
But since one zero is the reciprocal of the other, product of roots is 1.
Therefore, $$\dfrac{-k}3 = 1$$. That means k = -3.
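A quick numerical check of this answer (an editorial Python sketch, not part of the original reply):

```python
# With k = -3 the polynomial is 3x^2 + 12x + 3.
# If one zero is the reciprocal of the other, the product of the
# roots (c/a by Vieta) must be 1.
a, b, c = 3, 12, 3
disc = b**2 - 4*a*c
r1 = (-b + disc**0.5) / (2*a)
r2 = (-b - disc**0.5) / (2*a)
print(r1 * r2)  # ~1.0, so the roots are indeed reciprocals
```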
Jun 20, 2022
|
2022-08-12 03:40:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8882587552070618, "perplexity": 885.3423559873139}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00649.warc.gz"}
|
http://clay6.com/qa/48783/the-coefficient-of-x-5-in-the-expansion-of-x-3-6-is
|
Browse Questions
# The coefficient of $x^5$ in the expansion of $(x+3)^6$ is
$\begin{array}{1 1}(A)\;18\\(B)\;6\\(C)\;12\\(D)\;10\end{array}$
Toolbox:
• $T_{r+1}={}^nC_r\, a^{n-r}\, b^r$
$(x+3)^6=(3+x)^6$
$\Rightarrow 3^6+\cdots+6C_5\;3x^5+6C_6\;x^6$
$\therefore$ Coefficient of $x^5=6C_5\times 3$
$\Rightarrow 18$
Hence (A) is the correct answer.
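This can be confirmed with Python's standard library (an editorial check):

```python
from math import comb

# Coefficient of x^5 in (x+3)^6: choose x from 5 of the 6 factors
# and 3 from the remaining one, giving C(6,5) * 3^1.
coeff = comb(6, 5) * 3**1
print(coeff)  # 18
```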
|
2017-02-26 14:40:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9876375198364258, "perplexity": 806.7784081430686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00199-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/immediatley-need-help-right-now-please-help-me.8829/
|
1. Nov 12, 2003
### frank the tank
4 items add to $14.26 and multiply to $63.42. Have fun
I need this solved and have finally been stumped, how is this equation solved?
post here or email me please frank_medewar@hotmail.com
thank you all
2. Nov 12, 2003
### frank the tank
please I urgently need to know how I can solve this...
or at least a resource that can help me out
thank you all kindly
3. Nov 12, 2003
### Tom Mattson
Staff Emeritus
Homework problems go in the Homework Forums.
Frank,
Chill out. This is not an instant Homework Help service. That said, you have to try the problem and show some work before we offer any assistance. Please see this thread for guidelines.
So, show us what you have and we will show you where you are going wrong.
4. Nov 12, 2003
### frank the tank
thanks TOM!! much help, I'm not in school however
I just love math, mainly TRIG and graphs and such. I saw the problem, and I cannot figure it out for the life of me, I don't even know where to start.
but would love to know how to solve this type of equation....
5. Nov 12, 2003
### revesz
I think that this can be solved by using linear equations but it is really complicated.
6. Nov 12, 2003
### PrudensOptimus
is this a joke?
7. Nov 12, 2003
### ahrkron
Staff Emeritus
frank,
What have you tried? You surely tried a couple of things before looking for internet help.
Have you tried setting up a couple of equations? Where do things get hard?
8. Nov 13, 2003
### one_raven
It seems it is too complex for the MS Access solver tool.
9. Nov 13, 2003
### mani
4 items add to $14.26 and multiply to $63.42. Have fun
Something wrong here.
If every item is a price in $, their product will be in $^4.
So the items better be pure numbers, positive numbers.
Trial and error! what else?
2, 3, 7, 151 are the prime factors. Multiplying each of these by an integral power of 10 will yield an infinite set of factors.
Keep trying! Best of luck!
10. Nov 13, 2003
### faust9
Restate the problem as: the sum of four numbers {A,B,C,D} is 1426 and the product of the four numbers {A,B,C,D} is 6342, where {A,B,C,D} are integers. Working with integers is easier. Next develop a few conclusions: 1 <= {A,B,C,D} <= 1423, because if A=1, B=1, C=1 then D=1423 (this doesn't work for the product portion but it establishes boundaries).
Once you have some boundaries, decompose 6342 into a product of primes {2*3*7*151}. From there you can find all factors of the product and trial-and-error the answer.
A system of linear equations would not be easy because you only have two equations and four variables (unless you're clever enough to think of two additional equations)...
11. Nov 13, 2003
### Integral
Staff Emeritus
This is not really a math problem. Linear algebra will not get you an answer. You have 4 unknowns and 2 equations; to solve this using tools like Excel's solver you will need 2 more constraints.
This shortage of information makes it a logic puzzle to be worked out by trial and error.
As you say, have fun.
12. Nov 13, 2003
### HallsofIvy
Why is this "not really a math problem"?
It doesn't have a unique correct answer but many math problems don't.
There are 4 unknowns with only 2 equations but, since the problem was stated in terms of money, if we convert to cents, as Faust9 did, so that a + b + c + d = 1426 and abcd = 6342, we can assert that the unknowns have to be positive integers, so this is a (non-linear) Diophantine problem.
I don't claim it's easy but it doable (and certainly is a mathematics problem).
13. Nov 14, 2003
### arcnets
If we do this in cents, then
a+b+c+d = 1426, and
abcd = 100^4 * 63.42 = 6342000000.
Now, 6342000000 = 2^7 * 3 * 5^6 * 7 * 151.
You have to group these prime factors into 4 products so that their sum is 1426. It's a trial & error problem.
14. Nov 14, 2003
### arcnets
Example.
2*2*2*2*5*5 = 400
2*2*2*5*5 = 200
5*5*7 = 175
151*3 = 453
-----------------
SUM 1228
That's too small. So, in the next try, we make the highest price bigger, to get a bigger result. And so on...
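The trial-and-error search sketched in this thread can be automated. The following Python code is our own illustration (not from the thread): working in cents, it enumerates divisor pairs (a, b) of the product and then solves a quadratic for the remaining pair (c, d), since c + d and c * d are both known at that point:

```python
from math import isqrt

def divisors(n):
    """All positive divisors of n, in ascending order."""
    small, large = [], []
    i = 1
    while i * i <= n:
        if n % i == 0:
            small.append(i)
            if i != n // i:
                large.append(n // i)
        i += 1
    return small + large[::-1]

def find_items(total, product):
    """All sorted quadruples (a, b, c, d) of positive integers with
    a + b + c + d == total and a * b * c * d == product."""
    sols = set()
    for a in divisors(product):
        if a > total:
            break
        for b in divisors(product // a):
            if b < a or a + b >= total:
                continue
            s = total - a - b          # c + d
            p = product // (a * b)     # c * d
            disc = s * s - 4 * p       # discriminant of t^2 - s*t + p
            if disc < 0:
                continue
            r = isqrt(disc)
            if r * r != disc or (s - r) % 2:
                continue
            c, d = (s - r) // 2, (s + r) // 2
            if b <= c and c >= 1:
                sols.add((a, b, c, d))
    return sorted(sols)

# Toy example: four integers summing to 10 with product 24.
print(find_items(10, 24))  # [(1, 2, 3, 4)]
```

For the puzzle itself one would call `find_items(1426, 6342000000)`, i.e. a sum of 1426 cents and a product of 100^4 * 63.42 as computed above.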
15. Nov 14, 2003
### arcnets
Edit: _ means blank
|
2018-03-17 19:11:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34550201892852783, "perplexity": 2274.8329802011885}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645280.4/warc/CC-MAIN-20180317174935-20180317194935-00489.warc.gz"}
|
http://docs.menpo.org/en/stable/api/landmark/pose_human36M_32_to_pose_human36M_17.html
|
# pose_human36M_32_to_pose_human36M_17¶
menpo.landmark.pose_human36M_32_to_pose_human36M_17(pcloud)[source]
Apply the Human3.6M 17-point semantic labels (based on the original semantic labels of Human3.6M, but removing the annotations corresponding to duplicate points, soles and palms) to a pointcloud originally labelled with 32 points.
The semantic labels applied are as follows:
• pelvis
• right_leg
• left_leg
• spine
Raises LabellingError: If the given labelled point graph/pointcloud contains fewer than the expected number of points.
|
2021-05-12 17:35:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4524785578250885, "perplexity": 14669.228287064117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989766.27/warc/CC-MAIN-20210512162538-20210512192538-00537.warc.gz"}
|
https://inverseprobability.com/talks/lawrence-cfi17/once-upon-a-universal-standard-time.html
|
at CFI Annual Conference on Jul 13, 2017
Neil D. Lawrence, Amazon Cambridge and University of Sheffield
#### Abstract
In this talk we consider a fundamental difference between human and machine intelligence: the ratio between the ability to compute and the ability to communicate, which we refer to as the embodiment factor. Having suggested why this makes us fundamentally different, we speculate on implications for developing narrative structure from data.
## Embodiment Factors
| | machine | human |
| --- | --- | --- |
| compute | ≈ 100 gigaflops | ≈ 16 petaflops |
| communicate | ≈ 1 gigabit/s | ≈ 100 bit/s |
| embodiment factor (compute/communicate) | 10⁴ | 10¹⁴ |
There is a fundamental limit placed on our intelligence by our ability to communicate. Claude Shannon founded the field of information theory. The clever part of this theory is that it allows us to separate our measurement of information from what the information pertains to.¹
Shannon measured information in bits. One bit of information is the amount of information I pass to you when I give you the result of a coin toss. Shannon was also interested in the amount of information in the English language. He estimated that on average a word in the English language contains 12 bits of information.
Given typical speaking rates, that gives us an estimate of our ability to communicate of around 100 bits per second (Reed and Durlach 1998). Computers on the other hand can communicate much more rapidly. Current wired network speeds are around a billion bits per second, ten million times faster.
When it comes to compute, though, our best estimates indicate our computers are slower. A typical modern computer can perform around 100 billion floating point operations per second, and each floating point operation involves a 64-bit number. So the computer is processing around 6,400 billion bits per second.
It’s difficult to get similar estimates for humans, but by some estimates the amount of compute required to simulate a human brain is equivalent to that in the UK’s fastest computer (Ananthanarayanan et al. 2009), the Met Office machine in Exeter, which in 2018 ranked as the 11th fastest computer in the world. That machine simulates the world’s weather each morning, and then simulates the world’s climate in the afternoon. It is a 16 petaflop machine, processing around 1,000 trillion bits per second.
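As a rough illustration, the embodiment-factor arithmetic can be written out explicitly (a sketch using the round numbers quoted in this talk; the exact exponents depend on how flops are converted to bits, so they land near, but not exactly on, the quoted powers of ten):

```python
# Rough embodiment-factor arithmetic from the talk's estimates.
machine_compute = 100e9 * 64   # 100 gigaflops at 64 bits per operation
machine_comms   = 1e9          # ~1 gigabit/s wired network
human_compute   = 16e15 * 64   # ~16 petaflops to simulate a brain
human_comms     = 100          # ~100 bits/s of speech

machine_factor = machine_compute / machine_comms   # ~6.4e3
human_factor   = human_compute / human_comms       # ~1.0e16
print(machine_factor, human_factor)
```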
So when it comes to our ability to compute we are extraordinary, not compute in our conscious mind, but the underlying neuron firings that underpin both our consciousness, our subconsciousness as well as our motor control etc.
If we think of ourselves as vehicles, then we are massively overpowered. Our ability to generate derived information from raw fuel is extraordinary. Intellectually we have formula one engines.
But in terms of our ability to deploy that computation in actual use, to share the results of what we have inferred, we are very limited. So when you imagine the F1 car that represents a psyche, think of an F1 car with bicycle wheels.
Just think of the control a driver would have to have to deploy such power through such a narrow channel of traction. That is the beauty and the skill of the human mind.
In contrast, our computers are more like go-karts. Underpowered, but with well-matched tires. They can communicate far more fluidly. They are more efficient, but somehow less extraordinary, less beautiful.
For humans, that means much of our computation should be dedicated to considering what we should compute. To do that efficiently we need to model the world around us. The most complex thing in the world around us is other humans. So it is no surprise that we model them. We second guess what their intentions are, and our communication is only necessary when they are departing from how we model them. Naturally, for this to work well, we need to understand those we work closely with. So it is no surprise that social communication, social bonding, forms so much of a part of our use of our limited bandwidth.
There is a second effect here, our need to anthropomorphise objects around us. Our tendency to model our fellow humans extends to when we interact with other entities in our environment. To our pets as well as inanimate objects around us, such as computers or even our cars. This tendency to over interpret could be a consequence of our limited ability to communicate.
For more details see this paper “Living Together: Mind and Machine Intelligence”, and this TEDx talk.
# Evolved Relationship with Information
The high bandwidth of computers has resulted in a close relationship between the computer and data. Large amounts of information can flow between the two. The degree to which the computer is mediating our relationship with data means that we should consider it an intermediary.
Originally our low bandwidth relationship with data was affected by two characteristics. Firstly, our tendency to over-interpret, driven by our need to extract as much knowledge from our low bandwidth information channel as possible. Secondly, our improved understanding of the domain of mathematical statistics and how our cognitive biases can mislead us.
With this new set up there is a potential for assimilating far more information via the computer, but the computer can present this to us in various ways. If its motives are not aligned with ours then it can misrepresent the information. This needn't be nefarious; it can result simply from the computer pursuing a different objective from ours. For example, if the computer is aiming to maximize our interaction time, that may be a different objective from ours, which may be to summarize information in a representative manner in the shortest possible length of time.
For example, for me, it was a common experience to pick up my telephone with the intention of checking when my next appointment was, but to soon find myself distracted by another application on the phone, and end up reading something on the internet. By the time I’d finished reading, I would often have forgotten the reason I picked up my phone in the first place.
There are great benefits to be had from the huge amount of information we can unlock from this evolved relationship between us and data. In biology, large scale data sharing has been driven by a revolution in genomic, transcriptomic and epigenomic measurement. The improved inferences that can be drawn through summarizing data by computer have fundamentally changed the nature of biological science; now this phenomenon is also influencing us in our daily lives as data measured by happenstance is increasingly used to characterize us.
Better mediation of this flow actually requires a better understanding of human-computer interaction. This in turn involves understanding our own intelligence better, what its cognitive biases are and how these might mislead us.
For further thoughts see Guardian article on marketing in the internet era from 2015.
You can also check my blog post on System Zero..
## Human Communication
For human conversation to work, we require an internal model of who we are speaking to. We model each other, and combine our sense of who they are, who they think we are, and what has been said. This is our approach to dealing with the limited bandwidth connection we have. Empathy and understanding of intent. Mental dispositional concepts are used to augment our limited communication bandwidth.
Fritz Heider referred to the important point of a conversation as being that they are happenings that are “psychologically represented in each of the participants” (his emphasis) (Heider 1958)
### Bandwidth Constrained Conversations
```python
import pods
from ipywidgets import IntSlider

pods.notebook.display_plots('anne-bob-conversation{sample:0>3}.svg',
                            '../slides/diagrams',
                            sample=IntSlider(0, 0, 7, 1))
```
Embodiment factors imply that, in our communication between humans, what is not said is, perhaps, more important than what is said. To communicate with each other we need to have a model of who each of us are.
To aid this, in society, we are required to perform roles. Whether as a parent, a teacher, an employee or a boss. Each of these roles requires that we conform to certain standards of behaviour to facilitate communication between ourselves.
Control of self is vitally important to these communications.
The high availability of data available to humans undermines human-to-human communication channels by providing new routes to undermining our control of self.
### A Six Word Novel
But this is a very different kind of intelligence than ours. A computer cannot understand the depth of Ernest Hemingway’s apocryphal six-word novel: “For Sale, Baby Shoes, Never worn”, because it isn’t equipped with the ability to model the complexity of humanity that underlies that statement.
### Heider and Simmel (1944)
Fritz Heider and Marianne Simmel’s experiments with animated shapes from 1944 (Heider and Simmel 1944). Our interpretation of these objects as showing motives and even emotion is a combination of our desire for narrative, a need for understanding of each other, and our ability to empathise. At one level, these are crudely drawn objects, but in another key way, the animator has communicated a story through simple facets such as their relative motions, their sizes and their actions. We apply our psychological representations to these faceless shapes in an effort to interpret their actions.
Ananthanarayanan, Rajagopal, Steven K. Esser, Horst D. Simon, and Dharmendra S. Modha. 2009. “The Cat Is Out of the Bag: Cortical Simulations with 10⁹ Neurons, 10¹³ Synapses.” In Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis - SC ’09. https://doi.org/10.1145/1654059.1654124.
Heider, Fritz. 1958. The Psychology of Interpersonal Relations. John Wiley.
Heider, F., and M. Simmel. 1944. “An Experimental Study of Apparent Behavior.” The American Journal of Psychology 57: 243–59.
Reed, Charlotte, and Nathaniel I. Durlach. 1998. “Note on Information Transfer Rates in Human Communication.” Presence Teleoperators & Virtual Environments 7 (5): 509–18. https://doi.org/10.1162/105474698565893.
1. The challenge of understanding what information pertains to is known as knowledge representation.
|
2021-10-26 03:29:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36210891604423523, "perplexity": 1476.1305514655435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587794.19/warc/CC-MAIN-20211026011138-20211026041138-00375.warc.gz"}
|
https://wiki.ubc.ca/Science:Math_Exam_Resources/Courses/MATH152/April_2015/Question_B_3_(a)/Solution_2
|
# Science:Math Exam Resources/Courses/MATH152/April 2015/Question B 3 (a)/Solution 2
Let ${\displaystyle \mathbf {x} =(x_{1},x_{2},x_{3})}$ denote the point of intersection. Since ${\displaystyle \mathbf {x} \in P_{1}}$, we must have
${\displaystyle {\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\end{bmatrix}}={\begin{bmatrix}1\\1\\1\end{bmatrix}}+s_{1}{\begin{bmatrix}0\\1\\0\end{bmatrix}}+s_{2}{\begin{bmatrix}1\\2\\-1\end{bmatrix}}={\begin{bmatrix}1+s_{2}\\1+s_{1}+2s_{2}\\1-s_{2}\end{bmatrix}}.}$
In particular, ${\displaystyle x_{1}+x_{3}=2}$. On the other hand, since ${\displaystyle \mathbf {x} \in l_{1}}$, we must also have
${\displaystyle {\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\end{bmatrix}}=t{\begin{bmatrix}0\\1\\2\end{bmatrix}}={\begin{bmatrix}0\\t\\2t\end{bmatrix}}.}$
It follows that ${\displaystyle 0+2t=2}$, so ${\displaystyle t=1}$. We conclude that
${\displaystyle \mathbf {x} ={\color {blue}{\begin{bmatrix}0\\1\\2\end{bmatrix}}}.}$
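A numerical check of this intersection (an editorial addition in plain Python, following the same elimination steps):

```python
# Equate plane and line coordinates:
#   1 + s2        = 0       (x1)
#   1 + s1 + 2*s2 = t       (x2)
#   1 - s2        = 2*t     (x3)
s2 = -1                 # from the x1 equation
t  = (1 - s2) / 2       # from the x3 equation
s1 = t - 1 - 2 * s2     # from the x2 equation

plane_point = (1 + s2, 1 + s1 + 2 * s2, 1 - s2)
line_point  = (0, t, 2 * t)
print(plane_point, line_point)  # both (0, 1, 2) up to int/float type
```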
|
2022-08-16 15:59:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9988499283790588, "perplexity": 277.76377980145804}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572408.31/warc/CC-MAIN-20220816151008-20220816181008-00013.warc.gz"}
|
https://indexguy.wordpress.com/2007/09/06/group-theory-week-01/
|
Index Concordia
Group theory – Week 01
Posted in Group theory by Index Guy on September 6, 2007
During this week professor van Nieuwenhuizen started the semester by mentioning that group theory is very important for physics, and by stressing the importance of studying transformation groups acting on a carrier space. He mentioned some examples, such as the group of rotations of Euclidean space that keep distances fixed and the rotations of a solid cube. He then moved forward by defining a group in terms of a left-inverse and left-identity as follows:
• A group $G$ is a set of elements and a multiplication rule $*$ such that
• if $a,b \in G$ then $a*b \in G$ (closure),
• $a*(b*c) = (a*b)*c$ (associativity),
• for every $a \in G$ we also have $a_{L}^{-1} \in G$ such that $a_{L}^{-1}* a = e_{L}$,
• for every $a \in G$ we have $e_{L} \in G$ with the property $e_{L} *a = a.$
When questioned (by himself) why not place item 4 before item 3, the professor stated that, from a physical point of view, it is easier to understand the existence of the identity as the result of doing and un-doing an operation on an element, rather than just postulating its existence. Then, because of item 1, the identity is part of the group. Rigor will sit at the back of the room.
He then proceeded to prove that the existence of a left-inverse and left-identity implies the existence of a right-inverse and right-identity, and furthermore that the left and right inverses are one and the same element; the same goes for the left and right identities. After this subtlety he proceeded to list examples of what is a group and what is not.
Some of the examples include:
1. The set of $n \times n$ matrices with real entries and non-vanishing determinant, with the operation given by matrix multiplication is a group. We also have a group with matrix addition as the operation.
2. The set of all real matrices $M$ with the operation specified as $M_{1}\star M_{2} = [M_{1}, M_{2}]$ is not a group. This is because associativity fails.
3. All numbers of the form $a + b\sqrt{2}$ with $a,b$ being rational numbers form a group under traditional multiplication.
4. Polynomials under multiplication do not form a group, since in general they fail to have an inverse.
5. The set $\left\{ \pm 1 , \pm i \right\}$ is a group under multiplication (the cyclic group of order 4; in the lecture it was called the Klein group, though that name usually refers to the non-cyclic group of order 4). An extension of this group is the quaternion group with elements $\left\{ \pm 1 , \pm \tilde{i} , \pm j , \pm k\right\}$.
After these examples professor van Nieuwenhuizen introduced the concept of a finite group’s multiplication table as a matrix whose entry in the ith row and jth column is $g_{i}*g_{j}$. If no element is repeated in any column or row, we call this array a Latin square. It turns out that not all Latin squares represent groups. If two groups have the same multiplication table, we say that they are isomorphic to each other.
We calculated the entries of the multiplication table for the Klein Group. Then we considered the group formed by rotations by $\pi$ of a three dimensional cube. This group has four elements, but its multiplication table is slightly different from that of the Klein group.
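As an editorial illustration, the multiplication table of $\left\{\pm 1, \pm i\right\}$ can be generated with complex arithmetic. Note that $i \cdot i = -1$: not every element squares to the identity, whereas in the group of rotations by $\pi$ every element is its own inverse, which is exactly why the two tables differ:

```python
# Cayley table of {1, i, -1, -i} under complex multiplication.
elements = [1, 1j, -1, -1j]
names = {1: '1', 1j: 'i', -1: '-1', -1j: '-i'}

table = [[names[a * b] for b in elements] for a in elements]
for row in table:
    print(row)
```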
After all this we defined the concept of an algebra as a set of objects with two operations (addition and multiplication) defined, and linear independence over some field (usually the real or complex numbers). For example, the complex numbers form an algebra of elements $z = x + iy$, with basis
$\left\{1, i \right\}.$
Next we turned to normed division Algebras. Some definitions:
• A normed algebra is one where the norm of elements satisfy $\left|a_{1}a_{2}\right| = \left|a_{1}\right|\left|a_{2}\right|$.
• A division algebra is one where if for any two elements we have $a_{1}a_{2} = 0$ then either $a_{1} = 0$ or $a_{2} = 0$.
We then turned our attention to the four normed division algebras over the real numbers: the real numbers, complex numbers, quaternions and octonions. The octonions are the black sheep of the family, since their set of basis elements does not form a group: associativity fails.
Finally we mentioned briefly Dirac’s approach to octonions using, well, Dirac matrices. Mr. Dirac considered $n$-dimensional objects of the form
$\displaystyle a_{0} + a_{1}\gamma_{1} + ... + a_{n}\gamma_{n},$
with the gammas satisfying $\left\{\gamma_{i}, \gamma_{j}\right\} = 2\delta_{ij}$. An important step was to expect the product of two gammas to be a new element, not some linear combination of the gammas (like in the case of the octonions). The set of all these objects forms the Dirac group in $n$-dimensions.
http://researchprofiles.herts.ac.uk/portal/en/publications/science-with-an-ngvla(592c5147-ab16-43eb-aafa-59363ec9cca2).html
# University of Hertfordshire
## Science with an ngVLA: Young Radio AGN in the ngVLA Era
Research output: Contribution to journal › Article
Original language: English
Journal: Publications of the Astronomical Society of the Pacific
Publication status: Published - 16 Oct 2018
### Abstract
Most massive galaxies are now thought to go through an Active Galactic Nucleus (AGN) phase one or more times. Yet, the cause of triggering and the variations in the intrinsic and observed properties of AGN population are still poorly understood. Young, compact radio sources associated with accreting supermassive black holes (SMBHs) represent an important phase in the life cycles of jetted AGN for understanding AGN triggering and duty cycles. The superb sensitivity and resolution of the ngVLA, coupled with its broad frequency coverage, will provide exciting new insights into our understanding of the life cycles of radio AGN and their impact on galaxy evolution. The high spatial resolution of the ngVLA will enable resolved mapping of young radio AGN on sub-kiloparsec scales over a wide range of redshifts. With broad continuum coverage from 1 to 116 GHz, the ngVLA will excel at estimating ages of sources as old as $30-40$ Myr at $z \sim 1$. In combination with lower-frequency ($\nu <1$ GHz) instruments such as ngLOBO and the Square Kilometer Array, the ngVLA will robustly characterize the spectral energy distributions of young radio AGN.
### Notes
8 pages, 3 figures, To be published in the ASP Monograph Series, "Science with a Next-Generation VLA" , ed. E. J. Murphy (ASP, San Francisco, CA). arXiv admin note: text overlap with arXiv:1803.02357
https://allegro.tech/2020/12/speeding-up-ios-builds-with-bazel.html
As we developed our Allegro iOS app, adding new features with more people contributing to the codebase, we noticed that build times began to grow. In order to have precise metrics, we started to track clean build time as well as the amount of code we had. Do these two metrics grow at the same pace?
### Slowing down
Our measurements started in May 2019 with a combined 300k lines of Objective-C and Swift code that took around ~177 seconds to compile. One year later we had increased code size by 33%, but compilation time grew by 50%. It's worth noting that this time is measured on our CI machine, which is more powerful than a laptop; build times are about 50% slower on our work MacBooks. To put it into perspective: on average, developers do 8 clean builds each day, which now takes about 40 minutes of their work. As we have 25 developers contributing to the project, this adds up to 16 hours each day and over 300 hours monthly! We had to make some changes in order not to spend most of our time waiting for the app to compile. Even though we have split the application into smaller projects, it all needs to be built into one single application. Since it's a monolith that needs to be linked together, one “service” cannot be changed in a running app as you would in a microservice backend infrastructure.
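The team-wide cost above can be sanity-checked with some quick arithmetic. A sketch, where the ~5-minute laptop build time and 20 working days per month are our assumptions, not figures from the post:

```python
# Rough cost of clean builds across the team (assumed inputs)
build_time_min = 5        # ~5 min per clean laptop build (assumption)
builds_per_day = 8        # average clean builds per developer per day
developers = 25
workdays_per_month = 20   # assumption

per_dev_daily_min = build_time_min * builds_per_day           # 40 minutes
team_daily_hours = per_dev_daily_min * developers / 60        # ~16.7 hours
team_monthly_hours = team_daily_hours * workdays_per_month    # ~333 hours

print(per_dev_daily_min, round(team_daily_hours, 1), round(team_monthly_hours))
# -> 40 16.7 333
```

This matches the post's "16 hours each day and over 300 hours monthly".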
At first we tried to speed things up by prebuilding our 3rd-party dependencies with Carthage. However, this was not very efficient, as they are only a small fraction of our code base. Any improvement was quickly eaten up by adding new code that needed time to compile. Not compiling the same code over and over was the direction we were aiming for.
### Bazel
Before we started looking for a solution we outlined what was important for us:
• Ideally it should be transparent to our developers - they should only notice an increase in speed
• It should work with modules that mix Obj-C and Swift
• It should be easy to turn off, switching back to standard Xcode builds if something goes sideways
Basically, this meant we wanted to keep our current setup while adding a mechanism letting us share compiled artifacts (preferably without any special developer integration). Our eyes turned to the open source build systems: Bazel and Buck. Comparing these two, we chose Bazel, since it provides better support for custom actions with its Starlark language and it's more popular in the iOS community.
Bazel is Google’s build system that supports C++, Android, iOS, Go and a wide variety of other language platforms on Windows, macOS, and Linux. One of its key features is its caching mechanism - both local and remote.
Bazel already provides a set of Apple rules that can build a complete application, but it didn't meet our requirements, since mixing Swift and Obj-C is not possible. Another problem is that we would need to do the whole transition at once, since you cannot simply migrate only a part of the project. We decided to create a custom rule that uses xcodebuild to build frameworks; this means we would use the same build system we currently use in everyday development, and we wouldn't have to change our current project.
Custom rules can be written in the Starlark language. In our case we needed to wrap xcodebuild into an sh_binary action:
sh_binary(
    name = "xcodebuild",
    srcs = ["/usr/bin/xcodebuild"],
    visibility = ["//visibility:public"],
)
Then, we can create a rule that will call xcodebuild and produce target.framework:
def _impl(ctx):
    name = ctx.label.name
    pbxProj = ctx.file.project
    output_config = "CONFIGURATION_BUILD_DIR=../%s" % ctx.outputs.framework.dirname
    ctx.actions.run(
        inputs = [pbxProj] + ctx.files.srcs,
        outputs = [ctx.outputs.framework],
        arguments = ["build", "-project", pbxProj.path, "-scheme", name, output_config],
        progress_message = "Building framework %s" % name,
        executable = ctx.executable.xcodebuild,
    )

framework = rule(
    implementation = _impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "project": attr.label(
            allow_single_file = True,
            mandatory = True,
        ),
        "xcodebuild": attr.label(
            executable = True,
            cfg = "host",
            allow_files = True,
            default = Label("//bazel/xcodebuild"),
        ),
    },
    outputs = {"framework": "%{name}.framework"},
)
With that we can now build any project we want; in this case, the AFNetworking library:
load("//bazel:xcodebuild.bzl", "framework")

framework(
    name = "AFNetworking",
    project = "Pods/AFNetworking.xcodeproj",
    srcs = glob(["Pods/AFNetworking/**/*"]),
)
Then we can call:
./bazel/bazelisk build //:AFNetworking
and we should get output like this:
** BUILD SUCCEEDED ** [11.279 sec]
Target //:AFNetworking:
bazel-bin/AFNetworking.framework
INFO: Elapsed time: 12.427s, Critical Path: 12.28s
INFO: 1 process: 1 local.
INFO: Build completed successfully, 2 total actions
Thanks to Bazel, a build will only be performed once and rerun only when any of the target files change. Once we point to a remote cache with --remote_http_cache, we can share this artifact across machines. It's amazing how easy it is to set up a remote cache.
How can we use Bazel from Xcode, though? Unfortunately, Xcode is not known for great support of external build systems, and there is no way of adding it ourselves since it's closed source. The only way of extending it is plugins, whose capabilities are very limited. Fortunately, there is a way: we can use Build Phases that run each time a project is built. It's a simple Run Script phase that invokes Bazel and copies the created frameworks to BUILT_PRODUCTS_DIR. When developers are not working on a given module, we use a special tool of ours that generates a workspace without it, and that target is built with Bazel in this Build Phase. Thanks to the shared remote cache, most of the time instead of compiling we just download precompiled frameworks.
After migrating all of our modules to Bazel we were able to significantly reduce our clean build time: it dropped to about a third, going from 260 s to just 85 s. Developers' experience improved as well, because Xcode is a lot more responsive now that the number of projects included in the workspace has been reduced.
It's worth noting that if any of our scripts or build artifacts contain, e.g., local paths, they will cause misses in our cache. To prevent this we monitor our local and CI build times and cache hits to detect such situations.
### Tests
A couple of years ago we moved all of our iOS projects to a single monorepo. This has drastically simplified development, since we no longer have to maintain a pyramid of hundreds of dependencies between dozens of repositories. One downside is that all projects combined produce over 15,000 unit tests that take over an hour to build and run. We didn't want to wait that long on each PR, so we decided to run only the portion of tests affected by the introduced changes. To achieve this we had to maintain a list of dependencies between different projects, and that was obviously very error-prone. The chart below shows just a small portion of our dependency tree (generated in Bazel).
After the migration to Bazel we can query our dependency graph to get the list of targets that a given file affects and run unit tests only for those targets. That improved our experience, since we used to maintain the list of dependencies between our modules manually, which was error-prone and time-consuming.
Build results can be cached the same way as build artifacts. This has dramatically reduced test times on our master branch test plan, as we can run bazel test //... and only run test targets that have not been run previously. Take a look at the chart below to see how good our results are:
### Conclusion
Integrating Bazel into an iOS project requires some effort, but in our opinion it's worth it, especially in large-scale projects. We see more and more companies struggling with fast and scalable builds. Some of the key tech players, including Lyft, Pinterest and LinkedIn, have switched to Bazel for building iOS apps as well. It's worth watching the Keith Smiley & Dave Lee talk from BazelCon about the migration of the Lyft app to Bazel.
We still have the main app target with a large amount of source code that always needs to be compiled. Currently we are working on splitting the app target into modules, so we can cache this code as well and reduce build time even further. In the future we will try to make the same Bazel migration with our Android application to achieve the same build speed improvement and have a single build tool for both platforms. We will also try out another promising feature, called Remote Execution, so we can use remote workers to perform remote builds. We estimate that after completing these plans, we can further reduce our build times to about 10 seconds.
https://math.stackexchange.com/questions/2104438/is-this-a-proof-of-the-collatz-conjecture
# Is this a proof of the Collatz Conjecture?
I recently stumbled upon the following paper from April 2016: https://www.researchgate.net/publication/299749569_A_proof_of_the_Collatz_conjecture
Its authors, who are university professors, claim it proves the Collatz conjecture.
Since I have not been up to date with the status of this conjecture for a while, and because I found no disproofs of this document on the web, I was wondering if someone here can shed some light on this paper.
Thanks.
• Compare the wording of the abstract there with the abstract arxiv.org/abs/1612.07820 by the same two authors (but later). I would say definitely no. – Tobias Kildetoft Jan 19 '17 at 12:42
• The revised version sounds like it is likely to hold, but neither the insight or the involved probabilistic techniques are new. – Jack D'Aurizio Jan 19 '17 at 13:10
• Those guys actually sent me (and presumably others who worked on the Collatz conjecture before) their "proof" several months ago. I told them as politely as I could that I do not believe it to be a proof, with pointers to the gap in their proof and references to literature with similar probabilistic claims which are not complete proofs. They responded rather rudely saying I was wrong, and I did not know what I am talking about. So yeah, some "university professors with established reputation" can be real assholes as well. – TMM Jan 24 '17 at 5:02
• Interestingly, in their follow-up paper they thank a bunch of people for their "insightful" comments, but none of these people (as far as I can tell) have actually ever worked on the Collatz conjecture. So I guess the feedback they got from people who did work on the topic ("the proof is wrong and the results are not new") was not insightful enough :-) – TMM Jan 24 '17 at 8:21
• Sentences like "in this paper, we provide a self-consistent argument to support the validity of the Collatz conjecture" and "the existence of diverging trajectories can be however ruled out by invoking an argument of internal consistency" strongly set off my bullshit detector - they're usually a good sign that a "proof" is nothing more than a heuristic, backed by "intuitively obvious" claims (which on further examination are usually anything but). – Noah Schweber Jan 27 '17 at 15:14
The authors claim that EVERY trajectory is asymptotic to a decreasing one, hence all trajectories tend to the $1,2,4$ cycle.
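For readers who want to experiment, the iteration itself is trivial to code; every trajectory checked so far eventually falls into the $4, 2, 1$ cycle (a quick sketch for exploration, unrelated to the paper's argument):

```python
def collatz_trajectory(n, max_steps=10_000):
    """Return the Collatz trajectory of n, stopping once it reaches 1."""
    assert n >= 1
    traj = [n]
    for _ in range(max_steps):
        if n == 1:
            return traj
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        traj.append(n)
    raise RuntimeError("no convergence within max_steps (that would be news!)")

# 27 is a classic example: it climbs as high as 9232 before falling to 1.
t = collatz_trajectory(27)
print(len(t) - 1, max(t))  # 111 steps, peak 9232
```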
http://oem.bmj.com/highwire/markup/164108/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed
Table 1
Demographic and work organisation characteristics of employees with and without a WC claim
| Characteristic | Overall (N=16 926) %+ | No claim (n=16 393) %+ | Claim (n=533) %+ | Total claim cost, M ($) | Total claim cost, SD ($) |
|---|---|---|---|---|---|
| Sex* | | | | | |
| Male | 60 | 61 | 49 | 3044 | 8155 |
| Female | 40 | 39 | 51 | 4847 | 16 492 |
| Age (years) | | | | | |
| 18–24 | 8 | 8 | 9 | 1516 | 2677 |
| 25–34 | 26 | 26 | 25 | 2492 | 7169 |
| 35–44 | 24 | 24 | 24 | 4723 | 17 629 |
| 45–54 | 24 | 24 | 22 | 5729 | 16 255 |
| 55–64 | 16 | 16 | 18 | 4216 | 11 737 |
| 65+ | 3 | 3 | 2 | 2163 | 2947 |
| Race/ethnicity | | | | | |
| White | 84 | 84 | 80 | 3778 | 13 524 |
| Black | 1 | 1 | 1 | 3298 | 5136 |
| Hispanic/Latino | 12 | 11 | 15 | 5008 | 10 529 |
| Other | 3 | 4 | 4 | 5447 | 18 043 |
| Education* | | | | | |
| At least a 4-year college degree | 51 | 52 | 36 | 4821 | 10 909 |
| Some college or 2-year degree | 31 | 30 | 39 | 3420 | 9517 |
| High school diploma or GED | 16 | 16 | 19 | 4396 | 14 858 |
| Did not complete high school | 3 | 3 | 6 | 3666 | 13 154 |
| Employment type* | | | | | |
| Full time | 91 | 91 | 93 | 3860 | 13 716 |
| Part time | 9 | 9 | 7 | 5752 | 13 023 |
| Pay scheme* | | | | | |
| Salary | 51 | 52 | 33 | 3396 | 9770 |
| Hourly | 49 | 48 | 67 | 4257 | 14 530 |
| Industry* | | | | | |
| Agriculture | 0 | 0 | 1 | 1951 | 2100 |
| Mining/construction | 11 | 11 | 13 | 4537 | 12 037 |
| Manufacturing | 5 | 5 | 5 | 2778 | 5623 |
| Transport/communication/electric/gas/sanitation | 3 | 3 | 4 | 4133 | 10 189 |
| Wholesale trade | 3 | 3 | 4 | 10 115 | 39 680 |
| Retail trade | 10 | 10 | 8 | 4481 | 11 276 |
| Finance | 6 | 6 | 1 | 2352 | 2685 |
| Services | 51 | 52 | 48 | 3429 | 9907 |
| Public administration | 10 | 10 | 16 | 4131 | 16 855 |
| Occupation* | | | | | |
| Executive | 14 | 14 | 9 | 2723 | 4432 |
| Professional | 36 | 36 | 25 | 4991 | 16 912 |
| Technical support | 3 | 3 | 3 | 1134 | 1696 |
| Sales | 7 | 7 | 5 | 3006 | 8243 |
| Clerical and administrative support | 15 | 15 | 10 | 2312 | 5129 |
| Service occupation | 12 | 12 | 20 | 5126 | 18 189 |
| Precision production and crafts worker | 3 | 3 | 3 | 961 | 982 |
| Chemical/production operator | 1 | 1 | 1 | 14 087 | 28 130 |
| Labourer | 11 | 10 | 24 | 3584 | 8825 |
| Annual income (in dollars)* | | | | | |
| <10 000 | 7 | 7 | 7 | 3759 | 8580 |
| 10 000–14 999 | 5 | 5 | 5 | 3466 | 13 569 |
| 15 000–19 999 | 5 | 5 | 7 | 3253 | 6866 |
| 20 000–24 999 | 15 | 15 | 21 | 2872 | 6385 |
| 25 000–34 999 | 11 | 11 | 11 | 4689 | 12 968 |
| 35 000–49 000 | 26 | 26 | 26 | 4124 | 15 035 |
| 50 000–74 999 | 19 | 19 | 16 | 6328 | 21 536 |
| 75 000+ | 12 | 12 | 7 | 2887 | 5689 |
| Company size (number of employees)* | | | | | |
| <100 | 35 | 35 | 39 | 4648 | 13 852 |
| 100–499 | 41 | 41 | 41 | 3853 | 14 750 |
| 500+ | 24 | 24 | 20 | 2851 | 6294 |

• +Per cents are calculated based on the total sample size for each column. For example, among employees who filed a claim (n=533), 49% of them were male.
• *χ² test p<0.05, H0: Employee demographic factors are independent of prior WC status.
• M, Mean; SD, standard deviation.
https://phys.libretexts.org/Bookshelves/Waves_and_Acoustics/Laboratory_Manual_-_The_Science_of_Sound_(Fiore)/08%3A_Basic_Electricity/8.04%3A_Section_4-
# 8.4: Procedure
## Part A: Series Circuit
1. The theoretical current of Figure 1 may be found by dividing the power supply voltage by the sum of the two resistors. This should be the same everywhere in the circuit. Compute this value and record it in the first column of Table 1.
2. Set the DC power supply to 10 volts. Construct the circuit of Figure 1.
3. To measure current, set the digital multimeter (DMM) to current mode. Break open the circuit and insert the DMM between the top of the power supply and the left end of the 1 k resistor. Record this current in Table 1.
4. Repeat step 3, inserting the DMM between the 1 k and the 2.2 k, and again between the bottom of the 2.2 k and the bottom of the power supply.
5. The theoretical voltage across each resistor may be found by multiplying the resistor value by the current recorded in Table 1. Compute the voltages and record them in the first column of Table 2.
6. Set the DMM to measure voltage and record the voltages measured across the 1 k and 2.2 k ohm resistors in Table 2.
## Part B: Parallel Circuit
7. The voltage across each resistor in Figure 2 should be the same as the source voltage. The two currents may be found by dividing this voltage by the resistor value. Compute these currents and record them in the first column of Table 3.
8. Set the DMM to measure current. Insert the DMM between the top wire and the top of the 1 k resistor. Record the resulting current in the second column of Table 3.
9. Repeat step 8, this time inserting the DMM between the top wire and the top of the 2.2 k resistor.
10. Repeat step 8, this time inserting the DMM to the immediate right of the top of the power supply, before the 1 k resistor.
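The theoretical values asked for in the steps above follow directly from Ohm's law. A quick sketch, assuming the 10 V supply and the 1 kΩ / 2.2 kΩ resistors of Figures 1 and 2:

```python
V = 10.0                  # supply voltage, volts
R1, R2 = 1000.0, 2200.0   # resistors, ohms

# Part A: series circuit - one current everywhere, the voltage splits
I_series = V / (R1 + R2)              # ~3.125 mA
V1, V2 = I_series * R1, I_series * R2

# Part B: parallel circuit - full supply voltage across each resistor
I1, I2 = V / R1, V / R2               # 10 mA and ~4.545 mA
I_total = I1 + I2                     # the source supplies the sum

print(f"{I_series*1e3:.3f} mA, {V1:.3f} V, {V2:.3f} V")
# -> 3.125 mA, 3.125 V, 6.875 V
print(f"{I1*1e3:.3f} mA, {I2*1e3:.3f} mA, {I_total*1e3:.3f} mA")
# -> 10.000 mA, 4.545 mA, 14.545 mA
```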
This page titled 8.4: Procedure is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by James M. Fiore via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
https://gmatclub.com/forum/the-figure-above-shows-a-circular-flower-bed-with-its-cente-144448.html
# The figure above shows a circular flower bed, with its center at O
Manager
Joined: 02 Dec 2012
Posts: 174
The figure above shows a circular flower bed, with its center at O [#permalink]
20 Dec 2012, 05:43
Attachment: Circle.png
The figure above shows a circular flower bed, with its center at O, surrounded by a circular path that is 3 feet wide. What is the area of the path, in square feet?
(A) $$25\pi$$
(B) $$38\pi$$
(C) $$55\pi$$
(D) $$57\pi$$
(E) $$64\pi$$
Math Expert
Joined: 02 Sep 2009
Posts: 52917
Re: The figure above shows a circular flower bed, with its center at O [#permalink]
20 Dec 2012, 05:48
The radius of the bigger circle is 8 + 3 = 11 feet, thus its area is $$\pi{r^2}=121\pi$$.
The area of the smaller circle is $$\pi{8^2}=64\pi$$.
The difference is $$121\pi-64\pi=57\pi$$.
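The annulus-area computation above is easy to verify numerically (a quick sketch):

```python
import math

r_inner = 8            # flower bed radius, feet
r_outer = r_inner + 3  # flower bed plus the 3-foot-wide path

coeff = r_outer**2 - r_inner**2   # area of the path is coeff * pi
print(coeff)  # -> 57, i.e. answer (D): 57*pi square feet
assert math.isclose(math.pi * coeff,
                    math.pi * r_outer**2 - math.pi * r_inner**2)
```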
Director
Joined: 24 Nov 2015
Posts: 508
Location: United States (LA)
The figure above shows a circular flower bed, with its center at O [#permalink]
26 Apr 2016, 14:15
Radius of total garden = 8 + 3 = 11 feet
Area of flower bed including path = $$\pi \cdot 11^2$$
Area of flower bed = $$\pi \cdot 8^2$$
Area of only the path = $$57\pi$$ sq. feet
Correct answer - D
Target Test Prep Representative
Status: Head GMAT Instructor
Affiliations: Target Test Prep
Joined: 04 Mar 2011
Posts: 2827
Re: The figure above shows a circular flower bed, with its center at O [#permalink]
27 Apr 2016, 07:44
In solving this problem we first must recognize that the flower bed is the right triangle with sides of y yards, x yards, and z yards. We are given that the area of the bed (which is the right triangle) is 24 square yards. Since we know that area of a triangle is ½ Base x Height, we can say:
24 = ½(xy)
48 = xy
We also know that x = y + 2, so substituting in y + 2 for x in the area equation we have:
48 = (y+2)y
48 = y^2 + 2y
y^2 + 2y – 48 = 0
(y + 8)(y – 6) = 0
y = -8 or y = 6
Since we cannot have a negative length, y = 6.
We can use the value for y to calculate the value of x.
x = y + 2
x = 6 + 2
x = 8
We can see that 6 and 8 represent two legs of the right triangle, and now we need to determine the length of z, which is the hypotenuse. Knowing that the length of one leg is 6 and the other leg is 8, we know that we have a 6-8-10 right triangle. Thus, the length of z is 10 yards.
If you didn't recognize that 6, 8, and 10 are the sides and hypotenuse of a right triangle, you would have to use the Pythagorean theorem to find the length of the hypotenuse: 6^2 + 8^2 = c^2 → 36 + 64 = c^2 → 100 = c^2. The positive square root of 100 is 10, and thus the value of z is 10.
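The algebra in this solution can be double-checked in a few lines (a quick sketch):

```python
import math

# Legs satisfy x = y + 2 and area x*y/2 = 24, i.e. y^2 + 2y - 48 = 0
disc = 2**2 + 4 * 48               # discriminant = 196
y = (-2 + math.sqrt(disc)) / 2     # keep only the positive root
x = y + 2
z = math.sqrt(x**2 + y**2)         # hypotenuse via the Pythagorean theorem

print(y, x, z)  # -> 6.0 8.0 10.0
```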
_________________
Jeffery Miller
Head of GMAT Instruction
GMAT Quant Self-Study Course
Intern
Joined: 20 Oct 2015
Posts: 37
Re: The figure above shows a circular flower bed, with its center at O [#permalink]
05 Jul 2016, 05:16
8 + 3 = 11
11^2 = 121
8^2 = 64
$$121\pi - 64\pi = 57\pi$$
http://community.worldheritage.org/articles/eng/Superconformal_algebra
# Superconformal algebra
### Superconformal algebra
In theoretical physics, the superconformal algebra is a graded Lie algebra, or superalgebra, that combines the conformal algebra and supersymmetry. It generates the superconformal group in some cases (in two Euclidean dimensions, the Lie superalgebra does not generate any Lie supergroup).
In two dimensions, the superconformal algebra is infinite-dimensional. In higher dimensions, there is a finite number of known examples of superconformal algebras.
## Superconformal algebra in 3+1D
According to,[1][2] the \mathcal{N}=1 superconformal algebra in 3+1D is given by the bosonic generators P_\mu, D, M_{\mu\nu}, K_\mu, the U(1) R-symmetry A, the SU(N) R-symmetry T^i_j and the fermionic generators Q^{\alpha i}, \overline{Q}^{\dot\alpha}_i, S^\alpha_i and \overline{S}^{\dot\alpha i}. \mu,\nu,\rho,\dots denote spacetime indices, \alpha,\beta,\dots left-handed Weyl spinor indices and \dot\alpha,\dot\beta,\dots right-handed Weyl spinor indices, and i,j,\dots the internal R-symmetry indices.
The Lie superbrackets are given by
$[M_{\mu\nu},M_{\rho\sigma}]=\eta_{\nu\rho}M_{\mu\sigma}-\eta_{\mu\rho}M_{\nu\sigma}+\eta_{\nu\sigma}M_{\rho\mu}-\eta_{\mu\sigma}M_{\rho\nu}$
$[M_{\mu\nu},P_\rho]=\eta_{\nu\rho}P_\mu-\eta_{\mu\rho}P_\nu$
$[M_{\mu\nu},K_\rho]=\eta_{\nu\rho}K_\mu-\eta_{\mu\rho}K_\nu$
$[M_{\mu\nu},D]=0$
$[D,P_\rho]=-P_\rho$
$[D,K_\rho]=+K_\rho$
$[P_\mu,K_\nu]=-2M_{\mu\nu}+2\eta_{\mu\nu}D$
$[K_\mu,K_\nu]=0$
$[P_\mu,P_\nu]=0$
This is the bosonic conformal algebra. Here, $\eta_{\mu\nu}$ is the Minkowski metric.
$[A,M]=[A,D]=[A,P]=[A,K]=0$
$[T,M]=[T,D]=[T,P]=[T,K]=0$
The bosonic conformal generators do not carry any R-charges.
$[A,Q]=-\frac{1}{2}Q$
$[A,\overline{Q}]=\frac{1}{2}\overline{Q}$
$[A,S]=\frac{1}{2}S$
$[A,\overline{S}]=-\frac{1}{2}\overline{S}$
$[T^i_j,Q_k]= - \delta^i_k Q_j$
$[T^i_j,\overline{Q}^k]= \delta^k_j \overline{Q}^i$
$[T^i_j,S^k]=\delta^k_j S^i$
$[T^i_j,\overline{S}_k]= - \delta^i_k \overline{S}_j$
But the fermionic generators do.
$[D,Q]=-\frac{1}{2}Q$
$[D,\overline{Q}]=-\frac{1}{2}\overline{Q}$
$[D,S]=\frac{1}{2}S$
$[D,\overline{S}]=\frac{1}{2}\overline{S}$
$[P,Q]=[P,\overline{Q}]=0$
$[K,S]=[K,\overline{S}]=0$
These relations tell us how the fermionic generators transform under the bosonic conformal transformations.
$\left\{ Q_{\alpha i}, \overline{Q}_{\dot{\beta}}^j \right\} = 2 \delta^j_i \sigma^{\mu}_{\alpha \dot{\beta}}P_\mu$
$\left\{ Q, Q \right\} = \left\{ \overline{Q}, \overline{Q} \right\} = 0$
$\left\{ S_{\alpha}^i, \overline{S}_{\dot{\beta}j} \right\} = 2 \delta^i_j \sigma^{\mu}_{\alpha \dot{\beta}}K_\mu$
$\left\{ S, S \right\} = \left\{ \overline{S}, \overline{S} \right\} = 0$
The anticommutator $\left\{ Q, S \right\}$ is nonvanishing and closes into the bosonic generators $M_{\mu\nu}$, $D$, $A$ and $T^i_j$, while $\left\{ Q, \overline{S} \right\} = \left\{ \overline{Q}, S \right\} = 0$.
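As a consistency check (a supplementary observation, not part of the original article), the dilatation weights assigned to the fermionic generators are compatible with the anticommutators above. Since $[D,Q]=-\tfrac{1}{2}Q$ and $[D,\overline{Q}]=-\tfrac{1}{2}\overline{Q}$, the graded Jacobi identity gives

```latex
[D,\{Q,\overline{Q}\}] = \{[D,Q],\overline{Q}\} + \{Q,[D,\overline{Q}]\}
                       = -\{Q,\overline{Q}\},
```

which matches $\{Q,\overline{Q}\}\propto P$ together with $[D,P_\rho]=-P_\rho$. The same check applied to $S$ and $\overline{S}$ (weight $+\tfrac{1}{2}$) reproduces $[D,K_\rho]=+K_\rho$ via $\{S,\overline{S}\}\propto K$.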
## Superconformal algebra in 2D
See super Virasoro algebra. There are two possible algebras: the Neveu-Schwarz algebra and the Ramond algebra.
## References
1. ^ West, Peter C. (1997). "Introduction to rigid supersymmetric theories". arXiv:hep-th/9805055.
2. ^ Gates, S. J.; Grisaru, Marcus T.; Rocek, M.; Siegel, W. (1983). "Superspace, or one thousand and one lessons in supersymmetry". Frontiers in Physics 58: 1–548. Bibcode:2001hep.th....8200G. arXiv:hep-th/0108200.
https://brilliant.org/problems/sine-of-sine-inversed/
# Sine of Sine inversed
Geometry Level 4
$\large \sin \left( \dfrac14 \arcsin \dfrac{\sqrt{63}}8 \right)$
If the value of the expression above can be expressed as $$\dfrac a{\sqrt b}$$, where $$a$$ and $$b$$ are coprime positive integers, find $$a+b$$.
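Not part of the original problem page, but the nested inverse-sine can be sanity-checked numerically (a sketch; since sin(theta) = sqrt(63)/8 gives cos(theta) = 1/8, two half-angle steps suggest the closed form 1/sqrt(8)):

```python
import math

# sin(theta) = sqrt(63)/8, hence cos(theta) = sqrt(1 - 63/64) = 1/8
theta = math.asin(math.sqrt(63) / 8)

# The expression in the problem
value = math.sin(theta / 4)

# Compare against the candidate closed form 1/sqrt(8)
print(abs(value - 1 / math.sqrt(8)))  # effectively zero
```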
http://pldml.icm.edu.pl/pldml/element/bwmeta1.element.bwnjournal-article-doi-10_4064-sm182-1-1
## Studia Mathematica
2007 | 182 | 1 | 1-27
## From restricted type to strong type estimates on quasi-Banach rearrangement invariant spaces
### Abstract
Let X be a quasi-Banach rearrangement invariant space and let T be an (ε,δ)-atomic operator for which a restricted type estimate of the form $\|T\chi_{E}\|_{X} \le D(|E|)$ for some positive function D and every measurable set E is known. We show that this estimate can be extended to the set of all positive functions $f \in L^{1}$ such that $\|f\|_{\infty} \le 1$, in the sense that $\|Tf\|_{X} \le D(\|f\|_{1})$. This inequality allows us to obtain strong type estimates for T on several classes of spaces as soon as some information about the galb of the space X is known. In this paper we consider the case of weighted Lorentz spaces $X = \Lambda^{q}(w)$ and their weak version.
### Authors
• Departament de Matemàtica Aplicada i Anàlisi, Universitat de Barcelona, E-08071 Barcelona, Spain
• Dipartimento di Matematica e Applicazioni, Università di Milano - Bicocca, 20126 Milano, Italy
• Department of Mathematics, University of Western Ontario, N6A 5B7, London, Canada
https://lilypond.org/doc/v2.23/Documentation/internals/mark_005fengraver
### 2.2.79 `Mark_engraver`
This engraver creates rehearsal marks, segno and coda marks, and section labels.
`Mark_engraver` creates marks, formats them, and places them vertically outside the set of staves given in the `stavesFound` context property.
If `Mark_engraver` is added or moved to another context, `Staff_collecting_engraver` also needs to be there so that marks appear at the intended Y location.
By default, `Mark_engravers` in multiple contexts create a common sequence of marks chosen by the `Score`-level `Mark_tracking_translator`. If independent sequences are desired, multiple `Mark_tracking_translators` must be used.
`codaMarkFormatter` (procedure)
A procedure that creates a coda mark (which in conventional D.S. al Coda form indicates the start of the alternative endings), taking as arguments the mark sequence number and the context. It should return a markup object.
`currentPerformanceMarkEvent` (stream event)
The coda, section, or segno mark event selected by `Mark_tracking_translator` for engraving by `Mark_engraver`.
`currentRehearsalMarkEvent` (stream event)
The ad-hoc or rehearsal mark event selected by `Mark_tracking_translator` for engraving by `Mark_engraver`.
`rehearsalMarkFormatter` (procedure)
A procedure taking as arguments the context and the sequence number of the rehearsal mark. It should return the formatted mark as a markup object.
`segnoMarkFormatter` (procedure)
A procedure that creates a segno (which conventionally indicates the start of a repeated section), taking as arguments the mark sequence number and the context. It should return a markup object.
`stavesFound` (list of grobs)
A list of all staff-symbols found.
This engraver creates the following layout object(s): `CodaMark`, `RehearsalMark`, `SectionLabel` and `SegnoMark`.
`Mark_engraver` is part of the following context(s) in `\layout`: `ChordGridScore`, `Score` and `StandaloneRhythmScore`.
https://gmatclub.com/forum/if-the-volume-of-a-box-is-1-463-000-cubic-millimetres-what-is-the-vol-222493.html
# If the volume of a Box is 1,463,000 cubic millimetres, what is the vol
Current Student
Joined: 12 Nov 2015
Posts: 56
Location: Uruguay
Concentration: General Management
Schools: Goizueta '19 (A)
GMAT 1: 610 Q41 V32
GMAT 2: 620 Q45 V31
GMAT 3: 640 Q46 V32
GPA: 3.97
If the volume of a Box is 1,463,000 cubic millimetres, what is the vol [#permalink]
### Show Tags
Updated on: 05 Jun 2018, 07:46
If the volume of a Box is 1,463,000 cubic millimetres, what is the volume of the box in cubic meters? (1 millimeter = 0.001 meter)
A) 14.63
B) 1.463
C) 0.1463
D) 0.01463
E) 0.001463
Originally posted by Avigano on 23 Jul 2016, 13:27.
Last edited by Bunuel on 05 Jun 2018, 07:46, edited 3 times in total.
Target Test Prep Representative
Affiliations: Target Test Prep
Joined: 04 Mar 2011
Posts: 2823
Re: If the volume of a Box is 1,463,000 cubic millimetres, what is the vol [#permalink]
25 Oct 2016, 17:04
Avigano wrote:
If the volume of a Box is 1,463,000 cubic millimetres, what is the volume of the box in cubic meters? (1 millimeter = 0.001 meter)
A) 14.63
B) 1.463
C) 0.1463
D) 0.01463
E) 0.001463
Since we are converting from cubic millimeters to cubic meters, we have to adjust the conversion.
(1 millimeter)^3 = (0.001 meter)^3
(1 millimeter)^3 = 0.000000001 meter^3
Thus, 1,463,000 cubic millimetres =
0.000000001 x 1,463,000 = 0.001463 cubic meters
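The same arithmetic can be double-checked with a short script (a sketch, not part of the original post):

```python
mm3 = 1_463_000          # volume of the box in cubic millimetres
mm_to_m = 0.001          # 1 millimetre = 0.001 metre

# Cube the linear conversion factor: 1 cubic mm = 10^-9 cubic m
m3 = mm3 * mm_to_m ** 3
print(m3)                # approximately 0.001463
```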
Manager
Joined: 10 May 2014
Posts: 138
Re: If the volume of a Box is 1,463,000 cubic millimetres, what is the vol [#permalink]
24 Oct 2016, 17:06
The trick here maybe to just drop the ugly numbers and transform 1,463,000 cubic mm into 1,000,000 cubic mm just to understand the logic behind.
If volume of a cube equals $$10^6$$ cubic mm and the formula is side * side * side, we can infer that the side measures $$10^2$$ mm, or 100 mm.
And 100 mm = 0.1 mt. Therefore the side measures 0.1 mt.
In volume, this would be represented as 0.1 mt * 0.1 mt * 0.1 mt, which yields 0.001 cubic mt.
The answer choice should have 2 zeroes between the point and the first decimal different to zero.
##### General Discussion
Math Expert
Joined: 02 Sep 2009
Posts: 55272
Re: If the volume of a Box is 1,463,000 cubic millimetres, what is the vol [#permalink]
23 Jul 2016, 13:32
Avigano wrote:
If the volume of a Box is 1,463,000 cubic millimetres, what is the volume of the box in cubic meters? (1 millimeter = 0,001 meter)
A) 14,63
B) 1,463
C) 0,1463
D) 0,01463
E) 0,001463
Check other Conversion problems to practice from our Special Questions Directory.
Manager
Status: 2 months to go
Joined: 11 Oct 2015
Posts: 110
GMAT 1: 730 Q49 V40
GPA: 3.8
Re: If the volume of a Box is 1,463,000 cubic millimetres, what is the vol [#permalink]
23 Jul 2016, 16:28
Well, square meters or cubic meters are simply m·m, so our nice "a lot of millions" $$mm^{3}$$ means $$x_{mm}\cdot y_{mm}\cdot z_{mm}$$, and as each of these mm has a conversion of $$\frac{1}{1000}$$, the whole pack will need a reduction of $$10^{-9}$$ (9 zeros) to become $$m^{3}$$.
Answer E.
Hope it helps. :)
Den
Director
Joined: 27 May 2012
Posts: 787
Re: If the volume of a Box is 1,463,000 cubic millimetres, what is the vol [#permalink]
05 Jun 2018, 06:54
Avigano wrote:
If the volume of a Box is 1,463,000 cubic millimetres, what is the volume of the box in cubic meters? (1 millimeter = 0,001 meter)
A) 14,63
B) 1,463
C) 0,1463
D) 0,01463
E) 0,001463
Commas should be converted to decimal points for a better understanding of the problem.
VP
Joined: 09 Mar 2016
Posts: 1283
If the volume of a Box is 1,463,000 cubic millimetres, what is the vol [#permalink]
10 Jun 2018, 06:11
minwoswoh wrote:
The trick here maybe to just drop the ugly numbers and transform 1,463,000 cubic mm into 1,000,000 cubic mm just to understand the logic behind.
If volume of a cube equals $$10^6$$ cubic mm and the formula is side * side * side, we can infer that the side measures $$10^2$$ mm, or 100 mm.
And 100 mm = 0.1 mt. Therefore the side measures 0.1 mt.
In volume, this would be represented as 0.1 mt * 0.1 mt * 0.1 mt, which yields 0.001 cubic mt.
The answer choice should have 2 zeroes between the point and the first decimal different to zero.
Hi pushpitkc
I don't get the logic of the sentence in red for some reason.
If 1 mm is 0.001 cubic m, then shouldn't we divide 1,463,000 by 0.001?
Senior PS Moderator
Joined: 26 Feb 2016
Posts: 3386
Location: India
GPA: 3.12
Re: If the volume of a Box is 1,463,000 cubic millimetres, what is the vol [#permalink]
10 Jun 2018, 08:13
dave13 wrote:
minwoswoh wrote:
The trick here maybe to just drop the ugly numbers and transform 1,463,000 cubic mm into 1,000,000 cubic mm just to understand the logic behind.
If volume of a cube equals $$10^6$$ cubic mm and the formula is side * side * side, we can infer that the side measures $$10^2$$ mm, or 100 mm.
And 100 mm = 0.1 mt. Therefore the side measures 0.1 mt.
In volume, this would be represented as 0.1 mt * 0.1 mt * 0.1 mt, which yields 0.001 cubic mt.
The answer choice should have 2 zeroes between the point and the first decimal different to zero.
Hi pushpitkc
i dont get for some reason logic of sentence in red...
if 1 mm is 0.001 cubic mt. then
then shouldnt we divide 1463000 by 0,001
Hi dave13
1 mm is 0.001 mt
1 cubic mm will be $$(0.001)^3 = 0.000000001$$ cubic meter
Since we are given the volume of the box is 1,463,000 cubic millimeters, to convert into
cubic meter, we need to multiply it by 0.000000001.
1,463,000 cubic mm = 1,463,000*0.000000001 = 0.001463 cubic meter.
Hope this helps you!
Manager
Joined: 17 Jul 2016
Posts: 56
If the volume of a Box is 1,463,000 cubic millimetres, what is the vol [#permalink]
22 Mar 2019, 11:24
We know that when you multiply decimal numbers, you multiply the digits and then shift the decimal point to the left by the total number of decimal places in the factors.
We are given that 1 millimeter = 0.001 meter.
So 1 cubic millimeter = (0.001)(0.001)(0.001) = 0.000000001 cubic meters.
It follows that converting a value from cubic millimeters to cubic meters shifts the decimal point 9 places to the left.
We can see that 7 shifts get us .1463000, so 9 shifts get us .001463000 cubic meters.
Intern
Status: wake up with a purpose
Joined: 24 Feb 2017
Posts: 33
Concentration: Accounting, Entrepreneurship
Re: If the volume of a Box is 1,463,000 cubic millimetres, what is the vol [#permalink]
06 Apr 2019, 21:09
Don't we need to convert twice, such as 1,463,000 cubic millimeters to millimeters and 0.001 meter to cubic meters?
I mean, we only convert the latter one. Could you give a more thorough solution? Bunuel
https://cs.stackexchange.com/questions/77513/solving-consensus-for-a-known-number-of-processes
# Solving consensus for a known number of processes
I was given an object with exactly 2 operations test-and-reset that sets the value of the object to 0 only if it was previously set to 1 and does not return a value and fetch-and-add(v) which adds v to the value of the object and returns the previous value.
I am almost sure the consensus number of this object is 2 and I think I found an algorithm for it (I should mention the problem is binary consensus):
if input = 0
test-and-reset(obj)
if (prev != 2)
decide(0)
else
decide(1)
else
if (prev = 0 or prev = 3)
decide(0)
else
decide(1)
The problem was to find a wait-free consensus algorithm for n processes using any number of these objects (n is known).
The problem is that I am not sure if I am right (I'm pretty sure, and this holds me back on this idea). And if I am right, I have no idea how to use this to solve the problem.
Thanks
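A minimal Python sketch (not from the original post) of the shared object's two operations, useful for hand-simulating interleavings of a small number of processes:

```python
class SharedObject:
    """Sequential model of the object with test-and-reset and fetch-and-add."""

    def __init__(self, value=0):
        self.value = value

    def test_and_reset(self):
        # Sets the value to 0 only if it was previously 1; returns nothing.
        if self.value == 1:
            self.value = 0

    def fetch_and_add(self, v):
        # Adds v to the value and returns the previous value.
        prev = self.value
        self.value += v
        return prev
```

With this model, every interleaving of two processes' operation calls can be enumerated to check whether both processes always decide the same value.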
https://cs.stackexchange.com/questions/130767/loop-dependencies
# Loop Dependencies
Say I have loop like this,
for(1 to 10)
{
A[i][j]=A[i][j];
}
What kind of dependency is this? RAW or WAR?
According to me it is none, since there is no problem whatever we do earlier. Even if we write or even if we read earlier, there should not be any problem. So what kind of dependency do we call these?
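For contrast, here is a small illustration (not from the original question) of statements that do carry RAW and WAR dependencies:

```python
a = [0, 0]

# RAW (read-after-write, a "true" dependency):
a[0] = 1        # write
x = a[0]        # this read must observe the write above

# WAR (write-after-read, an "anti" dependency):
y = a[1]        # read the old value
a[1] = 2        # this write must not be reordered before the read

# By contrast, a statement like A[i][j] = A[i][j] reads and writes the
# same location with the same value, so reordering changes nothing.
print(x, y)     # 1 0
```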
• That code is equivalent to a no-op (assuming $A[i][j]$ exists). – Steven Oct 3 '20 at 15:39
https://discuss.tlapl.us/msg04712.html
# Re: [tlaplus] Generating records like functions
• From: Jeremy Wright <jeremy@xxxxxxxxxxxx>
• Date: Mon, 3 Jan 2022 16:40:37 -0700
Hello jwnhy,
Do you mean something like this?
f == [x |-> {0, 1}, y |-> {4, 5}]
On Mon, Jan 3, 2022 at 4:30 PM jwnhy <qq799433746@xxxxxxxxx> wrote:
Hello, I'm new to TLA+ and trying to learn it.
I know that I can generate a set of all possible functions by writing
[{"A", "B"} -> {0, 1}] \* all functions from {"A", "B"} to {0, 1}
I'm wondering if there is a similar way for records?
For example, if I have a record that most of its fields is BOOLEAN, I'm wondering if I can write something like
mstatus \in [{SD, TSR, TW, TVM}: BOOLEAN] \* THIS DOES NOT WORK
Or is there a way to specify a function with different range for different domain?
e.g: f["x"] \in {0,1} f["y"] \in {4, 5}
Thanks,
jwnhy.
https://support.bioconductor.org/p/120550/
Question: concatenating a proteome to use Biostrings pairwiseAlignment
Juliet Hannah360 (United States) wrote:
I have read in a proteome using
library("Biostrings")
infile <- "proteome.fasta"
proteome <- readAAStringSet(infile)
I would like to align a list of peptides using Biostrings::pairwiseAlignment so that I can use Biostrings::pid.
To do this, it seems I need to convert/concat everything in proteome into one sequence. xscat did not seem to be what I needed.
Is this something I can do in Biostrings or is there a better strategy. Thanks!
https://physics.stackexchange.com/questions/467747/how-to-correctly-differentiate-the-lienard-wiechert-four-vector-potential-to-get
# How to correctly differentiate the Lienard-Wiechert four-vector potential to get the EM tensor?
The retarded 4-vector potential for a moving charge is given by $$A^\alpha = \left. \frac {eV^\alpha(\tau)}{V\cdot[x-r(\tau)]} \right|_{\tau = \tau_0}$$ where $$e$$ is the charge, $$V$$ the four-velocity, $$x$$ the field point, $$r$$ the retarded position of the charge at its retarded proper time $$\tau = \tau_0$$.
Using the light-cone condition, I can convert $$dx^\alpha$$ to $$d\tau$$: $$[x - r(\tau_0)]^2=0\\ dx^{\alpha}(x-r)^{\alpha}-dr\cdot(x-r) = 0\\ dx^{\alpha}(x-r)^{\alpha}-d{\tau}(x_0-r_0)= 0\\ dx^{\alpha}=\frac{d\tau(x_0-r_0)}{(x-r)^{\alpha}}$$
For the EM tensor I hence get: $$F^{\alpha\beta}= \partial^{\alpha}A^{\beta}-\partial^{\beta}A^{\alpha}= \frac{e(x-r)^{\alpha}}{(x-r)^0}\frac{d}{d\tau} \left [\frac{V^{\beta}}{V\cdot(x-r)} \right] - \frac{e(x-r)^{\beta}}{(x-r)^0}\frac{d}{d\tau} \left [\frac{V^{\alpha}}{V\cdot(x-r)} \right]$$
By convention $$\frac{d}{d\tau}[\;]$$ is taken with the field point $$x$$ held constant which isn't the case here. I therefore need to add $$\frac{e(x-r)^{i}}{(x-r)^0}\frac{dx^i}{d\tau}\frac{\partial}{\partial x^i}[\;]=\frac{e\partial}{\partial x^i}[\;]$$ to the above expression:
\begin{align} F^{\alpha\beta}&= \frac{e(x-r)^{\alpha}}{(x-r)^0}\frac{d}{d\tau} \left [\frac{V^{\beta}}{V\cdot(x-r)} \right] + \frac{e\partial}{\partial x^\alpha} \left [\frac{V^{\beta}}{V\cdot(x-r)} \right]\\ &- \frac{e(x-r)^{\beta}}{(x-r)^0}\frac{d}{d\tau} \left [\frac{V^{\alpha}}{V\cdot(x-r)} \right] - \frac{e\partial}{\partial x^\beta} \left [\frac{V^{\alpha}}{V\cdot(x-r)} \right] \end{align}
However, Jackson$$^1$$ calculates the EM tensor as $$F^{\alpha\beta}= \partial^{\alpha}A^{\beta}-\partial^{\beta}A^{\alpha}= \frac{e}{V\cdot(x-r)}\frac{d}{d\tau} \left [\frac{(x-r)^{\alpha}V^{\beta}- (x-r)^{\beta}V^{\alpha}}{V\cdot(x-r)} \right]$$
What do I need to do to get this expression also?
1 Classical Electrodynamics, Jackson, 3rd edition page 663
• Use: $A(x)^\alpha=2e\int d\tau V^\alpha(\tau)\theta[x_{0}-r_{0}(\tau)]\delta\{[x-r(\tau)]^2\}$ instead - then $\partial^{\alpha}A(x)^\beta=2e\int d\tau V^\beta(\tau)\theta[x_{0}-r_{0}(\tau)]\partial^{\alpha}\delta\{[x-r(\tau)]^2\}$. Using $\partial^{\alpha}\delta[f]=\frac{(x-r)^{\alpha}}{V\cdot(x-r)}\frac{d}{d\tau}\delta[f]$ where $f=[x-r(\tau)]^2$ integrate by parts and $\theta$ will vanish.Result will be first term of $F^{\alpha\beta}$. You will need to use the result $\frac{d}{d\tau}[x-r(\tau)]^2=-2[x-r(\tau)]_{\beta}V^{\beta}(\tau)$ – Cinaed Simson Apr 28 at 3:04
• @CinaedSimson yes, there are other ways of doing it but I'm interested in someone pointing out the flaw in my working. – Physiks lover May 2 at 23:42
• $[x - r(\tau_0)]^2=0$ is the light cone constraint which enforces the retardation $x_{0} > r_{0}(\tau_{0})$. Since you wrote down the constraint, I presumed you were trying to evaluate the integral form - which contains the term $\delta {\{[x-r(\tau)]^2\}}$. So take your expression for $A^\alpha =\left. \frac {eV^\alpha(\tau)}{V\cdot[x-r(\tau)]}\right|_{\tau = \tau_0}$ and calculate $\partial^{\beta}A^{\alpha}$. – Cinaed Simson May 3 at 21:53
https://solvedlib.com/martin-company-applies-manufacturing-overhead,130899
# Martin Company applies manufacturing overhead based on direct machine hours. Martin estimates manufacturing overhead and labor...
###### Question:
Martin Company applies manufacturing overhead based on direct machine hours. Martin's estimates of manufacturing overhead and labor for the year are as follows:

- Estimated manufacturing overhead: $120,000
- Estimated direct labor hours: 5,000
- Estimated direct labor costs: $50,000
- Estimated machine hours: 6,000

The actual manufacturing overhead cost and the actual activities for Job 201 are as follows:

- Actual manufacturing overhead: $12,000
- Actual direct labor hours incurred: 300
- Actual direct labor costs: $9,800
- Actual machine hours: 400

Instructions: (a) Compute the predetermined overhead rate. Please show your calculation. (b) What is the amount of manufacturing overhead applied to Job 201? Please show your calculation.
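A worked sketch of both calculations in plain Python (the figures come from the question; since overhead is applied on machine hours, the predetermined rate divides estimated overhead by estimated machine hours):

```python
# (a) Predetermined overhead rate = estimated overhead / estimated machine hours
estimated_overhead = 120_000      # dollars, from the question
estimated_machine_hours = 6_000

predetermined_rate = estimated_overhead / estimated_machine_hours
print(predetermined_rate)         # 20.0 dollars per machine hour

# (b) Overhead applied to Job 201 = rate x actual machine hours on the job
actual_machine_hours = 400
applied_overhead = predetermined_rate * actual_machine_hours
print(applied_overhead)           # 8000.0 dollars
```

Note that the actual overhead of $12,000 and the labor figures are not used: once the application base is machine hours, only the estimated totals (for the rate) and the job's actual machine hours (for the application) enter the calculation.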
https://www.practically.com/studymaterial/blog/docs/class-9th/chemistry/matter-in-our-surroundings/
# Matter in our Surroundings
1.1 INTRODUCTION
There are a large number of things around us which we see and feel. For example, we can see a book in front of us. A book occupies some space; the space occupied by the book is called its volume. If we pick up the book, we can also feel its weight, so we conclude that the book has some mass. We cannot see the air around us, yet if we fill a balloon with air and then weigh it carefully, we will find that air not only occupies space (bounded by the balloon) but also has mass.
Things like a book and air are examples of matter. Other examples of matter are wood, cloth, paper, ice, steel, water, oil etc. Further, that matter offers resistance is borne out by the fact that we cannot displace an object from one place to another without applying some force. We have to apply force to pick up a stone from the ground. Thus, matter can be defined as follows.
DEFINITION
Anything that occupies space and has mass is called Matter.
Air and water, gold and silver, table and chair, milk and oil etc., are all different kinds of matter, because all of them occupy space and have mass.
Characteristics of Matter
i) All matter is composed of particles. These particles have intermolecular spaces between them and attract each other with a force and are in continuous random motion.
ii) All material bodies have weight and hence have mass.
iii) All material bodies occupy space.
1.2 EVIDENCES OF PHYSICAL NATURE OF MATTER
I. Particle Nature of Matter
Most of the evidences for the existence of particles in matter and their motion come from the experiments of diffusion and Brownian motion.
EVIDENCE-1
Dissolving a solid in a liquid: Take a beaker. Fill half of it with water. Mark the level of water in the beaker. Add some sugar to the water and dissolve it with the help of a glass rod. You will see that the sugar has disappeared, but there is no change in the level of water.
Conclusion: This can be explained by assuming that matter is not continuous; rather, it is made up of particles. Sugar contains a large number of separate particles. When dissolved in water, these particles occupy the vacant spaces between the particles of water. Therefore, the water level in the beaker did not rise. Had sugar been continuous, like a block of wood, the water level in the beaker would have risen.
EVIDENCE-2
Movement of pollen grains in water: The best evidence for the existence and movement of particles in liquids was given by Robert Brown in 1827. Robert Brown suspended extremely small pollen grains in water. On looking through the microscope, it was found that the pollen grains were moving rapidly throughout the water in a very irregular, zig-zag way.
Conclusion: Water is made up of tiny particles which are moving very fast. (The water molecules themselves are invisible under the microscope because they are very, very small.) The pollen grains move on the surface of the water because they are constantly being hit by the fast-moving particles of water. So, though the water particles (or water molecules) are too small to be seen, their effect on the pollen grains can be seen clearly. The random motion of visible particles (pollen grains) caused by the much smaller invisible particles of water is an example of Brownian motion (after the name of the scientist Robert Brown, who first observed this phenomenon).
Brownian motion: The zig-zag motion (in a very irregular way) of particles is known as Brownian motion. Brownian motion can also be observed in gases. Sometimes, when a beam of light enters a room, we can see tiny dust particles suspended in air moving rapidly in a very random way. This is an example of Brownian motion in gases. The tiny dust particles move here and there because they are constantly hit by the fast-moving particles of air.
The existence of Brownian motion gives two conclusions.
• Matter is made up of tiny particles.
• Particles of matter are constantly moving.
Note: Brownian motion increases on increasing the temperature.
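The zig-zag (Brownian) motion described above can be illustrated with a simple two-dimensional random walk. This is only a sketch of the idea, not a physical simulation; the unit step length, step count, and function name are arbitrary choices for illustration:

```python
import math
import random

def random_walk_2d(steps, seed=0):
    """Trace a zig-zag path: at each step the particle is knocked
    one unit in a random direction, like a pollen grain being hit
    by fast-moving water molecules."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        angle = rng.uniform(0.0, 2.0 * math.pi)  # random direction
        x += math.cos(angle)
        y += math.sin(angle)
        path.append((x, y))
    return path

path = random_walk_2d(1000)
displacement = math.hypot(*path[-1])
# After 1000 unit-length steps, the particle ends up much closer to the
# start than 1000 units away: the motion is irregular, not directed.
print(len(path), displacement)
```

Raising the "temperature" in this picture would mean faster, more frequent kicks, which is why Brownian motion speeds up on heating.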
II. Characteristics of Particles of Matter:
The important characteristics of particles of matter are the following:
(i) The particles of matter are very, very small:
Experiment: Potassium permanganate is a purple coloured solid substance and water is a liquid. We will take 2-3 crystals of potassium permanganate and dissolve them in 100 ml of water. Now we will take out 10 ml of this solution and put it into another 90 ml of clear water. We will keep diluting the solution like this 5 to 8 times.
Conclusion: This experiment shows that just a few crystals of potassium permanganate can colour a large volume of water. It means that a crystal of $KMnO_4$ is made of millions of tiny particles, which keep dividing themselves into smaller and smaller particles.
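The arithmetic behind this serial dilution: each step carries 10 ml of solution into 90 ml of fresh water, so the concentration drops 10-fold per step, and 5 to 8 steps dilute the original solution by a factor of 10⁵ to 10⁸. A quick check (concentration is in arbitrary units; the function name is just for illustration):

```python
def concentration_after(steps, c0=1.0):
    """Serial dilution: each step moves 10 ml of solution into 90 ml of
    fresh water (100 ml total), so concentration falls by a factor of 10."""
    c = c0
    for _ in range(steps):
        c *= 10 / (10 + 90)   # transferred volume / total volume
    return c

# After 8 dilutions the solution is 10**8 times weaker, yet potassium
# permanganate may still colour the water -- evidence that a crystal
# contains an enormous number of tiny particles.
print(concentration_after(8))
```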
(ii) The particles of matter have spaces between them:
Experiment: We take about 100 ml of water in a beaker and mark the level of water. We will also take 20g of sugar. Now we will dissolve the sugar by stirring and we get a sugar solution.
Conclusion: The level of sugar solution in the beaker is at the same mark where water level was initially in the beaker.
It shows that particles of sugar go into the spaces between the molecules of water, due to which there is no change in the volume. Thus, from this experiment it can be concluded that the molecules in water are not tightly packed; they have spaces between them.
(iii) The particles of matter are constantly moving:
This property can be explained by diffusion.
Diffusion: Intermixing of particles of two different types of matter on their own is called diffusion. It is the phenomenon in which molecules or particles move from a region of their higher concentration towards a region of their lower concentration.
For example, when a perfume bottle is opened in one corner of a room, its fragrance spreads in the whole room quickly. This happens because the particles of perfume move rapidly in all directions and mix with the moving particles of air in the room.
Experiment: We take a gas jar full of bromine vapours and invert another gas jar containing air over it, then after some time, the red-brown vapours of bromine spread out into the upper gas jar containing air.
Conclusion: In this way, the upper gas jar which contains colourless air in it, also turns red-brown. The mixing is due to the diffusion of bromine vapours (or bromine gas) into air.
Note: The particles of matter possess kinetic energy and so are constantly moving. As the temperature rises, particles move faster.
(iv) Particles of matter attract each other: There are some forces of attraction between the particles of matter which bind them together.
Cohesive Forces: The forces of attraction between the particles of same substances are known as cohesive forces.
Adhesive Forces: The forces of attraction between the particles of different substances are known as adhesive forces.
Example: Take a piece of chalk, a cube of ice and an iron nail and beat them with a hammer. Chalk will easily break into smaller pieces, but more force will be required to break a cube of ice and iron nail.
Reason: The inter-particle force of attraction is weak in chalk, a bit stronger in an ice cube, and very strong in iron.
1.3 KINETIC THEORY OF MATTER
The Kinetic Theory is a good way to relate the ‘micro world’ with the ‘macro world.’ Following are the postulates of Kinetic Theory of matter.
i) Composition of matter: All matter is made of atoms, the smallest bit of each element or molecules.
ii) Particles in motion: The particles of matter are in a state of unending motion. The motion of atoms or molecules can be translational (movement in a line), vibrational (atoms or molecules vibrating against one another or along a bond), or rotational (individual atoms or groups of atoms rotating). This motion gives the particles kinetic energy. (Kinetic energy is the energy possessed by particles in motion.)
iii) Variation in kinetic energy: With the supply of heat energy (thermal energy) to matter, the kinetic energy of particles increases, i.e., they start moving more vigorously. The reverse happens if the matter is cooled, i.e., heat energy is taken away.
iv) Adhesive and cohesive forces: The particles of matter attract each other with a force. This force is called cohesive force if the particles are of the same kind, and adhesive force if the particles are of different kinds.
v) Inter-particular force of attraction: There exists a force between these particles known as inter-particular force of attraction. This force decreases if the distance between them increases and vice versa.
1.4 CLASSIFICATION OF MATTER
Physical Classification Of Matter
We know that matter is composed of extremely small particles. Based on the arrangement of these particles, matter is mainly divided into three types: solids, liquids and gases. These are also called the physical states of matter. This classification is also based on differences in certain physical properties, namely mass, volume, shape, rigidity, density and arrangement of particles. To understand the properties of solids, liquids and gases, we need to know the kinetic theory of matter.
1.5 SOLIDS
A solid is a form of matter characterized by resistance to deformation and to changes of volume.
At the microscopic scale, solids have the following properties.
Properties
i) Shape and volume: Solids have definite shape and volume.
Reason: The definite shape and volume of solids can be explained on the basis of the kinetic theory of matter. In the case of solids, the kinetic energy of the molecules is least and the force of attraction between the molecules is highest. The molecules of a solid can only vibrate about their mean positions and cannot migrate from one position to another. Thus, solids have definite shape and volume.
ii) Rigidity: Rigid means 'unbending' or inflexible. A solid is a rigid form of matter, so it maintains its shape when subjected to an outside force. Solids are generally rigid.
Reason: The constituent molecules in solids have fixed positions in space relative to each other and cannot easily be displaced by an applied force. This accounts for the solid's rigidity.
(In some solids like rubber, the shape changes on the application of external force. It regains its shape, on removal of the external force.)
iii) Free surfaces: Solids have several free surfaces.
iv) Intermolecular spaces and forces: In solids, the molecules are very close to each other. Thus, they have minimum intermolecular spaces. Due to this, they have large intermolecular forces of attraction.
v) Density: The density of solids is generally high. This is due to the compact arrangement of the molecules.
vi) Effect of heating: Solids expand on heating, but the dimensions of solids do not increase or decrease in large proportion on heating or cooling.
vii) Diffusion: When two solids are kept in contact with one another, they do not mix with each other, i.e., they do not diffuse.
Some examples of solids: All metals, wood and wood products; rocks of various kinds, ice, etc.
1.6 LIQUIDS
A liquid is a fluid in which the particles are loosely arranged and can freely form a distinct surface at the boundaries of its bulk material. At the microscopic scale, liquids exhibit the following properties.
Properties
i) Shape and volume: Liquids have definite mass and volume. They do not have definite shape, but take the shape of the container in which they are present.
Reason: The kinetic energy of the molecules of a liquid is larger than that in solids, and so is the distance between the molecules. Thus, the attractive forces between the molecules of a liquid are small compared to those in solids. Therefore, the molecules of a liquid are free to move about within the liquid, and hence the liquid can easily take the shape of the container in which it is present.
However, the volume of the liquid does not change, because, the molecules do not leave the liquid.
ii) Intermolecular spaces and forces: In liquids, the distance between the molecules is large compared to that of a solid. Thus, they have greater intermolecular spaces than in solids. Due to this they have less intermolecular forces of attraction than in solids.
iii) Fluidity: The force of attraction between the molecules of liquids is less than solids. Thus, liquids can flow from one place to another.
iv) Rigidity: Liquids are not as rigid as solids. They can be slightly compressed.
Reason: The intermolecular space in liquids is larger than in solids.
v) Free surfaces: Liquids have only one free surface.
vi) Density: The density of the liquids is generally less than that of solids.
vii) Effect of heating: Liquids expand on heating. They expand far more than solids on heating and contract far more than solids on cooling.
viii) Diffusion: The particles of two different liquids can diffuse into one another, depending upon the nature of the liquids. For example, milk and water particles diffuse into one another, but the particles of oil and water do not.
ix) Some examples of liquids: Water, alcohol, benzene, milk, mercury, kerosene oil, etc.
1.7 GASES
Gas is the most energetic phase of matter commonly found on earth. The particles of gas, either atoms or molecules, have too much energy to settle down attached to each other or to come close to other particles to be attracted by them.
At the microscopic scale, gases have the following properties.
Properties
i) Shape and volume: Gases have neither definite shape nor definite volume. They occupy the entire space of a given vessel in which they are enclosed.
Reason: The intermolecular distances between the molecules of a gas are very large with the result that the force of attraction between the molecules is negligible. Moreover, the molecules have a very large kinetic energy. Thus, the molecules are practically free to move in any direction and hence, can fill any space. Thus, the gases have neither definite shape nor definite volume.
ii) Definite mass: A gas contained in a vessel has a definite mass.
iii) Intermolecular spaces and forces: In gases, the distance between the molecules is very large compared to that in solids and liquids. Thus, they have the greatest intermolecular spaces of the three states and, consequently, the least intermolecular forces of attraction.
iv) Compressibility: Gases are highly compressible. The high compressibility of gases is due to the fact that they have large intermolecular spaces. On applying pressure, these molecules simply come close to each other, thereby decreasing the volume of a gas.
v) Expansibility: The volume of a given mass of a gas can be increased either by decreasing pressure or by increasing temperature. When the pressure on an enclosed gas is reduced, its molecules simply move apart, thereby increasing the intermolecular spaces and hence the volume. When a gas enclosed in a container is heated, the kinetic energy of its molecules increases. Thus, the molecules move faster and farther from each other. This in turn results in an increase in volume.
vi) Free surfaces: Gases have no free surfaces.
vii) Density: Gases occupy an extremely large volume as compared to solids and liquids of the same mass, because the intermolecular spaces between gas molecules are large. Thus, the mass per unit volume of a gas is very small as compared to liquids and solids. This accounts for the low density of gases.
viii) Diffusion: Gases have a very high rate of intermixing and diffusion. The intermolecular spaces in a gas are very large. Thus, when two gases are brought into contact with each other, their molecules simply move into one another's intermolecular spaces, thereby forming a homogeneous mixture.
ix) Exertion of pressure: The molecules of a gas constantly bombard the sides of the containing vessel and hence exert some force per unit area on the sides of the container, which is commonly called pressure. It has been observed that, at a given temperature, the number of molecules striking the walls of the containing vessel per unit time per unit area is the same everywhere. Thus, we can say that gases exert the same pressure in all directions.
1.8 FOURTH AND FIFTH STATES OF MATTER
Fourth State of Matter
The fourth state of matter is plasma. Plasma is an ionized gas, into which sufficient energy is provided to free electrons from atoms or molecules and to allow species, ions and electrons, to coexist.
In effect, plasma is a cloud of ions and free electrons, where the electrons have come loose from their respective atoms and molecules, giving the plasma the ability to act as a whole rather than as a bunch of atoms. (Electrons, protons and neutrons are subatomic particles.) Plasma is the most common state of matter in the universe, comprising more than 99% of our visible universe, much of which is not directly visible.
Plasma occurs naturally and makes up the stuff of our sun, the core of stars and occurs in quasars, x-ray beam emitting pulsars, and supernovas. On earth, plasma is naturally occurring in flames, lightning and the auroras.
Fifth State of Matter
The collapse of atoms into a single quantum state is known as Bose condensation; the resulting Bose-Einstein condensate is now considered the fifth state of matter.
It occurs at ultra-low temperature, close to the point that the atoms are not moving at all.
A Bose-Einstein condensate is a gaseous superfluid phase formed by atoms cooled to temperatures very near to absolute zero (zero kelvin, or -273 °C).
The first such condensate was produced by Eric Cornell and Carl Wieman in 1995 at the University of Colorado at Boulder, using a gas of rubidium atoms cooled to 170 nanokelvins (1 nano = $10^{-9}$). Under such conditions, a large fraction of the atoms collapse into the lowest quantum state, producing a superfluid.
This phenomenon was predicted in the 1920s by Satyendra Nath Bose and Albert Einstein, based on Bose’s work on the statistical mechanics of photons, which was then formalized and generalized by Einstein.
Comparison of the characteristics of three states of matter
| Property | Solid state | Liquid state | Gaseous state |
|---|---|---|---|
| Interparticle spaces | Very small | Larger than in solids | Very large |
| Interparticle forces | Very strong | Weak | Very weak |
| Nature | Very hard and rigid | Fluid | Highly fluid |
| Compressibility | Negligible | Negligible | Highly compressible |
| Shape and volume | Definite shape and volume | Indefinite shape, but definite volume | Indefinite shape as well as volume |
| Density | High | Less than solids | Very low |
| Diffusion | Negligible | Slow | Very fast |
A gas can fill a vessel completely. This is because the forces of attraction between its particles are negligible, so the particles move freely in all directions and occupy the entire space available.
1.9 INTERCONVERSION OF MATTER
We have seen that matter exists in three different states. For a given substance, its state of matter is not permanent, i.e., a given state of matter can always be changed to other states of matter by altering the conditions of temperature and pressure.
This phenomenon of change of matter, from one state to another and back to original state, by altering the conditions of temperature and pressure, is called change of state.
1.10 INTERCONVERSION OF MATTER BY CHANGE IN TEMPERATURE
a) Interconversion of matter, on heating
Consider a block of ice at 0°C, placed in a beaker and heated. It changes to liquid water. Heat the water till it boils. It slowly gets converted to vapour (gas). From this observation, it is clear that, the solids convert into liquids and liquids in turn convert to gas, when they are heated.
Here, the process by which a solid changes to a liquid by absorbing heat is called melting. The process by which a liquid changes to a gas (vapour) by absorbing heat is called boiling or vapourisation.
When observed carefully, it is found that a given solid changes to a liquid, at a constant temperature. This constant temperature is called melting point.
For example, ice changes to water at 0°C. Hence, its melting point is 0°C. Similarly, the constant temperature, at which a liquid changes into a gas, by absorbing heat, is called boiling point. For example, liquid water changes to water vapour (gas), at 100°C. Hence, the boiling point of water is 100°C.
(i) Melting or Fusion: The process due to which a solid changes into liquid state by absorbing heat energy is called melting or fusion.
(ii) Freezing or Solidification: The process due to which liquid changes into solid state by giving out heat energy is called freezing or solidification.
(iii) Melting Point: The constant temperature at which a solid changes into liquid state by absorbing heat energy at 1 atm pressure is called its melting point.
(iv) Freezing Point: The constant temperature at which a liquid changes into solid state by giving out heat energy at 1 atm pressure is called freezing point.
Note: The numerical value of freezing point and melting point is same.
Melting point of ice = Freezing point of water = 0°C (273.15 K)
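Celsius and kelvin values like those quoted here are related by a fixed offset, K = °C + 273.15. A minimal pair of converters (the function names are illustrative):

```python
def c_to_k(celsius):
    """Convert a temperature from degrees Celsius to kelvin (K = degC + 273.15)."""
    return celsius + 273.15

def k_to_c(kelvin):
    """Convert a temperature from kelvin to degrees Celsius."""
    return kelvin - 273.15

print(c_to_k(0))     # melting point of ice, in kelvin
print(c_to_k(100))   # boiling point of water, in kelvin
```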
Explanation: On increasing the temperature of a solid, the kinetic energy (K.E.) of its particles increases. Due to the increase in K.E., the particles start vibrating with greater speed. The energy supplied by heat overcomes the forces of attraction between the particles. The particles then leave their fixed positions and start moving freely, and thus the solid melts.
Latent Heat of Fusion: The amount of heat energy required to change 1 kg of a solid into liquid, at atmospheric pressure and at its melting point, is known as the latent heat of fusion. (Latent means 'hidden'.) Latent heat of fusion of ice = $3.34×10^5$ J/kg.
Note: Particles of water at 0°C (273 K) have more energy as compared to particles of ice at the same temperature.
Interconversion of liquid into gaseous state and vice versa:
Liquids can be converted into gases by heating them. Similarly, gases can be converted into liquids by cooling them.
e.g.: Water at 1 atm pressure changes into gas (steam) at 100°C; steam changes back into water by giving out energy.
Melting points of common solids
Ice 0°C, Sodium 97°C, Sulphur 119°C, Lead 327°C, Zinc 420°C, Iron 1535°C
Boiling points of common liquids
Water 100°C, Ethyl alcohol 78.3°C, Benzene 80.2°C, Mercury 357°C
b) Interconversion of matter by Cooling
We have seen the changes that take place on heating. Have you ever wondered what happens if a given state of matter is cooled?
Collect some water vapour (gas) and cool it. We will notice that it becomes liquid water. On cooling further, the liquid water gets converted to ice (solid). It is interesting to see that the reverse of the heating process takes place on cooling: a gas is converted to a liquid, and a liquid is converted to a solid. Here, the process by which a gas gets converted to a liquid, by giving out heat, is called liquefaction or condensation. The process by which a liquid gets converted to a solid is known as solidification or freezing.
Further, the above changes take place at constant temperature. The constant temperature, at which a gas is converted to a liquid, is known as condensation point. For example, vapour changes to liquid water at 100°C. Hence, its condensation point is 100°C.
The constant temperature at which a liquid changes to a solid, is known as freezing point. For example, liquid water changes to solid at 0°C. Hence, the freezing point of water is 0°C.
(i) Boiling or Vapourisation: The process due to which a liquid changes into gaseous state by absorbing heat energy is called boiling.
(ii) Condensation or Liquefaction: The process due to which a gas changes into liquid state by giving out heat energy is called condensation.
(iii) Boiling Point: The constant temperature at which a liquid rapidly changes into gaseous state by absorbing heat energy at atmospheric pressure is called boiling point.
(iv) Condensation Point: The constant temperature at which a gas changes into liquid state by giving out heat energy at atmospheric pressure is called condensation point.
Note: The numerical value of condensation point and boiling point is same.
Condensation point of vapour (water) = Boiling point of water = 100°C (373.15 K)
Explanation: When heat is supplied to water, its particles start moving faster. At a certain temperature, a point is reached when the particles have enough energy to break the forces of attraction between them. At this temperature the liquid starts changing into gas.
Latent heat of vapourisation: The amount of heat which is required to convert 1 kg of a liquid (at its boiling point) into vapour (gas) without any change in temperature. Latent heat of vapourisation of water = $22.5×{10}^{5}$ J/kg.
Note: Particles in steam (water vapour) at 373 K have more energy than water at the same
temperature. This is because steam has absorbed extra energy in the form of latent heat of
vapourisation.
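Combining the two latent heats given above with the specific heat capacity of water (taken here as 4186 J/(kg·K), a standard value that is not stated in the text), the total heat needed to turn ice at 0°C into steam at 100°C can be sketched as:

```python
L_FUSION = 3.34e5   # J/kg, latent heat of fusion of ice (from the text)
L_VAPOUR = 22.5e5   # J/kg, latent heat of vapourisation of water (from the text)
C_WATER = 4186.0    # J/(kg*K), specific heat of water -- assumed, not given in the text

def heat_ice_to_steam(mass_kg):
    melt = mass_kg * L_FUSION          # ice -> water at 0 degC (latent heat of fusion)
    warm = mass_kg * C_WATER * 100.0   # water heated from 0 to 100 degC
    boil = mass_kg * L_VAPOUR          # water -> steam at 100 degC (latent heat of vapourisation)
    return melt + warm + boil

print(heat_ice_to_steam(1.0))  # 3002600.0 J for 1 kg
```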
Freezing points of common solids
Ice 0°C, Sodium 97°C, Sulphur 119°C, Lead 327°C, Zinc 420°C, Iron 1535°C
Condensation points of common liquids
Water 100°C, Ethyl alcohol 78.3°C, Benzene 80.2°C, Mercury 357°C
The condensation and freezing points of some substances are as shown above.
If we observe the tables shown, we see that the numerical values of melting point and freezing point, and of boiling point and condensation point, are equal. Thus, for a given substance:
Melting point = Freezing point
Boiling point = Condensation point
Direct Interconversion of Solid to Gaseous State And Vice Versa
Some solids, on heating, directly change into gaseous state, without changing into the liquid state. Conversely, the gaseous state, on cooling, changes back into solid state, without changing into the liquid state. Such a process is called sublimation.
The gaseous form of solid is called sublime.
The solid state, formed from the gaseous state on cooling, is called sublimate.
Examples of subliming solids: ammonium chloride, iodine, solid carbon dioxide (dry ice), naphthalene and camphor. Moth balls (naphthalene) become smaller in size with the passage of time because they change into the gaseous state at room temperature itself.
1.11 CHANGE OF STATE BY ALTERING THE PRESSURE OF MATTER
Pressure of atmosphere helps in altering the state of matter. When pressure is lowered, boiling point of a liquid is lowered. This helps in rapid change of liquid, into gaseous state.
Examples:
i) Water boils at 100°C and rapidly changes into steam. However, if the atmospheric pressure is lowered, it boils at a temperature below 100°C and changes into the vapour state.
ii) Carbon dioxide is a gas under normal conditions of temperature and pressure. It can be liquefied by compressing it to a pressure 70 times more than atmospheric pressure.
If the pressure from liquid carbon dioxide is suddenly released, some amount of it changes into solid carbon dioxide.
1.12 CURVE (TEMPERATURE TIME GRAPH)
We can show the change of temperature with time in the form of a temperature-time graph, drawn using the readings obtained while heating ice. Such a time-temperature graph is shown in the figure.
In this graph, at point A we have all ice. As we heat it, the ice starts melting to form water, but the temperature of the ice-water mixture does not rise; it remains constant at 0°C during the melting of ice. At point B, all the ice has melted to form water; thus, we have only water at point B. On heating beyond point B, the temperature of the water (formed from ice) starts rising, as shown by the sloping line BC in the graph.
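The flat segment AB (melting at constant 0°C) and the sloping segment BC (water warming up) can be reproduced numerically. A sketch, where the specific heat of water (4186 J/(kg·K)) is an assumed value not given in the text:

```python
L_FUSION = 3.34e5  # J/kg, latent heat of fusion of ice (from the text)
C_WATER = 4186.0   # J/(kg*K), specific heat of water -- assumed value

def mixture_temperature(heat_joules, mass_kg=1.0):
    # Segment A-B: all supplied heat goes into melting; temperature stays at 0 degC
    heat_to_melt = mass_kg * L_FUSION
    if heat_joules <= heat_to_melt:
        return 0.0
    # Segment B-C: remaining heat warms the water, so the temperature rises
    return (heat_joules - heat_to_melt) / (mass_kg * C_WATER)

print(mixture_temperature(3.34e5))         # 0.0 -- ice just fully melted (point B)
print(mixture_temperature(3.34e5 + 4186))  # 1.0 -- one degree above, on line BC
```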
1.13 EVAPORATION
The phenomenon of change of a liquid into vapours at any temperature below its boiling point is called evaporation.
Water changes into vapours below 100°C. The particles of matter are always moving and are never at rest. At a given temperature in any gas, liquid or solid, there are particles with different kinetic energies. In the case of liquids, a small fraction of particles at the surface, having higher K.E., is able to break free of the forces of attraction of the other particles and gets converted into vapour.
Note: The atmospheric pressure at sea level is 1 atm.
Factors Affecting Evaporation
(i) Temperature: With the increase in temperature the rate of evaporation increases.
Rate of evaporation $\propto$ T
Reasons: On increasing temperature more number of particles get enough K.E. to go into the vapour state.
(ii) Surface Area: Rate of evaporation $\propto$ Surface area.
Evaporation is a surface phenomenon. If the surface area is increased, the rate of evaporation also increases. So, while putting clothes for drying up we spread them out.
(iii) Humidity of Air:
Humidity is the amount of water vapour present in air. When humidity of air is low, the rate of evaporation is high and water evaporates more readily. When humidity of air is high, the rate of evaporation is low and water evaporates very slowly.
Rate of evaporation $\propto$ 1/Humidity
(iv) Wind Speed: With the increase in wind speed, the particles of water vapour move away with the wind, so the amount of water vapour in the surroundings decreases.
Rate of evaporation $\propto$ wind speed.
(v) Nature of substance: Substances with high boiling points will evaporate slowly, while substances with low boiling points will evaporate quickly.
Difference between evaporation and boiling
Evaporation: It is a surface phenomenon. It occurs at all temperatures below the boiling point. Its rate depends upon the surface area of the liquid, humidity, temperature and wind speed.
Boiling: It is a bulk phenomenon. It occurs at the boiling point only. Its rate does not depend upon the surface area, wind speed and humidity.
Cooling Caused by Evaporation
The cooling caused by evaporation is based on the fact that when a liquid evaporates, it draws (or takes) the latent heat of vapourisation from ‘anything’ which it touches.
For example:
• If we put a little spirit, ether or petrol on the palm of our hand, then our hand feels very cold.
• Perspiration (or sweating) is our body’s method of maintaining a constant temperature.
Why We Wear Cotton Clothes in Summer
During summer, we perspire more because of the mechanism of our body which keeps us cool. During evaporation, the particles at the surface of the liquid gain energy from the surroundings or the body surface. Heat energy equal to the latent heat of vapourisation is absorbed from the body, leaving the body cool. Cotton, being a good absorber of water, helps in absorbing the sweat.
Water droplets on the outer surface of a glass containing ice-cold water
If we take some ice-cold water in a glass, we will observe water droplets on the outer surface of the glass.
Reason: The water vapour present in air, on coming in contact with the glass of cold water, loses energy and gets converted to the liquid state, which we see as water droplets.
http://bib-pubdb1.desy.de/collection/PUB_FS-TUX-20170422?ln=en
# FS-TUX
* 2019-04-02 [PUBDB-2019-01830] Journal Article — Rohringer, N.: "X-ray Raman scattering: a building block for nonlinear spectroscopy", Philosophical Transactions of the Royal Society of London A 377(2145), 20170471 (2019) [10.1098/rsta.2017.0471]. Ultraintense X-ray free-electron laser pulses of attosecond duration can enable new nonlinear X-ray spectroscopic techniques to observe coherent electronic motion. The simplest nonlinear X-ray spectroscopic concept is based on stimulated electronic X-ray Raman scattering. Restricted: PDF.
* 2019-01-30 [PUBDB-2019-00875] Journal Article — et al.: "Roadmap of ultrafast x-ray atomic and molecular physics", Journal of Physics B 51(3), 032003 (2018) [10.1088/1361-6455/aa9735]. X-ray free-electron lasers (XFELs) and table-top sources of x-rays based upon high harmonic generation (HHG) have revolutionized the field of ultrafast x-ray atomic and molecular physics, largely due to an explosive growth in capabilities in the past decade. XFELs now provide unprecedented intensity (10^20 W cm^-2) of x-rays at wavelengths down to ~1 Ångstrom, and HHG provides unprecedented time resolution (~50 attoseconds) and a correspondingly large coherent bandwidth at longer wavelengths. OpenAccess: PDF.
* 2019-01-30 [PUBDB-2019-00874] Journal Article — et al.: "Stimulated X-Ray Emission Spectroscopy in Transition Metal Complexes", Physical Review Letters 120(13), 133203 (2018) [10.1103/PhysRevLett.120.133203]. We report the observation and analysis of the gain curve of amplified $Kα$ x-ray emission from solutions of Mn(II) and Mn(VII) complexes using an x-ray free electron laser to create the $1s$ core-hole population inversion. We find spectra at amplification levels extending over 4 orders of magnitude until saturation. OpenAccess: PDF.
* 2019-01-30 [PUBDB-2019-00873] Journal Article — Rohringer, N.: "Stimulated resonant inelastic x-ray scattering with chirped, broadband pulses", Physical Review A 99(1), 013425 (2019) [10.1103/PhysRevA.99.013425]. We present an approach for initiating and tracing ultrafast electron dynamics in core-excited atoms, molecules, and solids. The approach is based on stimulated resonant inelastic x-ray scattering induced by a single, chirped, broadband XUV or x-ray pulse. OpenAccess: PDF.
* 2019-01-30 [PUBDB-2019-00872] Journal Article — Rohringer, N.: "Quantum theory of superfluorescence based on two-point correlation functions", Physical Review A 99(1), 013839 (2019) [10.1103/PhysRevA.99.013839]. Irradiation of a medium by short intense pulses from x-ray (XUV) free-electron lasers can result in saturated photoionization of inner electronic shells. As a result an inversion of populations between core levels appears. OpenAccess: PDF.
http://asterne.net/winisk/example-of-octal-number-system.php
# Example of Octal Number System

## Octal Number System

Octal definition: of or relating to the number system with base 8, employing the numerals 0 through 7. In this program, you'll learn to convert an octal number to a decimal number and vice versa using functions in Java.
### Octal to Decimal Conversion Examples
Octal is a counting system that uses eight digits. Instead of using only 0's and 1's like binary, or the characters '0' to '9' of the decimal number system, octal uses only the digits 0 through 7.
Binary number system: in mathematics and digital electronics, a binary number is a number expressed in the base-2 numeral system. Octal number system: to convert a binary number to octal, group the binary digits in threes and convert each group; that gives the octal representation of the number.
As a base-eight system, each digit in an octal number carries a higher place value than the corresponding digit in a binary number, because binary place values grow by the smaller base of 2. Students often get confused about how to convert a number from one number system to another; the following examples show how.
Conversion from decimal to octal: the decimal number is repeatedly divided by eight, and the remainders, arranged in reverse order, form the octal number.
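The repeated-division method just described can be sketched in a few lines. The page's own examples use Java and C; this sketch uses Python instead:

```python
def decimal_to_octal(n):
    # Repeatedly divide by 8; the remainders, read in reverse, are the octal digits.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 8))  # remainder becomes the next octal digit
        n //= 8
    return "".join(reversed(digits))

print(decimal_to_octal(100))   # '144'
print(decimal_to_octal(2980))  # '5644'
```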
Conversion of octal to decimal: to convert an octal number to decimal, use the basic concept of positional value, as already seen. Let's look at an example.
Octal to hexadecimal conversion: it is easily understood that the weight of each digit of the binary system is a power of 2.
Write a program in C to convert an octal number to the decimal number system using a while loop. A number system with a base of 2 is called a binary number system; likewise, one with a base of 8 is called an octal number system.
The base of a numbering system is the number of digits in the system. Each set of three binary digits stands for one place in the octal number system.
Addition and subtraction of octal numbers are explained using different examples. Addition of octal numbers is carried out by the same method as decimal addition, except that a carry is generated whenever a digit sum reaches eight.
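The text discusses octal addition by hand; a result can be checked quickly by converting to integers and back. A Python sketch:

```python
def octal_add(a, b):
    # Parse both octal strings, add as integers, and format the sum back in base 8.
    return oct(int(a, 8) + int(b, 8))[2:]

print(octal_add("17", "1"))   # '20' -- a carry is generated when a digit sum reaches 8
print(octal_add("25", "14"))  # '41'
```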
Fractional numbers can also be represented in the octal number system; the octal point plays the same role as the decimal point in the decimal system.
In this Java program, we will take an octal number from the user and then convert it to the decimal number system. For example, 144 in octal is equivalent to 100 in decimal.
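The 144 → 100 example can be verified with the positional-weight method. The page describes a Java program; here is the same idea as a Python sketch:

```python
def octal_to_decimal(octal_str):
    # Accumulate digits left to right; each step multiplies by the base, 8.
    value = 0
    for digit in octal_str:
        value = value * 8 + int(digit)
    return value

print(octal_to_decimal("144"))  # 100, as in the example
```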
A few examples on the octal number system, explained with this method. Convert the decimal number 2980 to its octal equivalent. Solution: dividing 2980 repeatedly by 8 gives remainders 4, 4, 6, 5; hence $2980_{10} = 5644_{8}$.
A common question: there are four number systems — binary, octal, decimal and hexadecimal. Where is the octal number system used, and what is its application?
Octal to binary conversion: an octal-to-binary converter turns a binary number into its equivalent octal number, or an octal number into its equivalent binary number. The step-by-step procedure is to replace each octal digit with its 3-bit binary equivalent.
The octal numeral system, or oct for short, is the base-8 number system, and uses the digits 0 to 7. Octal numerals can be made from binary numerals by grouping consecutive binary digits into groups of three, starting from the right.
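Grouping binary digits into threes, as described, can be sketched in Python:

```python
def binary_to_octal(bits):
    # Left-pad so the length is a multiple of 3, then map each 3-bit group
    # to a single octal digit.
    padded = bits.zfill((len(bits) + 2) // 3 * 3)
    groups = [padded[i:i + 3] for i in range(0, len(padded), 3)]
    return "".join(str(int(g, 2)) for g in groups)

print(binary_to_octal("1100100"))  # '144' (binary for decimal 100)
```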
The number system we use in our daily life is known as the decimal number system. The base of this number system is $10$. Say $2435$ is $2 \times 10^3 + 4 \times 10^2 + 3 \times 10^1 + 5 \times 10^0$.
A computer does not understand the octal number system directly, so there must be additional circuitry, known as an octal-to-binary encoder.
As shown in the octal number system tables, one octal digit is the equivalent value of three binary digits. The following examples show the conversion of octal numbers.
Number systems in digital electronics are techniques of representing numbers — for example, the binary number system or the octal number system.
In what situations is the octal base used? For example, the Unix file mode rwxr-xr-x is written in octal as 755. Octal is used when the number of bits in one word is a multiple of 3.
Binary, quaternary (base-4) and octal numbers can be specified similarly. For example, one jīn (斤) in the old system equals sixteen taels.
Octal and hexadecimal number systems: to convert between octal and hexadecimal, first change the number to binary, then regroup the binary digits into the base desired.
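Going through the binary value (here represented by an integer intermediate) converts octal to hexadecimal. A Python sketch:

```python
def octal_to_hex(octal_str):
    # Step 1: read the octal digits into an integer (the underlying binary value).
    value = int(octal_str, 8)
    # Step 2: regroup that value in base 16.
    return format(value, "X")

print(octal_to_hex("755"))  # '1ED'
```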
Thankfully, the Java library provides convenient methods to convert an integer from one number system to another — hexadecimal, binary or octal. Although octal was once a popular number base, today it mostly appears where 3-bit binary groups map naturally to octal digits.
Example 1: Program to convert binary to octal. In this program, we will first convert the binary number to decimal; then, the decimal number is converted to octal.
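The two-step route just described — binary to decimal, then decimal to octal — as a Python sketch:

```python
def binary_to_octal_via_decimal(binary_str):
    decimal_value = int(binary_str, 2)  # step 1: binary -> decimal
    return oct(decimal_value)[2:]       # step 2: decimal -> octal

print(binary_to_octal_via_decimal("1010"))     # '12'
print(binary_to_octal_via_decimal("1100100"))  # '144'
```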
|