Columns: url — string (14–2.42k chars); text — string (100–1.02M chars); date — string (19 chars); metadata — string (1.06k–1.1k chars)
https://people.maths.bris.ac.uk/~matyd/GroupNames/480/C5xDic3s4D4.html
Copied to clipboard G = C5×Dic3⋊4D4order 480 = 25·3·5 Direct product of C5 and Dic3⋊4D4 Series: Derived Chief Lower central Upper central Derived series C1 — C6 — C5×Dic3⋊4D4 Chief series C1 — C3 — C6 — C2×C6 — C2×C30 — S3×C2×C10 — C10×C3⋊D4 — C5×Dic3⋊4D4 Lower central C3 — C6 — C5×Dic3⋊4D4 Upper central C1 — C2×C10 — C5×C22⋊C4 Generators and relations for C5×Dic34D4 G = < a,b,c,d,e | a5=b6=d4=e2=1, c2=b3, ab=ba, ac=ca, ad=da, ae=ea, cbc-1=dbd-1=b-1, be=eb, cd=dc, ce=ec, ede=d-1 > Subgroups: 404 in 188 conjugacy classes, 86 normal (58 characteristic) C1, C2, C2, C3, C4, C22, C22, C22, C5, S3, C6, C6, C2×C4, C2×C4, D4, C23, C23, C10, C10, Dic3, Dic3, C12, D6, D6, C2×C6, C2×C6, C2×C6, C15, C42, C22⋊C4, C22⋊C4, C4⋊C4, C22×C4, C2×D4, C20, C2×C10, C2×C10, C2×C10, C4×S3, C2×Dic3, C2×Dic3, C3⋊D4, C2×C12, C22×S3, C22×C6, C5×S3, C30, C30, C4×D4, C2×C20, C2×C20, C5×D4, C22×C10, C22×C10, C4×Dic3, Dic3⋊C4, D6⋊C4, C3×C22⋊C4, S3×C2×C4, C22×Dic3, C2×C3⋊D4, C5×Dic3, C5×Dic3, C60, S3×C10, S3×C10, C2×C30, C2×C30, C2×C30, C4×C20, C5×C22⋊C4, C5×C22⋊C4, C5×C4⋊C4, C22×C20, D4×C10, Dic34D4, S3×C20, C10×Dic3, C10×Dic3, C5×C3⋊D4, C2×C60, S3×C2×C10, C22×C30, D4×C20, Dic3×C20, C5×Dic3⋊C4, C5×D6⋊C4, C15×C22⋊C4, S3×C2×C20, Dic3×C2×C10, C10×C3⋊D4, C5×Dic34D4 Quotients: C1, C2, C4, C22, C5, S3, C2×C4, D4, C23, C10, D6, C22×C4, C2×D4, C4○D4, C20, C2×C10, C4×S3, C22×S3, C5×S3, C4×D4, C2×C20, C5×D4, C22×C10, S3×C2×C4, S3×D4, D42S3, S3×C10, C22×C20, D4×C10, C5×C4○D4, Dic34D4, S3×C20, S3×C2×C10, D4×C20, S3×C2×C20, C5×S3×D4, C5×D42S3, C5×Dic34D4 Smallest permutation representation of C5×Dic34D4 On 240 points Generators in S240 (1 58 46 34 22)(2 59 47 35 23)(3 60 48 36 24)(4 55 43 31 19)(5 56 44 32 20)(6 57 45 33 21)(7 231 219 207 195)(8 232 220 208 196)(9 233 221 209 197)(10 234 222 210 198)(11 229 217 205 193)(12 230 218 206 194)(13 61 49 37 25)(14 62 50 38 26)(15 63 51 39 27)(16 64 52 40 28)(17 65 53 41 29)(18 66 54 42 30)(67 115 103 91 79)(68 116 104 92 80)(69 117 105 93 81)(70 118 106 94 82)(71 
119 107 95 83)(72 120 108 96 84)(73 121 109 97 85)(74 122 110 98 86)(75 123 111 99 87)(76 124 112 100 88)(77 125 113 101 89)(78 126 114 102 90)(127 175 163 151 139)(128 176 164 152 140)(129 177 165 153 141)(130 178 166 154 142)(131 179 167 155 143)(132 180 168 156 144)(133 181 169 157 145)(134 182 170 158 146)(135 183 171 159 147)(136 184 172 160 148)(137 185 173 161 149)(138 186 174 162 150)(187 235 223 211 199)(188 236 224 212 200)(189 237 225 213 201)(190 238 226 214 202)(191 239 227 215 203)(192 240 228 216 204) (1 2 3 4 5 6)(7 8 9 10 11 12)(13 14 15 16 17 18)(19 20 21 22 23 24)(25 26 27 28 29 30)(31 32 33 34 35 36)(37 38 39 40 41 42)(43 44 45 46 47 48)(49 50 51 52 53 54)(55 56 57 58 59 60)(61 62 63 64 65 66)(67 68 69 70 71 72)(73 74 75 76 77 78)(79 80 81 82 83 84)(85 86 87 88 89 90)(91 92 93 94 95 96)(97 98 99 100 101 102)(103 104 105 106 107 108)(109 110 111 112 113 114)(115 116 117 118 119 120)(121 122 123 124 125 126)(127 128 129 130 131 132)(133 134 135 136 137 138)(139 140 141 142 143 144)(145 146 147 148 149 150)(151 152 153 154 155 156)(157 158 159 160 161 162)(163 164 165 166 167 168)(169 170 171 172 173 174)(175 176 177 178 179 180)(181 182 183 184 185 186)(187 188 189 190 191 192)(193 194 195 196 197 198)(199 200 201 202 203 204)(205 206 207 208 209 210)(211 212 213 214 215 216)(217 218 219 220 221 222)(223 224 225 226 227 228)(229 230 231 232 233 234)(235 236 237 238 239 240) (1 67 4 70)(2 72 5 69)(3 71 6 68)(7 181 10 184)(8 186 11 183)(9 185 12 182)(13 78 16 75)(14 77 17 74)(15 76 18 73)(19 82 22 79)(20 81 23 84)(21 80 24 83)(25 90 28 87)(26 89 29 86)(27 88 30 85)(31 94 34 91)(32 93 35 96)(33 92 36 95)(37 102 40 99)(38 101 41 98)(39 100 42 97)(43 106 46 103)(44 105 47 108)(45 104 48 107)(49 114 52 111)(50 113 53 110)(51 112 54 109)(55 118 58 115)(56 117 59 120)(57 116 60 119)(61 126 64 123)(62 125 65 122)(63 124 66 121)(127 190 130 187)(128 189 131 192)(129 188 132 191)(133 198 136 195)(134 197 137 194)(135 196 138 193)(139 202 142 199)(140 201 143 
204)(141 200 144 203)(145 210 148 207)(146 209 149 206)(147 208 150 205)(151 214 154 211)(152 213 155 216)(153 212 156 215)(157 222 160 219)(158 221 161 218)(159 220 162 217)(163 226 166 223)(164 225 167 228)(165 224 168 227)(169 234 172 231)(170 233 173 230)(171 232 174 229)(175 238 178 235)(176 237 179 240)(177 236 180 239) (1 137 17 130)(2 136 18 129)(3 135 13 128)(4 134 14 127)(5 133 15 132)(6 138 16 131)(7 121 236 120)(8 126 237 119)(9 125 238 118)(10 124 239 117)(11 123 240 116)(12 122 235 115)(19 146 26 139)(20 145 27 144)(21 150 28 143)(22 149 29 142)(23 148 30 141)(24 147 25 140)(31 158 38 151)(32 157 39 156)(33 162 40 155)(34 161 41 154)(35 160 42 153)(36 159 37 152)(43 170 50 163)(44 169 51 168)(45 174 52 167)(46 173 53 166)(47 172 54 165)(48 171 49 164)(55 182 62 175)(56 181 63 180)(57 186 64 179)(58 185 65 178)(59 184 66 177)(60 183 61 176)(67 194 74 187)(68 193 75 192)(69 198 76 191)(70 197 77 190)(71 196 78 189)(72 195 73 188)(79 206 86 199)(80 205 87 204)(81 210 88 203)(82 209 89 202)(83 208 90 201)(84 207 85 200)(91 218 98 211)(92 217 99 216)(93 222 100 215)(94 221 101 214)(95 220 102 213)(96 219 97 212)(103 230 110 223)(104 229 111 228)(105 234 112 227)(106 233 113 226)(107 232 114 225)(108 231 109 224) (1 130)(2 131)(3 132)(4 127)(5 128)(6 129)(7 123)(8 124)(9 125)(10 126)(11 121)(12 122)(13 133)(14 134)(15 135)(16 136)(17 137)(18 138)(19 139)(20 140)(21 141)(22 142)(23 143)(24 144)(25 145)(26 146)(27 147)(28 148)(29 149)(30 150)(31 151)(32 152)(33 153)(34 154)(35 155)(36 156)(37 157)(38 158)(39 159)(40 160)(41 161)(42 162)(43 163)(44 164)(45 165)(46 166)(47 167)(48 168)(49 169)(50 170)(51 171)(52 172)(53 173)(54 174)(55 175)(56 176)(57 177)(58 178)(59 179)(60 180)(61 181)(62 182)(63 183)(64 184)(65 185)(66 186)(67 187)(68 188)(69 189)(70 190)(71 191)(72 192)(73 193)(74 194)(75 195)(76 196)(77 197)(78 198)(79 199)(80 200)(81 201)(82 202)(83 203)(84 204)(85 205)(86 206)(87 207)(88 208)(89 209)(90 210)(91 211)(92 212)(93 213)(94 214)(95 215)(96 
216)(97 217)(98 218)(99 219)(100 220)(101 221)(102 222)(103 223)(104 224)(105 225)(106 226)(107 227)(108 228)(109 229)(110 230)(111 231)(112 232)(113 233)(114 234)(115 235)(116 236)(117 237)(118 238)(119 239)(120 240) G:=sub<Sym(240)| (1,58,46,34,22)(2,59,47,35,23)(3,60,48,36,24)(4,55,43,31,19)(5,56,44,32,20)(6,57,45,33,21)(7,231,219,207,195)(8,232,220,208,196)(9,233,221,209,197)(10,234,222,210,198)(11,229,217,205,193)(12,230,218,206,194)(13,61,49,37,25)(14,62,50,38,26)(15,63,51,39,27)(16,64,52,40,28)(17,65,53,41,29)(18,66,54,42,30)(67,115,103,91,79)(68,116,104,92,80)(69,117,105,93,81)(70,118,106,94,82)(71,119,107,95,83)(72,120,108,96,84)(73,121,109,97,85)(74,122,110,98,86)(75,123,111,99,87)(76,124,112,100,88)(77,125,113,101,89)(78,126,114,102,90)(127,175,163,151,139)(128,176,164,152,140)(129,177,165,153,141)(130,178,166,154,142)(131,179,167,155,143)(132,180,168,156,144)(133,181,169,157,145)(134,182,170,158,146)(135,183,171,159,147)(136,184,172,160,148)(137,185,173,161,149)(138,186,174,162,150)(187,235,223,211,199)(188,236,224,212,200)(189,237,225,213,201)(190,238,226,214,202)(191,239,227,215,203)(192,240,228,216,204), 
(1,2,3,4,5,6)(7,8,9,10,11,12)(13,14,15,16,17,18)(19,20,21,22,23,24)(25,26,27,28,29,30)(31,32,33,34,35,36)(37,38,39,40,41,42)(43,44,45,46,47,48)(49,50,51,52,53,54)(55,56,57,58,59,60)(61,62,63,64,65,66)(67,68,69,70,71,72)(73,74,75,76,77,78)(79,80,81,82,83,84)(85,86,87,88,89,90)(91,92,93,94,95,96)(97,98,99,100,101,102)(103,104,105,106,107,108)(109,110,111,112,113,114)(115,116,117,118,119,120)(121,122,123,124,125,126)(127,128,129,130,131,132)(133,134,135,136,137,138)(139,140,141,142,143,144)(145,146,147,148,149,150)(151,152,153,154,155,156)(157,158,159,160,161,162)(163,164,165,166,167,168)(169,170,171,172,173,174)(175,176,177,178,179,180)(181,182,183,184,185,186)(187,188,189,190,191,192)(193,194,195,196,197,198)(199,200,201,202,203,204)(205,206,207,208,209,210)(211,212,213,214,215,216)(217,218,219,220,221,222)(223,224,225,226,227,228)(229,230,231,232,233,234)(235,236,237,238,239,240), (1,67,4,70)(2,72,5,69)(3,71,6,68)(7,181,10,184)(8,186,11,183)(9,185,12,182)(13,78,16,75)(14,77,17,74)(15,76,18,73)(19,82,22,79)(20,81,23,84)(21,80,24,83)(25,90,28,87)(26,89,29,86)(27,88,30,85)(31,94,34,91)(32,93,35,96)(33,92,36,95)(37,102,40,99)(38,101,41,98)(39,100,42,97)(43,106,46,103)(44,105,47,108)(45,104,48,107)(49,114,52,111)(50,113,53,110)(51,112,54,109)(55,118,58,115)(56,117,59,120)(57,116,60,119)(61,126,64,123)(62,125,65,122)(63,124,66,121)(127,190,130,187)(128,189,131,192)(129,188,132,191)(133,198,136,195)(134,197,137,194)(135,196,138,193)(139,202,142,199)(140,201,143,204)(141,200,144,203)(145,210,148,207)(146,209,149,206)(147,208,150,205)(151,214,154,211)(152,213,155,216)(153,212,156,215)(157,222,160,219)(158,221,161,218)(159,220,162,217)(163,226,166,223)(164,225,167,228)(165,224,168,227)(169,234,172,231)(170,233,173,230)(171,232,174,229)(175,238,178,235)(176,237,179,240)(177,236,180,239), 
(1,137,17,130)(2,136,18,129)(3,135,13,128)(4,134,14,127)(5,133,15,132)(6,138,16,131)(7,121,236,120)(8,126,237,119)(9,125,238,118)(10,124,239,117)(11,123,240,116)(12,122,235,115)(19,146,26,139)(20,145,27,144)(21,150,28,143)(22,149,29,142)(23,148,30,141)(24,147,25,140)(31,158,38,151)(32,157,39,156)(33,162,40,155)(34,161,41,154)(35,160,42,153)(36,159,37,152)(43,170,50,163)(44,169,51,168)(45,174,52,167)(46,173,53,166)(47,172,54,165)(48,171,49,164)(55,182,62,175)(56,181,63,180)(57,186,64,179)(58,185,65,178)(59,184,66,177)(60,183,61,176)(67,194,74,187)(68,193,75,192)(69,198,76,191)(70,197,77,190)(71,196,78,189)(72,195,73,188)(79,206,86,199)(80,205,87,204)(81,210,88,203)(82,209,89,202)(83,208,90,201)(84,207,85,200)(91,218,98,211)(92,217,99,216)(93,222,100,215)(94,221,101,214)(95,220,102,213)(96,219,97,212)(103,230,110,223)(104,229,111,228)(105,234,112,227)(106,233,113,226)(107,232,114,225)(108,231,109,224), (1,130)(2,131)(3,132)(4,127)(5,128)(6,129)(7,123)(8,124)(9,125)(10,126)(11,121)(12,122)(13,133)(14,134)(15,135)(16,136)(17,137)(18,138)(19,139)(20,140)(21,141)(22,142)(23,143)(24,144)(25,145)(26,146)(27,147)(28,148)(29,149)(30,150)(31,151)(32,152)(33,153)(34,154)(35,155)(36,156)(37,157)(38,158)(39,159)(40,160)(41,161)(42,162)(43,163)(44,164)(45,165)(46,166)(47,167)(48,168)(49,169)(50,170)(51,171)(52,172)(53,173)(54,174)(55,175)(56,176)(57,177)(58,178)(59,179)(60,180)(61,181)(62,182)(63,183)(64,184)(65,185)(66,186)(67,187)(68,188)(69,189)(70,190)(71,191)(72,192)(73,193)(74,194)(75,195)(76,196)(77,197)(78,198)(79,199)(80,200)(81,201)(82,202)(83,203)(84,204)(85,205)(86,206)(87,207)(88,208)(89,209)(90,210)(91,211)(92,212)(93,213)(94,214)(95,215)(96,216)(97,217)(98,218)(99,219)(100,220)(101,221)(102,222)(103,223)(104,224)(105,225)(106,226)(107,227)(108,228)(109,229)(110,230)(111,231)(112,232)(113,233)(114,234)(115,235)(116,236)(117,237)(118,238)(119,239)(120,240)>; G:=Group( 
(1,58,46,34,22)(2,59,47,35,23)(3,60,48,36,24)(4,55,43,31,19)(5,56,44,32,20)(6,57,45,33,21)(7,231,219,207,195)(8,232,220,208,196)(9,233,221,209,197)(10,234,222,210,198)(11,229,217,205,193)(12,230,218,206,194)(13,61,49,37,25)(14,62,50,38,26)(15,63,51,39,27)(16,64,52,40,28)(17,65,53,41,29)(18,66,54,42,30)(67,115,103,91,79)(68,116,104,92,80)(69,117,105,93,81)(70,118,106,94,82)(71,119,107,95,83)(72,120,108,96,84)(73,121,109,97,85)(74,122,110,98,86)(75,123,111,99,87)(76,124,112,100,88)(77,125,113,101,89)(78,126,114,102,90)(127,175,163,151,139)(128,176,164,152,140)(129,177,165,153,141)(130,178,166,154,142)(131,179,167,155,143)(132,180,168,156,144)(133,181,169,157,145)(134,182,170,158,146)(135,183,171,159,147)(136,184,172,160,148)(137,185,173,161,149)(138,186,174,162,150)(187,235,223,211,199)(188,236,224,212,200)(189,237,225,213,201)(190,238,226,214,202)(191,239,227,215,203)(192,240,228,216,204), (1,2,3,4,5,6)(7,8,9,10,11,12)(13,14,15,16,17,18)(19,20,21,22,23,24)(25,26,27,28,29,30)(31,32,33,34,35,36)(37,38,39,40,41,42)(43,44,45,46,47,48)(49,50,51,52,53,54)(55,56,57,58,59,60)(61,62,63,64,65,66)(67,68,69,70,71,72)(73,74,75,76,77,78)(79,80,81,82,83,84)(85,86,87,88,89,90)(91,92,93,94,95,96)(97,98,99,100,101,102)(103,104,105,106,107,108)(109,110,111,112,113,114)(115,116,117,118,119,120)(121,122,123,124,125,126)(127,128,129,130,131,132)(133,134,135,136,137,138)(139,140,141,142,143,144)(145,146,147,148,149,150)(151,152,153,154,155,156)(157,158,159,160,161,162)(163,164,165,166,167,168)(169,170,171,172,173,174)(175,176,177,178,179,180)(181,182,183,184,185,186)(187,188,189,190,191,192)(193,194,195,196,197,198)(199,200,201,202,203,204)(205,206,207,208,209,210)(211,212,213,214,215,216)(217,218,219,220,221,222)(223,224,225,226,227,228)(229,230,231,232,233,234)(235,236,237,238,239,240), 
(1,67,4,70)(2,72,5,69)(3,71,6,68)(7,181,10,184)(8,186,11,183)(9,185,12,182)(13,78,16,75)(14,77,17,74)(15,76,18,73)(19,82,22,79)(20,81,23,84)(21,80,24,83)(25,90,28,87)(26,89,29,86)(27,88,30,85)(31,94,34,91)(32,93,35,96)(33,92,36,95)(37,102,40,99)(38,101,41,98)(39,100,42,97)(43,106,46,103)(44,105,47,108)(45,104,48,107)(49,114,52,111)(50,113,53,110)(51,112,54,109)(55,118,58,115)(56,117,59,120)(57,116,60,119)(61,126,64,123)(62,125,65,122)(63,124,66,121)(127,190,130,187)(128,189,131,192)(129,188,132,191)(133,198,136,195)(134,197,137,194)(135,196,138,193)(139,202,142,199)(140,201,143,204)(141,200,144,203)(145,210,148,207)(146,209,149,206)(147,208,150,205)(151,214,154,211)(152,213,155,216)(153,212,156,215)(157,222,160,219)(158,221,161,218)(159,220,162,217)(163,226,166,223)(164,225,167,228)(165,224,168,227)(169,234,172,231)(170,233,173,230)(171,232,174,229)(175,238,178,235)(176,237,179,240)(177,236,180,239), (1,137,17,130)(2,136,18,129)(3,135,13,128)(4,134,14,127)(5,133,15,132)(6,138,16,131)(7,121,236,120)(8,126,237,119)(9,125,238,118)(10,124,239,117)(11,123,240,116)(12,122,235,115)(19,146,26,139)(20,145,27,144)(21,150,28,143)(22,149,29,142)(23,148,30,141)(24,147,25,140)(31,158,38,151)(32,157,39,156)(33,162,40,155)(34,161,41,154)(35,160,42,153)(36,159,37,152)(43,170,50,163)(44,169,51,168)(45,174,52,167)(46,173,53,166)(47,172,54,165)(48,171,49,164)(55,182,62,175)(56,181,63,180)(57,186,64,179)(58,185,65,178)(59,184,66,177)(60,183,61,176)(67,194,74,187)(68,193,75,192)(69,198,76,191)(70,197,77,190)(71,196,78,189)(72,195,73,188)(79,206,86,199)(80,205,87,204)(81,210,88,203)(82,209,89,202)(83,208,90,201)(84,207,85,200)(91,218,98,211)(92,217,99,216)(93,222,100,215)(94,221,101,214)(95,220,102,213)(96,219,97,212)(103,230,110,223)(104,229,111,228)(105,234,112,227)(106,233,113,226)(107,232,114,225)(108,231,109,224), 
(1,130)(2,131)(3,132)(4,127)(5,128)(6,129)(7,123)(8,124)(9,125)(10,126)(11,121)(12,122)(13,133)(14,134)(15,135)(16,136)(17,137)(18,138)(19,139)(20,140)(21,141)(22,142)(23,143)(24,144)(25,145)(26,146)(27,147)(28,148)(29,149)(30,150)(31,151)(32,152)(33,153)(34,154)(35,155)(36,156)(37,157)(38,158)(39,159)(40,160)(41,161)(42,162)(43,163)(44,164)(45,165)(46,166)(47,167)(48,168)(49,169)(50,170)(51,171)(52,172)(53,173)(54,174)(55,175)(56,176)(57,177)(58,178)(59,179)(60,180)(61,181)(62,182)(63,183)(64,184)(65,185)(66,186)(67,187)(68,188)(69,189)(70,190)(71,191)(72,192)(73,193)(74,194)(75,195)(76,196)(77,197)(78,198)(79,199)(80,200)(81,201)(82,202)(83,203)(84,204)(85,205)(86,206)(87,207)(88,208)(89,209)(90,210)(91,211)(92,212)(93,213)(94,214)(95,215)(96,216)(97,217)(98,218)(99,219)(100,220)(101,221)(102,222)(103,223)(104,224)(105,225)(106,226)(107,227)(108,228)(109,229)(110,230)(111,231)(112,232)(113,233)(114,234)(115,235)(116,236)(117,237)(118,238)(119,239)(120,240) ); G=PermutationGroup([[(1,58,46,34,22),(2,59,47,35,23),(3,60,48,36,24),(4,55,43,31,19),(5,56,44,32,20),(6,57,45,33,21),(7,231,219,207,195),(8,232,220,208,196),(9,233,221,209,197),(10,234,222,210,198),(11,229,217,205,193),(12,230,218,206,194),(13,61,49,37,25),(14,62,50,38,26),(15,63,51,39,27),(16,64,52,40,28),(17,65,53,41,29),(18,66,54,42,30),(67,115,103,91,79),(68,116,104,92,80),(69,117,105,93,81),(70,118,106,94,82),(71,119,107,95,83),(72,120,108,96,84),(73,121,109,97,85),(74,122,110,98,86),(75,123,111,99,87),(76,124,112,100,88),(77,125,113,101,89),(78,126,114,102,90),(127,175,163,151,139),(128,176,164,152,140),(129,177,165,153,141),(130,178,166,154,142),(131,179,167,155,143),(132,180,168,156,144),(133,181,169,157,145),(134,182,170,158,146),(135,183,171,159,147),(136,184,172,160,148),(137,185,173,161,149),(138,186,174,162,150),(187,235,223,211,199),(188,236,224,212,200),(189,237,225,213,201),(190,238,226,214,202),(191,239,227,215,203),(192,240,228,216,204)], 
[(1,2,3,4,5,6),(7,8,9,10,11,12),(13,14,15,16,17,18),(19,20,21,22,23,24),(25,26,27,28,29,30),(31,32,33,34,35,36),(37,38,39,40,41,42),(43,44,45,46,47,48),(49,50,51,52,53,54),(55,56,57,58,59,60),(61,62,63,64,65,66),(67,68,69,70,71,72),(73,74,75,76,77,78),(79,80,81,82,83,84),(85,86,87,88,89,90),(91,92,93,94,95,96),(97,98,99,100,101,102),(103,104,105,106,107,108),(109,110,111,112,113,114),(115,116,117,118,119,120),(121,122,123,124,125,126),(127,128,129,130,131,132),(133,134,135,136,137,138),(139,140,141,142,143,144),(145,146,147,148,149,150),(151,152,153,154,155,156),(157,158,159,160,161,162),(163,164,165,166,167,168),(169,170,171,172,173,174),(175,176,177,178,179,180),(181,182,183,184,185,186),(187,188,189,190,191,192),(193,194,195,196,197,198),(199,200,201,202,203,204),(205,206,207,208,209,210),(211,212,213,214,215,216),(217,218,219,220,221,222),(223,224,225,226,227,228),(229,230,231,232,233,234),(235,236,237,238,239,240)], [(1,67,4,70),(2,72,5,69),(3,71,6,68),(7,181,10,184),(8,186,11,183),(9,185,12,182),(13,78,16,75),(14,77,17,74),(15,76,18,73),(19,82,22,79),(20,81,23,84),(21,80,24,83),(25,90,28,87),(26,89,29,86),(27,88,30,85),(31,94,34,91),(32,93,35,96),(33,92,36,95),(37,102,40,99),(38,101,41,98),(39,100,42,97),(43,106,46,103),(44,105,47,108),(45,104,48,107),(49,114,52,111),(50,113,53,110),(51,112,54,109),(55,118,58,115),(56,117,59,120),(57,116,60,119),(61,126,64,123),(62,125,65,122),(63,124,66,121),(127,190,130,187),(128,189,131,192),(129,188,132,191),(133,198,136,195),(134,197,137,194),(135,196,138,193),(139,202,142,199),(140,201,143,204),(141,200,144,203),(145,210,148,207),(146,209,149,206),(147,208,150,205),(151,214,154,211),(152,213,155,216),(153,212,156,215),(157,222,160,219),(158,221,161,218),(159,220,162,217),(163,226,166,223),(164,225,167,228),(165,224,168,227),(169,234,172,231),(170,233,173,230),(171,232,174,229),(175,238,178,235),(176,237,179,240),(177,236,180,239)], 
[(1,137,17,130),(2,136,18,129),(3,135,13,128),(4,134,14,127),(5,133,15,132),(6,138,16,131),(7,121,236,120),(8,126,237,119),(9,125,238,118),(10,124,239,117),(11,123,240,116),(12,122,235,115),(19,146,26,139),(20,145,27,144),(21,150,28,143),(22,149,29,142),(23,148,30,141),(24,147,25,140),(31,158,38,151),(32,157,39,156),(33,162,40,155),(34,161,41,154),(35,160,42,153),(36,159,37,152),(43,170,50,163),(44,169,51,168),(45,174,52,167),(46,173,53,166),(47,172,54,165),(48,171,49,164),(55,182,62,175),(56,181,63,180),(57,186,64,179),(58,185,65,178),(59,184,66,177),(60,183,61,176),(67,194,74,187),(68,193,75,192),(69,198,76,191),(70,197,77,190),(71,196,78,189),(72,195,73,188),(79,206,86,199),(80,205,87,204),(81,210,88,203),(82,209,89,202),(83,208,90,201),(84,207,85,200),(91,218,98,211),(92,217,99,216),(93,222,100,215),(94,221,101,214),(95,220,102,213),(96,219,97,212),(103,230,110,223),(104,229,111,228),(105,234,112,227),(106,233,113,226),(107,232,114,225),(108,231,109,224)], [(1,130),(2,131),(3,132),(4,127),(5,128),(6,129),(7,123),(8,124),(9,125),(10,126),(11,121),(12,122),(13,133),(14,134),(15,135),(16,136),(17,137),(18,138),(19,139),(20,140),(21,141),(22,142),(23,143),(24,144),(25,145),(26,146),(27,147),(28,148),(29,149),(30,150),(31,151),(32,152),(33,153),(34,154),(35,155),(36,156),(37,157),(38,158),(39,159),(40,160),(41,161),(42,162),(43,163),(44,164),(45,165),(46,166),(47,167),(48,168),(49,169),(50,170),(51,171),(52,172),(53,173),(54,174),(55,175),(56,176),(57,177),(58,178),(59,179),(60,180),(61,181),(62,182),(63,183),(64,184),(65,185),(66,186),(67,187),(68,188),(69,189),(70,190),(71,191),(72,192),(73,193),(74,194),(75,195),(76,196),(77,197),(78,198),(79,199),(80,200),(81,201),(82,202),(83,203),(84,204),(85,205),(86,206),(87,207),(88,208),(89,209),(90,210),(91,211),(92,212),(93,213),(94,214),(95,215),(96,216),(97,217),(98,218),(99,219),(100,220),(101,221),(102,222),(103,223),(104,224),(105,225),(106,226),(107,227),(108,228),(109,229),(110,230),(111,231),(112,232),(113,233),(1
14,234),(115,235),(116,236),(117,237),(118,238),(119,239),(120,240)]]) 150 conjugacy classes class 1 2A 2B 2C 2D 2E 2F 2G 3 4A 4B 4C 4D 4E 4F 4G 4H 4I 4J 4K 4L 5A 5B 5C 5D 6A 6B 6C 6D 6E 10A ··· 10L 10M ··· 10T 10U ··· 10AB 12A 12B 12C 12D 15A 15B 15C 15D 20A ··· 20P 20Q ··· 20AF 20AG ··· 20AV 30A ··· 30L 30M ··· 30T 60A ··· 60P order 1 2 2 2 2 2 2 2 3 4 4 4 4 4 4 4 4 4 4 4 4 5 5 5 5 6 6 6 6 6 10 ··· 10 10 ··· 10 10 ··· 10 12 12 12 12 15 15 15 15 20 ··· 20 20 ··· 20 20 ··· 20 30 ··· 30 30 ··· 30 60 ··· 60 size 1 1 1 1 2 2 6 6 2 2 2 2 2 3 3 3 3 6 6 6 6 1 1 1 1 2 2 2 4 4 1 ··· 1 2 ··· 2 6 ··· 6 4 4 4 4 2 2 2 2 2 ··· 2 3 ··· 3 6 ··· 6 2 ··· 2 4 ··· 4 4 ··· 4 150 irreducible representations dim 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 4 4 4 4 type + + + + + + + + + + + + + - image C1 C2 C2 C2 C2 C2 C2 C2 C4 C5 C10 C10 C10 C10 C10 C10 C10 C20 S3 D4 D6 D6 C4○D4 C4×S3 C5×S3 C5×D4 S3×C10 S3×C10 C5×C4○D4 S3×C20 S3×D4 D4⋊2S3 C5×S3×D4 C5×D4⋊2S3 kernel C5×Dic3⋊4D4 Dic3×C20 C5×Dic3⋊C4 C5×D6⋊C4 C15×C22⋊C4 S3×C2×C20 Dic3×C2×C10 C10×C3⋊D4 C5×C3⋊D4 Dic3⋊4D4 C4×Dic3 Dic3⋊C4 D6⋊C4 C3×C22⋊C4 S3×C2×C4 C22×Dic3 C2×C3⋊D4 C3⋊D4 C5×C22⋊C4 C5×Dic3 C2×C20 C22×C10 C30 C2×C10 C22⋊C4 Dic3 C2×C4 C23 C6 C22 C10 C10 C2 C2 # reps 1 1 1 1 1 1 1 1 8 4 4 4 4 4 4 4 4 32 1 2 2 1 2 4 4 8 8 4 8 16 1 1 4 4 Matrix representation of C5×Dic34D4 in GL4(𝔽61) generated by 34 0 0 0 0 34 0 0 0 0 34 0 0 0 0 34 , 1 1 0 0 60 0 0 0 0 0 60 0 0 0 0 60 , 11 0 0 0 50 50 0 0 0 0 50 0 0 0 0 50 , 1 0 0 0 60 60 0 0 0 0 11 3 0 0 0 50 , 1 0 0 0 0 1 0 0 0 0 11 3 0 0 21 50 G:=sub<GL(4,GF(61))| [34,0,0,0,0,34,0,0,0,0,34,0,0,0,0,34],[1,60,0,0,1,0,0,0,0,0,60,0,0,0,0,60],[11,50,0,0,0,50,0,0,0,0,50,0,0,0,0,50],[1,60,0,0,0,60,0,0,0,0,11,0,0,0,3,50],[1,0,0,0,0,1,0,0,0,0,11,21,0,0,3,50] >; C5×Dic34D4 in GAP, Magma, Sage, TeX C_5\times {\rm Dic}_3\rtimes_4D_4 % in TeX G:=Group("C5xDic3:4D4"); // GroupNames label G:=SmallGroup(480,760); // by ID G=gap.SmallGroup(480,760); # by ID 
G:=PCGroup([7,-2,-2,-2,-5,-2,-2,-3,1149,891,226,15686]); // Polycyclic G:=Group<a,b,c,d,e|a^5=b^6=d^4=e^2=1,c^2=b^3,a*b=b*a,a*c=c*a,a*d=d*a,a*e=e*a,c*b*c^-1=d*b*d^-1=b^-1,b*e=e*b,c*d=d*c,c*e=e*c,e*d*e=d^-1>; // generators/relations
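As an independent sanity check (not part of the original page), the 4×4 matrices over 𝔽₆₁ listed in the matrix representation above can be multiplied out in plain Python to confirm that they satisfy the stated presentation; the matrices below are transcribed row by row from the displayed generators.

```python
# Verify the stated relations for the GL(4, F61) matrix representation.
p = 61

def mmul(A, B):
    # 4x4 matrix product over GF(61)
    return [[sum(A[i][k] * B[k][j] for k in range(4)) % p for j in range(4)]
            for i in range(4)]

def mpow(A, n):
    # n-th power by repeated multiplication (n is small here)
    R = [[int(i == j) for j in range(4)] for i in range(4)]
    for _ in range(n):
        R = mmul(R, A)
    return R

I4 = [[int(i == j) for j in range(4)] for i in range(4)]

a = [[34, 0, 0, 0], [0, 34, 0, 0], [0, 0, 34, 0], [0, 0, 0, 34]]
b = [[1, 1, 0, 0], [60, 0, 0, 0], [0, 0, 60, 0], [0, 0, 0, 60]]
c = [[11, 0, 0, 0], [50, 50, 0, 0], [0, 0, 50, 0], [0, 0, 0, 50]]
d = [[1, 0, 0, 0], [60, 60, 0, 0], [0, 0, 11, 3], [0, 0, 0, 50]]
e = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 11, 3], [0, 0, 21, 50]]

# Orders: a^5 = b^6 = d^4 = e^2 = 1, and c^2 = b^3
assert mpow(a, 5) == I4 and mpow(b, 6) == I4
assert mpow(d, 4) == I4 and mpow(e, 2) == I4
assert mpow(c, 2) == mpow(b, 3)

# a commutes with every other generator
for x in (b, c, d, e):
    assert mmul(a, x) == mmul(x, a)

# cbc^-1 = dbd^-1 = b^-1, rewritten as cb = b^-1 c and db = b^-1 d (b^-1 = b^5)
assert mmul(c, b) == mmul(mpow(b, 5), c)
assert mmul(d, b) == mmul(mpow(b, 5), d)

# Remaining commuting relations and ede = d^-1 (d^-1 = d^3)
assert mmul(b, e) == mmul(e, b)
assert mmul(c, d) == mmul(d, c)
assert mmul(c, e) == mmul(e, c)
assert mmul(mmul(e, d), e) == mpow(d, 3)
print("all relations hold")
```

The GAP/Magma snippets on the page remain the authoritative way to reconstruct the group itself; this only checks that the published matrices realize the relations.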
2023-03-27 03:33:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992249011993408, "perplexity": 12277.253964961112}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946637.95/warc/CC-MAIN-20230327025922-20230327055922-00291.warc.gz"}
https://researchonline.ljmu.ac.uk/id/eprint/16423/
# The physics governing the upper truncation mass of the globular cluster mass function

Hughes, ME, Pfeffer, JL, Bastian, N, Martig, M, Kruijssen, JMD, Crain, RA, Reina-Campos, M and Trujillo-Gomez, S (2022) The physics governing the upper truncation mass of the globular cluster mass function. Monthly Notices of the Royal Astronomical Society, 510 (4). pp. 6190-6200. ISSN 0035-8711

The mass function of globular cluster (GC) populations is a fundamental observable that encodes the physical conditions under which these massive stellar clusters formed and evolved. The high-mass end of star cluster mass functions is commonly described using a Schechter function, with an exponential truncation mass $M_{c,*}$. For the GC mass functions in the Virgo galaxy cluster, this truncation mass increases with galaxy mass ($M_{*}$). In this paper we fit Schechter mass functions to the GCs in the most massive galaxy group ($M_{\mathrm{200}} = 5.14 \times 10^{13} M_{\odot}$) in the E-MOSAICS simulations. The fiducial cluster formation model in E-MOSAICS reproduces the observed trend of $M_{c,*}$ with $M_{*}$ for the Virgo cluster. We therefore examine the origin of the relation by fitting $M_{c,*}$ as a function of galaxy mass, with and without accounting for mass loss by two-body relaxation, tidal shocks and/or dynamical friction. In the absence of these mass-loss mechanisms, the $M_{c,*}$-$M_{*}$ relation is flat above $M_* > 10^{10} M_{\odot}$. It is therefore the disruption of high-mass GCs in galaxies with $M_{*}\sim 10^{10} M_{\odot}$ that lowers the $M_{c,*}$ in these galaxies. High-mass GCs are able to survive in more massive galaxies, since there are more mergers to facilitate their redistribution to less-dense environments. 
The $M_{c,*}-M_*$ relation is therefore a consequence of both the formation conditions of massive star clusters and their environmentally-dependent disruption mechanisms.
2023-01-31 19:10:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6405531764030457, "perplexity": 2534.7427395700556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499890.39/warc/CC-MAIN-20230131190543-20230131220543-00167.warc.gz"}
http://www.physicsforums.com/showthread.php?t=102424
## Physics help with densities

hi, where would I start if I wanted to do this? Could you please tell me the formulas I need or how I should approach it.

A metal object has a mass of 135 g and is submerged in water. It displaces 50 cm^3 of water. Calculate the density of the metal and the weight of the metal under the water. The answer must be in SI units.

Start with Archimedes' principle and the formula for the buoyant force. And when they say "calculate the weight under water," the question is vague: "weight" could mean "the force due to gravity," which will still be the same, or "weight" could mean "apparent weight," which would be the force due to gravity minus the buoyant force.

and they would be?

I am completely lost.

do you know Archimedes' principle? It states that the weight of the fluid displaced is the buoyant force felt by the object. For the first part: you do know what density means, do you not? For the second part: how much water is displaced? (The density of water is $1g/cm^3$.)

density is the heaviness of an object, and as for the second part, is the answer 125000, as 135 grams times 50^3? Am I right?

Do you have a textbook? Are you allowed to read it?
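The numbers in this thread can be worked through directly (a sketch of the hints above, taking the "apparent weight" reading of the question and assuming $g = 9.81\ \mathrm{m/s^2}$):

```python
# Given: mass 135 g, displaced volume 50 cm^3, water density 1 g/cm^3
mass_g, volume_cm3 = 135.0, 50.0
g = 9.81                                     # m/s^2 (assumed standard gravity)

# Density is mass divided by volume -- NOT mass times volume cubed
density_g_cm3 = mass_g / volume_cm3          # 2.7 g/cm^3
density_si = density_g_cm3 * 1000.0          # about 2700 kg/m^3 in SI units

# Archimedes: buoyant force = weight of the displaced water
weight = (mass_g / 1000.0) * g               # true weight in newtons
buoyant = 1000.0 * (volume_cm3 * 1e-6) * g   # rho_water * V * g, in newtons
apparent_weight = weight - buoyant           # the "weight under water"

print(density_si, apparent_weight)
```

So the density comes out to 2.7 g/cm³ (2700 kg/m³), and the apparent weight is the true weight (about 1.32 N) minus the buoyant force (about 0.49 N).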
2013-05-22 13:51:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5221301913261414, "perplexity": 806.3527123456704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701806508/warc/CC-MAIN-20130516105646-00015-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.projecteuclid.org/euclid.tmna/1475178333
## Topological Methods in Nonlinear Analysis

### Some existence results for dynamical systems on non-complete Riemannian manifolds

#### Abstract

Let $\mathcal M$ be a non-complete Riemannian manifold with bounded topological boundary and $V\colon \mathcal M \to \mathbb R$ a $C^2$ potential function subquadratic at infinity. In this paper we look for curves $x\colon [0,T]\to\mathcal M$ having prescribed period $T$ or joining two fixed points of $\mathcal M$, satisfying the system $$D_t (\dot x(t))=-\nabla_R V(x(t)),$$ where $D_t(\dot x(t))$ is the covariant derivative of $\dot x$ along the direction of $\dot x$ and $\nabla_R V$ is the Riemannian gradient of $V$. We assume that $V(x) \to -\infty$ as $d(x,\partial\mathcal M)\to 0$ and, in the periodic case, suitable hypotheses on the sectional curvature of $\mathcal M$ at infinity. We use variational methods together with a penalization technique and Morse index estimates.

#### Article information

Source: Topol. Methods Nonlinear Anal., Volume 13, Number 1 (1999), 163-180.

Dates: First available in Project Euclid: 29 September 2016 https://projecteuclid.org/euclid.tmna/1475178333

Mathematical Reviews number (MathSciNet): MR1716590. Zentralblatt MATH identifier: 0942.58022

#### Citation

Mirenghi, Elvira; Tucci, Maria. Some existence results for dynamical systems on non-complete Riemannian manifolds. Topol. Methods Nonlinear Anal. 13 (1999), no. 1, 163--180. https://projecteuclid.org/euclid.tmna/1475178333

#### References

• R. Bartolo, Periodic orbits on Riemannian manifolds with boundary, Discrete Contin. Dynam. Systems, 3 (1997), 439–450
• R. Bartolo and A. Masiello, Morse theory for trajectories of Lagrangian systems on Riemannian manifolds with convex boundary, Adv. Differential Equations, 2 (1997), 593–618
• V. Benci, Periodic solutions of Lagrangian systems on a compact manifold, J. Differential Equations, 63 (1986), 135–161
• ––––, A new approach to the Morse–Conley theory, Proceedings, International Conference, Recent Advances in Hamiltonian Systems 1986 (Dell'Antonio and B. D'Onofrio, eds.), World Scientific, Singapore, L'Aquila (1987), 1–52
• ––––, A new approach to the Morse–Conley theory and some applications, Ann. Mat. Pura Appl. (4), 158 (1991), 231–305
• V. Benci and D. Fortunato, On the existence of infinitely many geodesics on space-time manifolds, Adv. Math., 105 (1994), 1–25
• V. Benci, D. Fortunato and F. Giannoni, On the existence of trajectories in static Lorentz manifolds with singular boundary, Nonlinear Analysis, a tribute in honour of G. Prodi, Quaderni della Scuola Normale Superiore di Pisa (A. Ambrosetti and A. Marino, eds.), Pisa (1991), 109–133
• V. Benci and F. Giannoni, On the existence of closed geodesics on non-compact Riemannian manifolds, Duke Math. J., 68 (1992), 195–215
• S. Cingolani, E. Mirenghi and M. Tucci, Periodic orbits and subharmonics of dynamical systems on non-compact Riemannian manifolds, J. Differential Equations, 130 (1996), 142–161
• E. Mirenghi and M. Tucci, Periodic solutions on non-compact Riemannian manifolds, Ann. Univ. Ferrara Sez. VII, XXXVIII (1992), 65–75
• ––––, Periodic solutions with prescribed energy on non-complete Riemannian manifolds, J. Math. Anal. Appl., 199 (1996), 334–348
• J. Milnor, Morse Theory, Ann. of Math. Stud., 51, Princeton Univ. Press, Princeton (1963)
• J. Nash, The imbedding problem for Riemannian manifolds, Ann. Math., 63 (1956), 20–63
• B. O'Neill, Semi-Riemannian Geometry with Applications to Relativity, Pure Appl. Math., 103, Academic Press, New York (1983)
• R. S. Palais, Homotopy theory of infinite dimensional manifolds, Topology, 5 (1966), 1–16
• ––––, Morse theory on Hilbert manifolds, Topology, 2 (1963), 299–340
• A. Salvatore, On the existence of infinitely many periodic solutions on non-complete Riemannian manifolds, J. Differential Equations, 120 (1995), 198–214
• J. T. Schwartz, Nonlinear Functional Analysis, Gordon and Breach, New York (1969)
http://windowsontheory.org/2012/11/16/tennis-for-the-people-ii/?like=1&_wpnonce=aaa135d5d2
I continue the discussion from the last post. We are trying to add unpredictability to tennis by looking for a monotone, transitive and balanced function $f$ such that $E_p(f)$ has a wide threshold window; that is, we want the range of $p$ where $E_p(f)$ is, say, between 0.01 and 0.99 to be as large as possible. A formal treatment of this problem can be found in the seminal paper of Friedgut and Kalai. The major simplifying step we take is to look at the derivative of $E_p(f)$ at the point $p=1/2$ as a proxy for the width of the window. Think of it as a first-order linear approximation for the (inverse of the) width of the threshold window: the smaller the derivative, the wider the window. The main tool to help us reason about the derivative is Russo's formula, but first a definition: given $x_1,\dots,x_n$ we say that bit $x_i$ is pivotal if flipping $x_i$ changes the value of $f$. The total influence of the function is the expected number of pivotal bits when $x_1,\dots,x_n$ are drawn uniformly at random. Russo's formula states that for monotone functions the derivative of $E_p(f)$ at $p=1/2$ is exactly the total influence of $f$! One way to see why Russo's formula makes sense is to think about the definition of a derivative and observe that increasing $p$ by a small amount can be thought of as picking a bit at random and setting it to 1. Now, the probability that $f$ changes its value is proportional to the number of pivotal bits. We now face a well-defined challenge: come up with a function satisfying our constraints that has small total influence. As a sanity check, it is easy to observe that the total influence of the majority function is indeed roughly $\sqrt n$. The reason is that with probability about $1/\sqrt n$ the majority is by exactly one point, in which case half the bits are pivotal. What about majority of majorities, or better yet, recursive majority of three values?
Well, the total influence of recursive majority can be computed to be roughly $n^{0.37}$, which is an improvement over simple majority (not surprising, since simple majority can be shown to have the maximal total influence of all monotone Boolean functions). But is it the best? The answer is given by the celebrated theorem of Kahn, Kalai and Linial, which implies in our case that the total influence is $\Omega(\log n)$. A tight example was provided by Ben-Or and Linial in the TRIBES function. In the TRIBES function the bits are partitioned into sets ('tribes') of roughly $\log n - \log\log n$ bits each. Player '1' wins if there is a tribe with all its bits set to 1. Player '0' wins otherwise. The function is clearly transitive and monotone. Exact parameters can be set so that it is approximately balanced. To see that the total influence is small, observe that for a bit to be pivotal it has to belong to a tribe where all the other bits are 1, which happens with probability about $\log n/n$, so the total influence is $O(\log n)$. So is TRIBES a good candidate for an athletic game? I don't think so, for several reasons. First, it is not exactly balanced; I can't imagine an athlete happy with a game where he is unlikely to win even when competing on equal grounds point by point. Second, there is a difference between the role of player '1', which plays offense trying to obtain a tribe, and player '0', which plays defense trying to block player '1'. Both problems are rectified by adding another requirement. We want the function to be symmetric, that is $f(1-x)=1-f(x)$. This means that when flipping all bits the outcome also flips, and since $f$ is also monotone it follows that $f$ is balanced.

## Cycle-Run

Here is a suggestion for a transitive, monotone, symmetric function with low total influence. We call it Cycle-Run, and it can be viewed as a symmetric version of TRIBES. Call a consecutive sequence of 1's a '1-run'. Similarly, a consecutive sequence of zeros is a '0-run'.
We allow runs to wrap around, so if a run reaches $x_n$ it may continue with $x_1$. The value of $f$ is determined as follows:

1. Check which player has the longest run.
2. In case of a tie, check which player has a larger number of maximal runs.
3. In case of a tie, check the total length of the segments between maximal runs, where a segment starting from a 1-run clockwise is counted for the 1-player and a segment starting at a 0-run clockwise is counted for the 0-player. The player with the larger total count is declared the winner.

In the example below both players have a single maximal run of length 3, but player 0 wins the tie breaker 9-2. Monotonicity and symmetry are readily verifiable. Transitivity is obtained via rotations of the cycle. Why is the total influence logarithmic? One way to see it is to observe that the expected number of runs of length $k$ is $n2^{-k}$, so the expected number of runs longer than $\log n$ is smaller than 1. Further work is required, but basically this is the reason. If you want to do the calculations yourself you may find this survey useful. So is this useful? In practice most tennis matches consist of roughly 200-400 points. Below we plot Majority, Recursive Majority and Cycle-Run for n=243. We see that indeed Majority has the narrowest threshold window and Cycle-Run the widest. The difference between Cycle-Run and Recursive Majority is not that big. When $p=0.4$ the probability of an upset rises from roughly 3% to 5.3%. Asymptotics, however, don't lie, and when n is increased to 2187 the differences are quite striking. Now, when $p=0.45$, for simple majority the probability of an upset is negligible. For recursive majority it is less than 2%, and for Cycle-Run it is more than 11%. What about real life? I have the data of three tennis matches. In these three examples all rules have the same result.
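The rules above can be partially sketched in code. The sketch below is my own, not code from the post; it implements only rule 1 — longest run with wrap-around — and reports a tie otherwise, since rules 2 and 3 need extra bookkeeping over the maximal runs:

```python
def longest_run(bits, v):
    """Length of the longest circular run of value v in bits."""
    n = len(bits)
    if all(b == v for b in bits):
        return n
    best = cur = 0
    for b in bits + bits:          # doubling the list handles wrap-around runs
        cur = cur + 1 if b == v else 0
        best = max(best, cur)
    return best

def cycle_run_winner(bits):
    """Rule 1 only: the player with the strictly longer run wins."""
    r1, r0 = longest_run(bits, 1), longest_run(bits, 0)
    if r1 == r0:
        return None                # tie: rules 2-3 would decide
    return 1 if r1 > r0 else 0

print(cycle_run_winner([1, 1, 1, 0, 0]))   # 1
print(cycle_run_winner([1, 0, 0, 1, 1]))   # 1 (the 1-run wraps around)
```

Note that swapping 0's and 1's simply swaps the two run lengths, so off the tie cases this sketch already satisfies the symmetry requirement $f(1-x)=1-f(x)$.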
An anecdotal observation is that at least in these cases the winning run tends to be very close to the end of the match, suggesting that the mental aspect of the game plays an important role. So should we petition the ATP for a rule change? Credits: This post is the outcome of discussions with Parikshit Gopalan, Yuval Peres, Omer Reingold and Kunal Talwar. The data was given to me through the generosity of Daniele Paserman and the facilitation of Ran Abramitzky. Professional tennis is a good setting for researching human decision-making when payoffs are well defined. If you want to read some cool papers, check out a paper by Daniele and a paper by Ran.

November 17, 2012 12:04 am

Excellent posts, Udi. One thing slightly complicating the tennis application is that the player serving has an advantage, so it is a bit too simplistic to simply say that the game consists of a sequence of "same" bits x_i, as the decision must be made who is serving for each i. I suppose one simple rule is to make the players alternate their serves, although this is slightly not nice as people like to serve for a while; it gives them a sense of continuity. To fix this, we can play n/2 points as follows: have player 1 start serving, and then the winner of the point serves the next point. Then, after n/2 points the same is repeated with player 2 starting. And then compare the maximum runs in both cases, with something ugly in case of a tie. (Or, more generally, have 2t "sets" of n/2t points each, with something to aggregate the 2t maximal runs.) This is also nice as it gives an explicit reward for "keep holding my serve". Another advantage of this is that the rules become very clean. Serve until you lose serve, and try to maximize the number of points you win consecutively. (Here I ignore a slight asymmetry at the beginning, but I suspect its effect is negligible if n is large.)
With these rules, we get something VERY similar to badminton and volleyball (where a new set has a different player starting, or maybe the loser, I forgot). The main differences are: (a) the maximum total number of sets is odd, not even (so no need for "ugly" rule 3); (b) more importantly, you are trying to maximize the length of the run, not the number of sets you win (which is important for influence); (c) not important, but in both badminton and volleyball the server is actually at a DISADVANTAGE, so the runs are likely very short, even when one player is much better. (Does it mean we should change it, so that the winner receives in these games? Not sure…) Overall, the main question is if your simplified analysis through influence still holds when there are two probabilities involved (Pr(serving 1 wins)=p and Pr(receiving 1 wins)=q). What do you think? Yevgeniy
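As a sanity check on the influence numbers quoted in the post, total influence can be computed exactly by enumeration for small n. This is my own sketch, not code from the post:

```python
from itertools import product

def total_influence(f, n):
    """Expected number of pivotal bits of f under the uniform
    distribution on {0,1}^n, computed by exact enumeration."""
    total = 0
    for x in product((0, 1), repeat=n):
        v = f(x)
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            if f(y) != v:          # flipping bit i changes f: bit i is pivotal
                total += 1
    return total / 2 ** n

majority = lambda x: int(sum(x) * 2 > len(x))

# For MAJ_5 a bit is pivotal iff the other four bits split 2-2, which has
# probability C(4,2)/2^4 = 6/16, so the total influence is 5 * 6/16 = 1.875.
print(total_influence(majority, 5))   # 1.875
```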
https://tex.stackexchange.com/questions/290466/how-to-have-different-head-footer-rule-widths-on-odd-and-even-pages-using-fanc
# How to have different head & footer rule widths on odd and even pages using fancyhdr?

I am trying to create a document that has just a footer on odd pages and just a header on even pages. On the odd pages, there should be a head rule width of 0pt and a foot rule width of 0.4pt. On the even pages, there should be a foot rule width of 0pt and a head rule width of 0.4pt. I know how to set up everything but the rule width alternations. How can I alternate `\headrulewidth` and `\footrulewidth`? Here is what I have so far:

```latex
\documentclass[11pt,twoside,twocolumn]{article}
\usepackage{fancyhdr}
\fancypagestyle{normalpage}{
  \renewcommand{\footrulewidth}{0.4pt}
  \fancyfoot[LO]{{\footnotesize \textbf{Title}} \textbullet ~ \footnotesize Vol. 1, Number 1, 2016 \textbullet ~Text}
  \fancyfoot[CO]{}
  \fancyfoot[RO]{ \thepage}
  \fancyhead[LE]{{\footnotesize \textbf{Title}} \textbullet ~ \footnotesize Vol. 1, Number 1, 2016 \textbullet ~Text}
  \fancyfoot[LE]{}
  \fancyfoot[CE]{}
  \fancyfoot[RE]{}
}
\pagestyle{normalpage}
\begin{document}
Text
\cleardoublepage
Text
\end{document}
```

• If you want to use fancyhdr then you need \pagestyle{fancy}. The header/footer width is the same as textwidth. If you want to exceed the margins you will need to use things like \rlap, \llap and \hbox to ??. – John Kormylo Feb 1 '16 at 3:42
• @JohnKormylo That's not true. You don't have to use \pagestyle{fancy} even though that is the most common way to do it, because the package defines the style. – cfr Feb 1 '16 at 21:25

Something like this?

```latex
\documentclass[11pt,twoside,twocolumn]{article}
\usepackage{fancyhdr}
\let\origfootrule\footrule
\fancypagestyle{normalpage}{%
  \fancyhf{}%
  \renewcommand{\footrulewidth}{0.4pt}%
  \fancyhf[rof,reh]{\thepage}%
  \fancyhf[lof,leh]{{\footnotesize \textbf{Title}} \textbullet ~ \footnotesize Vol. 1, Number 1, 2016 \textbullet ~Text}%
```
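The answer above is cut off in this copy of the page. Independently of it, one commonly used approach (a sketch of mine, not the answerer's code) is to make the width macros test the parity of the page counter; in a `twoside` document the test is evaluated when the header and footer are built, which gives the requested odd/even alternation in ordinary cases:

```latex
% Hypothetical addition to the page style above -- not from the thread.
% Odd pages: no head rule, 0.4pt foot rule; even pages: the reverse.
\renewcommand{\headrulewidth}{\ifodd\value{page} 0pt\else 0.4pt\fi}
\renewcommand{\footrulewidth}{\ifodd\value{page} 0.4pt\else 0pt\fi}
```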
https://answers.gazebosim.org/answers/1751/revisions/
# Revision history [back]

Hi,

If I am not mistaken, ROS Fuerte comes with Gazebo version 1.0. Try using this tutorial for importing meshes. If the model is not found, try copying it to the ~/.gazebo/models folder, or fix the paths. Spawning from the launch file could also be problematic in that version; try spawning from the .world file. Take a look at this answer.

UPDATE: how to set the paths (from the install tutorial)

```shell
echo "export LD_LIBRARY_PATH=<install_path>/local/lib:\$LD_LIBRARY_PATH" >> ~/.bashrc
echo "export PATH=<install_path>/local/bin:\$PATH" >> ~/.bashrc
echo "export PKG_CONFIG_PATH=<install_path>/local/lib/pkgconfig:\$PKG_CONFIG_PATH" >> ~/.bashrc
echo "export OGRE_RESOURCE_PATH=/usr/lib/<see above for notes on Ogre>/OGRE-1.7.4" >> ~/.bashrc
echo "source <install_path>/share/gazebo-1.0.0/setup.bash" >> ~/.bashrc
source ~/.bashrc
```

Cheers,
Andrei
http://mathhelpforum.com/pre-calculus/198717-even-decreasing-function.html
# Math Help - even and decreasing function

1. ## even and decreasing function

Is it possible for a function to be even as well as decreasing on its domain?

2. ## Re: even and decreasing function

Originally Posted by ayushdadhwal: Is it possible for a function to be even as well as decreasing on its domain?

It depends on the definition of decreasing ... the definition used in my calculus text is: a function is decreasing if $b > a$ implies $f(a) \ge f(b)$. Technically, any constant function would be even and satisfy this definition.
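For the strict definition of decreasing the answer flips; here is a short argument (mine, not from the thread):

```latex
% With the strict definition (a < b \Rightarrow f(a) > f(b)), an even
% function cannot be decreasing on a domain containing some x > 0
% together with -x, since
%     -x < x \;\Rightarrow\; f(-x) > f(x)   % strictly decreasing
% while
%     f(-x) = f(x)                          % even
% -- a contradiction.
```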
https://socratic.org/questions/how-do-you-find-the-volume-of-the-bounded-region-if-y-sinx-y-0-from-x-pi-4-x-3pi
# How do you find the volume of the bounded region if y = sinx, y = 0 from x = pi/4, x = 3pi/4, revolved around the y-axis?

Jun 19, 2015

Use the shell method.

#### Explanation:

A representative shell has volume: $2 \pi r h \cdot \text{thickness}$ with $r = x$, $h = \sin x$ and $\text{thickness} = \mathrm{dx}$. So the volume of the solid is given by:

$V = {\int}_{\frac{\pi}{4}}^{\frac{3 \pi}{4}} 2 \pi x \sin x \mathrm{dx}$

Use integration by parts to get:

$V = 2\pi {\left[\sin x - x \cos x\right]}_{\pi/4}^{3\pi/4}$

Then use trigonometry and arithmetic to get:

$V = 2 \pi \cdot \frac{\pi \sqrt{2}}{2} = {\pi}^{2} \sqrt{2}$
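The closed form can be double-checked numerically (a sketch of mine, not part of the original answer) with a simple midpoint rule on the shell integral:

```python
import math

# Approximate V = integral from pi/4 to 3pi/4 of 2*pi*x*sin(x) dx
# with the midpoint rule and compare to the closed form pi^2 * sqrt(2).
a, b, n = math.pi / 4, 3 * math.pi / 4, 100_000
h = (b - a) / n
V = sum(2 * math.pi * (a + (k + 0.5) * h) * math.sin(a + (k + 0.5) * h)
        for k in range(n)) * h
print(V, math.pi ** 2 * math.sqrt(2))   # both approximately 13.9577
```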
https://bitbucket.org/dhellmann/virtualenvwrapper-hg/src/e01216610fe7/docs/source/design.rst
# Why virtualenvwrapper is (Mostly) Not Written In Python If you look at the source code for virtualenvwrapper you will see that most of the interesting parts are implemented as shell functions in virtualenvwrapper.sh. The hook loader is a Python app, but doesn't do much to manage the virtualenvs. Some of the most frequently asked questions about virtualenvwrapper are "Why didn't you write this as a set of Python programs?" or "Have you thought about rewriting it in Python?" For a long time these questions baffled me, because it was always obvious to me that it had to be implemented as it is. But they come up frequently enough that I feel the need to explain. ## tl;dr: POSIX Made Me Do It The choice of implementation language for virtualenvwrapper was made for pragmatic, rather than philosophical, reasons. The wrapper commands need to modify the state and environment of the user's current shell process, and the only way to do that is to have the commands run inside that shell. That resulted in me writing virtualenvwrapper as a set of shell functions, rather than separate shell scripts or even Python programs. ## Where Do POSIX Processes Come From? New POSIX processes are created when an existing process invokes the fork() system call. The invoking process becomes the "parent" of the new "child" process, and the child is a full clone of the parent. The semantic result of fork() is that an entire new copy of the parent process is created. In practice, optimizations are normally made to avoid copying more memory than is absolutely necessary (frequently via a copy-on-write system). But for the purposes of this explanation it is sufficient to think of the child as a full replica of the parent. The important parts of the parent process that are copied include dynamic memory (the stack and heap), static stuff (the program code), resources like open file descriptors, and the environment variables exported from the parent process. 
Inheriting environment variables is a fundamental aspect of the way POSIX programs pass state and configuration information to one another. A parent can establish a series of name=value pairs, which are then given to the child process. The child can access them through functions like getenv(), setenv() (and in Python through os.environ). The choice of the term inherit to describe the way the variables and their contents are passed from parent to child is significant. Although a child can change its own environment, it cannot directly change the environment settings of its parent because there is no system call to modify the parental environment settings.

## How the Shell Runs a Program

When a shell receives a command to be executed, either interactively or by parsing a script file, and determines that the command is implemented in a separate program file, it uses fork() to create a new process and then inside that process it uses one of the exec functions to start the specified program. The language that program is written in doesn't make any difference in the decision about whether or not to fork(), so even if the "program" is a shell script written in the language understood by the current shell, a new process is created. On the other hand, if the shell decides that the command is a function, then it looks at the definition and invokes it directly. Shell functions are made up of other commands, some of which may result in child processes being created, but the function itself runs in the original shell process and can therefore modify its state, for example by changing the working directory or the values of variables. It is possible to force the shell to run a script directly, and not in a child process, by sourcing it. The source command causes the shell to read the file and interpret it in the current process.
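The inheritance rule described above — a child receives a copy of the parent's name=value pairs but cannot write changes back — can be seen directly from Python (a standalone demo, not part of virtualenvwrapper):

```python
import os
import subprocess
import sys

os.environ["DEMO_VAR"] = "from-parent"       # set in the parent process

child_code = (
    "import os; "
    "assert os.environ['DEMO_VAR'] == 'from-parent'; "  # inherited copy
    "os.environ['DEMO_VAR'] = 'from-child'"             # child-local change
)
subprocess.run([sys.executable, "-c", child_code], check=True)

# The child's modification never reaches the parent:
print(os.environ["DEMO_VAR"])   # from-parent
```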
Again, as with functions, the contents of the file may cause child processes to be spawned, but there is not a second shell process interpreting the series of commands.

## What Does This Mean for virtualenvwrapper?

The original and most important features of virtualenvwrapper are automatically activating a virtualenv when it is created by mkvirtualenv and using workon to deactivate one environment and activate another. Making these features work drove the implementation decisions for the other parts of virtualenvwrapper, too. Environments are activated interactively by sourcing bin/activate inside the virtualenv. The activate script does a few things, but the important parts are setting the VIRTUAL_ENV variable and modifying the shell's search path through the PATH variable to put the bin directory for the environment on the front of the path. Changing the path means that the programs installed in the environment, especially the python interpreter there, are found before other programs with the same name. Simply running bin/activate, without using source, doesn't work because it sets up the environment of the child process without affecting the parent. In order to source the activate script in the interactive shell, both mkvirtualenv and workon also need to be run in that shell process.

## Why Choose One When You Can Have Both?

The hook loader is one part of virtualenvwrapper that is written in Python. Why? Again, because it was easier. Hooks are discovered using setuptools entry points, because after an entry point is installed the user doesn't have to take any other action to allow the loader to discover and use it. It's easy to imagine writing a hook to create new files on the filesystem (by installing a package, instantiating a template, etc.). How, then, do hooks running in a separate process (the Python interpreter) modify the shell environment to set variables or change the working directory? They cheat, of course.
Each hook point defined by virtualenvwrapper actually represents two hooks. First, the hooks meant to be run in Python are executed. Then the "source" hooks are run, and they print out a series of shell commands. All of those commands are collected, saved to a temporary file, and then the shell is told to source the file. Starting up the hook loader turns out to be way more expensive than most of the other actions virtualenvwrapper takes, though, so I am considering making its use optional. Most users customize the hooks by using shell scripts (either globally or in the virtualenv). Finding and running those can be handled by the shell quite easily.

## Implications for Cross-Shell Compatibility

Other than requests for a full-Python implementation, the other most common request is to support additional shells. fish comes up a lot, as do various Windows-only shells. The officially supported shells all have a common enough syntax that the same implementation works for each. Supporting other shells would require rewriting much, if not all, of the logic using an alternate syntax -- those other shells are basically different programming languages. So far I have dealt with the ports by encouraging other developers to handle them, and then trying to link to and otherwise promote the results.

## Not As Bad As It Seems

Although there are some special challenges created by the requirement that the commands run in a user's interactive shell (see the many bugs reported by users who alias common commands like rm and cd), using the shell as a programming language holds up quite well. The shells are designed to make finding and executing other programs easy, and especially to make it easy to combine a series of smaller programs to perform more complicated operations. As that's what virtualenvwrapper is doing, it's a natural fit.
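The two-phase "source hook" pattern described above can be sketched as follows (assumed names throughout — this illustrates the pattern, not virtualenvwrapper's actual hook loader):

```python
import os
import tempfile

def run_source_hooks():
    """Each 'source' hook returns a fragment of shell code; the loader
    only collects the fragments, it never executes them itself."""
    return ["export DEMO_HOOK_RAN=1", 'echo "hook ran"']

# Collect the emitted commands and save them to a temporary file...
commands = "\n".join(run_source_hooks()) + "\n"
fd, path = tempfile.mkstemp(suffix=".sh")
with os.fdopen(fd, "w") as f:
    f.write(commands)

# ...which the interactive shell is then told to `source <path>`,
# so the commands run in (and can modify) the user's own shell.
print(path)
```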
https://www.mathblog.dk/tag/combinatorics/page/2/
# Combinatorics

## Project Euler 92: Investigating a square digits number chain with a surprising property.

Problem 92 of Project Euler is one of the most solved puzzles in this range, so by that measure it should be a pretty easy one to chew through. The problem text reads:

A number chain is created by continuously adding the square of the digits in a number to form a new number until it has been seen before. For example,

44 → 32 → 13 → 10 → 1 → 1

85 → 89 → 145 → 42 → 20 → 4 → 16 → 37 → 58 → 89

Therefore any chain that arrives at 1 or 89 will become stuck in an endless loop. What is most amazing is that EVERY starting number will eventually arrive at 1 or 89. How many starting numbers below ten million will arrive at 89?

I have found two methods to solve this: one where we go through all 10,000,000 numbers and check what their cycle will be, with a bit of help from some caching, and one where we exploit the fact that the order of the digits doesn't matter, which significantly reduces the cases we need to check.

Posted by Kristian in Project Euler, 12 comments

## Project Euler 90: An unexpected way of using two cubes to make a square.

In the ninetieth problem of Project Euler we are back in combinatorics with a problem description that reads:

Each of the six faces on a cube has a different digit (0 to 9) written on it; the same is done to a second cube. By placing the two cubes side-by-side in different positions we can form a variety of 2-digit numbers. For example, the square number 64 could be formed. In fact, by carefully choosing the digits on both cubes it is possible to display all of the square numbers below one-hundred: 01, 04, 09, 16, 25, 36, 49, 64, and 81. For example, one way this can be achieved is by placing {0, 5, 6, 7, 8, 9} on one cube and {1, 2, 3, 4, 8, 9} on the other cube.
However, for this problem we shall allow the 6 or 9 to be turned upside-down so that an arrangement like {0, 5, 6, 7, 8, 9} and {1, 2, 3, 4, 6, 7} allows for all nine square numbers to be displayed; otherwise it would be impossible to obtain 09. In determining a distinct arrangement we are interested in the digits on each cube, not the order.

{1, 2, 3, 4, 5, 6} is equivalent to {3, 6, 4, 1, 2, 5}

{1, 2, 3, 4, 5, 6} is distinct from {1, 2, 3, 4, 5, 9}

But because we are allowing 6 and 9 to be reversed, the two distinct sets in the last example both represent the extended set {1, 2, 3, 4, 5, 6, 9} for the purpose of forming 2-digit numbers. How many distinct arrangements of the two cubes allow for all of the square numbers to be displayed?

Posted by Kristian in Project Euler, 10 comments

## Project Euler 85: Investigating the number of rectangles in a rectangular grid

And now for something completely different... or maybe not: as we will see, Problem 85 of Project Euler offers us a new problem where we can use the same ideas as in the solution to a previous problem. So let's start with the problem text:

By counting carefully it can be seen that a rectangular grid measuring 3 by 2 contains eighteen rectangles. Although there exists no rectangular grid that contains exactly two million rectangles, find the area of the grid with the nearest solution.

A nice and short problem text. I have found two possible solutions for the problem: one using brute force, and one applying some combinatorics to find an analytical solution for the number of rectangles we have for a given size.

Posted by Kristian in Project Euler, 17 comments

## Project Euler 77: What is the first value which can be written as the sum of primes in over five thousand different ways?
In problem 77 of Project Euler we are asked the following question

It is possible to write ten as the sum of primes in exactly five different ways:

7 + 3
5 + 5
5 + 3 + 2
3 + 3 + 2 + 2
2 + 2 + 2 + 2 + 2

What is the first value which can be written as the sum of primes in over five thousand different ways?

Posted by Kristian in Project Euler, 6 comments

## Project Euler 53: How many values of C(n,r), for 1 ≤ n ≤ 100, exceed one-million?

I have found Problem 53 of Project Euler to be a really interesting problem to work with, because there is a brute force solution and then there is a much more elegant solution if you dive into the mathematics behind the question. The problem reads

There are exactly ten ways of selecting three from five, 12345:

123, 124, 125, 134, 135, 145, 234, 235, 245, and 345

In combinatorics, we use the notation, 5C3 = 10.

In general, $^nC_r = \frac{n!}{r!(n-r)!}$ where $r \leq n$, $n! = n\cdot (n-1)\cdot \dots \cdot 3 \cdot 2\cdot 1$ and $0! = 1$

It is not until n = 23, that a value exceeds one-million: 23C10 = 1144066. How many, not necessarily distinct, values of nCr, for 1 ≤ n ≤ 100, are greater than one-million?

I will present two different solution strategies, each with two different solutions, so let's jump right into it.

Continue reading →

Posted by Kristian in Project Euler, 17 comments

## Project Euler 51: Find the smallest prime which, by changing the same part of the number, can form eight different primes

When I worked on this problem I realised that I value two things about code – speed and simplicity. I always try to obtain both, but in this case I had to sacrifice the simplicity in order to gain speed, as we shall see. But before we dive into the code let's look at Problem 51, which is the first problem on the second page of Project Euler. It reads

By replacing the 1st digit of *3, it turns out that six of the nine possible values: 13, 23, 43, 53, 73, and 83, are all prime.
By replacing the 3rd and 4th digits of 56**3 with the same digit, this 5-digit number is the first example having seven primes among the ten generated numbers, yielding the family: 56003, 56113, 56333, 56443, 56663, 56773, and 56993. Consequently 56003, being the first member of this family, is the smallest prime with this property.

Find the smallest prime which, by replacing part of the number (not necessarily adjacent digits) with the same digit, is part of an eight prime value family.

This problem can be solved in multiple ways. You could run your way through all primes and check if each one is part of a family, or you could do a bit of analysis on the problem to narrow down the problem you have to solve by code. I have chosen the latter.

Continue reading →

Posted by Kristian in Project Euler, 21 comments

## Project Euler 43: Find the sum of all pandigital numbers with an unusual sub-string divisibility property

When I first saw pandigital numbers I thought it was just a curious thing that we would visit once. I was wrong, as Problem 43 of Project Euler is also about a special group of pandigital numbers. The problem reads

The number, 1406357289, is a 0 to 9 pandigital number because it is made up of each of the digits 0 to 9 in some order, but it also has a rather interesting sub-string divisibility property.

Let d1 be the 1st digit, d2 be the 2nd digit, and so on. In this way, we note the following:

• d2d3d4=406 is divisible by 2
• d3d4d5=063 is divisible by 3
• d4d5d6=635 is divisible by 5
• d5d6d7=357 is divisible by 7
• d6d7d8=572 is divisible by 11
• d7d8d9=728 is divisible by 13
• d8d9d10=289 is divisible by 17

Find the sum of all 0 to 9 pandigital numbers with this property.

We will take two different approaches to this. First we will explore the brute force of generating all permutations, and after that we will use the divisibility requirements to limit the number of permutations we have to explore.
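The brute-force approach only needs a predicate for the sub-string divisibility property. A minimal Python sketch (the function name `has_substring_divisibility` is my own, not from the post) could look like this:

```python
# Primes paired with the 3-digit windows d2d3d4, d3d4d5, ..., d8d9d10.
PRIMES = [2, 3, 5, 7, 11, 13, 17]

def has_substring_divisibility(digits):
    """Check the sub-string divisibility property on a 10-character
    digit string; window i runs from digit i+2 to digit i+4 (1-based)."""
    return all(int(digits[i + 1:i + 4]) % p == 0
               for i, p in enumerate(PRIMES))

# The example from the problem statement has the property:
print(has_substring_divisibility("1406357289"))  # True
```

Summing this predicate over all permutations of "0123456789" would then give the brute-force answer.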
Continue reading →

Posted by Kristian in Project Euler, 18 comments

## Project Euler 12 – Revisited

A while ago I treated Problem 12 of Project Euler and came up with several solutions, as seen here. Let's just repeat the problem here

The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be:

1, 3, 6, 10, 15, 21, 28, 36, 45, 55, …

Let us list the factors of the first seven triangle numbers:

1: 1
3: 1,3
6: 1,2,3,6
10: 1,2,5,10
15: 1,3,5,15
21: 1,3,7,21
28: 1,2,4,7,14,28

We can see that 28 is the first triangle number to have over five divisors. What is the value of the first triangle number to have over five hundred divisors?

Posted by Kristian in Project Euler, 4 comments

## Project Euler 24: What is the millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9?

Problem 24 of Project Euler is about permutations, which is in the field of combinatorics. The posed question is as follows

A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are:

012 021 102 120 201 210

What is the millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9?

There is a clever way to solve it, and a brute force solution. Since the brute force algorithm still requires some thought on how to generate permutations, I will cover both methods in this post.

Continue reading →

Posted by Kristian in Project Euler, 38 comments

## Project Euler 15: Routes through a 20×20 grid

The problem description in Problem 15 of Project Euler contains a figure, which I won't copy, so go ahead and read the full description at the Project Euler site. The problem can be understood without it though.
The problem reads

Starting in the top left corner of a 2×2 grid, there are 6 routes (without backtracking) to the bottom right corner.

How many routes are there through a 20×20 grid?

My first question for many of the problems has been – can it be brute forced? And my best answer to that is "probably", but I cannot figure out how to generate all the routes. So instead I will give you two other approaches, which are both efficient. One is inspired by dynamic programming and the other gives an analytic solution using combinatorics.

Continue reading →

Posted by Kristian in Project Euler, 38 comments
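Both of the approaches mentioned in the teaser are easy to sketch in a few lines of Python (names are mine; the post's own code is behind the "Continue reading" link):

```python
from math import comb

def routes_dp(n):
    """Dynamic programming: the number of routes into each grid node is
    the sum of the counts of the node above and the node to the left."""
    row = [1] * (n + 1)          # top row of the grid: one route each
    for _ in range(n):
        for i in range(1, n + 1):
            row[i] += row[i - 1]  # routes from above + routes from the left
    return row[n]

def routes_binomial(n):
    """Combinatorics: a route is any arrangement of n 'rights' and
    n 'downs' among 2n moves, i.e. C(2n, n)."""
    return comb(2 * n, n)

print(routes_dp(2))   # 6, matching the 2x2 example in the problem
print(routes_dp(20))  # 137846528820
```

Both functions agree for every grid size, which is a nice cross-check of the combinatorial argument.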
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1224.46009
Zbl 1224.46009
Karakaya, Vatan; Polat, Harun
Some new paranormed sequence spaces defined by Euler and difference operators. (English)
[J] Acta Sci. Math. 76, No. 1-2, 87-100 (2010). ISSN 0001-6969

A linear topological space $X$ over the real field $\Bbb R$ is said to be a paranormed space if there is a subadditive function $h: X\to \Bbb R$ such that $h(\theta)=0$, $h(x)=h(-x)$ and the scalar multiplication is continuous, where $\theta$ denotes the zero vector in $X$.

In the paper under review, the authors introduce some new paranormed sequence spaces defined by Euler and difference operators (i.e., the sequence spaces $e_0^r(\Delta,p)$, $e_c^r(\Delta,p)$, $e^r_\infty(\Delta,p)$ with $p=(p_k)_{k\in\Bbb N}$ a bounded sequence of positive real numbers) and study some properties of these spaces. In particular, the authors give an inclusion relation between these sequence spaces and study their topological structure. Also, the basis and the $\alpha$-, $\beta$-, and $\gamma$-duals of these spaces are given.
[Angela Albanese (Lecce)]

MSC 2000:
*46A45 Sequence spaces
46B45 Banach sequence spaces

Keywords: paranormed sequence space; matrix mapping; Köthe-Toeplitz duals; Euler and difference sequence spaces
http://tex.stackexchange.com/questions/20259/how-do-i-put-loops-at-an-angle-in-tikz/20263
# How do I put loops at an angle in TikZ? I'm trying to draw a graph in TikZ that has self-loops, and I'd like to be able to specify their position not only as "right" or "above", but something like "above right", i.e. at a 45° angle. How do I do that? - Welcome to TeX.SE! – ℝaphink Jun 8 '11 at 15:12 Welcome! Not sure if I understand you correctly, but are you looking for something like that shown in the first example of section 51.4, "Loops", of the PGF (v. 2.10) manual? (\node [circle,draw] {a} edge [in=30,out=60,loop] ();) – Torbjørn T. Jun 8 '11 at 15:25 That's it! Many thanks @torbjorn-t! – Anto Jun 8 '11 at 15:32 No problem. I added it as an answer as well. – Torbjørn T. Jun 8 '11 at 15:34 There is an example in the pgf/TikZ manual (for pgf v.3.0.0 dated December 20 2013, section 70.4 Loops) that demonstrates this: \documentclass{article} \usepackage{tikz} \begin{document} \begin{tikzpicture} \node [circle,draw] {a} edge [in=30,out=60,loop] (); \end{tikzpicture} \end{document} - @Anto: it would be great if you would accept the answer, you can do it by clicking on the check mark on the left of the answer. – Stefan Kottwitz Nov 27 '11 at 18:39
https://brilliant.org/problems/infinite-factorial-summation-with-a-twist/
# Infinite Factorial Summation with a Twist

Calculus Level 4

$\large 1 + \dfrac{1 + \frac{1}{1!}}{2} + \dfrac{1 + \frac{1}{1!} + \frac{1}{2!}}{2^2} + \dfrac{1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!}}{2^3} + \ldots$

If the above series can be expressed as $$S$$, find $$\big \lfloor 100S \rfloor$$.
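One way to see the twist: swapping the order of summation gives $S = \sum_k \frac{1}{k!} \sum_{n \ge k} 2^{-n} = 2 \sum_k \frac{(1/2)^k}{k!} = 2e^{1/2}$. A quick numerical check of that closed form (my own working, not part of the problem page):

```python
import math

# Partial sums of the series: term n is (sum_{k=0}^{n} 1/k!) / 2^n.
S = 0.0
partial = 0.0
for n in range(60):
    partial += 1.0 / math.factorial(n)  # running sum of 1/k! up to k = n
    S += partial / 2.0 ** n

print(S)                       # ~3.2974425..., numerically 2*sqrt(e)
print(2 * math.sqrt(math.e))   # the closed form from the swapped sum
```

The two printed values agree to machine precision, confirming the interchange of summation is valid here (all terms are positive).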
http://www.physicsforums.com/showthread.php?t=170964
# CalculusAB AP test. How do I do this?

by DyslexicHobo
Tags: calculusab, test

P: 248 I took the Calculus AB AP test, and there were a total of 6 open-ended problems. Most of them I knew exactly how to do, but there was one in particular that gave me some real problems. We haven't done much on function composition, so maybe that's why I couldn't do it (but I have the feeling that I'm just overlooking something). This table is given:

x   f(x)   f'(x)   g(x)   g'(x)
1    6      4       2      5
2    9      2       3      1
3   10     -4       4      2
4   -1      3       6      7

The functions f and g are differentiable for all real numbers, and g is strictly increasing. The table above gives the values of the functions and their first derivatives at selected values of x. The function h is given by h(x) = f(g(x)) - 6.

a) Explain why there must be a value r for 1 < r < 3 such that h(r) = -5
b) Explain why there must be a value c for 1 < c < 3 such that h'(c) = -5
c) Let w be the function given by w(x) = integral of f(t)dt from 1 to g(x). Find the value of w'(3).
d) If g^-1 is the inverse function of g, write an equation for the line tangent to the graph of y = g^-1(x) at x=2

So far, all I've established is that f and g are continuous, g'(x) will always be positive, and g(x) will always be less than g(x+1). Everything I know is just the obvious, I don't know where to go next! Can someone please give me the solution so I can quit worrying about this stupid problem? Thanks for any help!

P: 56 First, let's take a look at part a). When you have a composite function f(g(x)), whatever the x value is (or r value in this case), you plug that number in for g(x); whatever value that yields, you then plug into f(x). So for part a, the x values are all increasing from (2,4) on the r interval 1

P: 93 i agree with nate on most of the parts, but the explanation i used for b is as follows, and i think this is the explanation the graders are looking for. the average rate of change of h on (1,3) is -5.
h(3) = -7 and h(1) = 3, so (-7 - 3)/2 = -5. Therefore, by the mean value theorem, there must be some value in (1,3) whose derivative is -5, because there has to be a value where the derivative is equal to the average rate of change.

for c, nate forgot the chain rule. w'(x) is actually f(g(x)) * g'(x). You gotta do the chain rule when endpoints are functions. So w'(3) is f(g(3)) * g'(3)

for d, you must remember the equality that the derivative of the inverse at the y value is equal to 1/(derivative at x). so, it asks for the tangent line of the inverse of g when the function is at 2. Remember, with the inverse, the ordered pairs are switched. So in the original function, it's the y value that is 2. So the ordered pair you will use is (2,1). So, the derivative is 1/(g'(1)) = 1/5, and the tangent line is y - 1 = 1/5 (x - 2)

P: 248

## CalculusAB AP test. How do I do this?

Thank you for the explanation. I feel kind of dumb for not knowing how to do this on the test. I made it seem much harder than it really was (I wasn't even able to get part of it).

P: 1,572

Quote by nate808: part b) the derivative for a composite is defined as such: if h(x)=f(g(x)), then h'(x)=f'(g(x))g'(x). (BTW the 6 cancels b/c it is a constant) As stated previously, the g(x) values will be between 2-4, so now we must look at the f'(x) values on that interval. By plugging in values, you see that h'(2)=2 and h'(3)=-8. Because of continuity, it must =-5 on that interval.

You appear to be using the intermediate value theorem on h'. How do you know that h' is continuous?

P: 248

Quote by phoenixthoth: You appear to be using the intermediate value theorem on h'. How do you know that h' is continuous?

That's a good point! Can someone please prove that h'(r) is continuous? Also, if anyone is interested, I found this site: http://users.adelphia.net/~sismondo/AB073.html (link to this problem)

$$f\left( x\right) =\left\{ \begin{array}{cc} 0, & x=0 \\ x^{2}\sin \left( 1/x\right) , & x\neq 0 \end{array} \right.$$
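The table values make these arguments easy to sanity-check numerically; here is a small Python transcription of the table (the dictionaries are mine, the values are from the problem):

```python
# Values read off the table in the problem.
f  = {1: 6, 2: 9, 3: 10, 4: -1}
fp = {1: 4, 2: 2, 3: -4, 4: 3}   # f'(x)
g  = {1: 2, 2: 3, 3: 4, 4: 6}
gp = {1: 5, 2: 1, 3: 2, 4: 7}    # g'(x)

def h(x):
    return f[g[x]] - 6

# Parts a/b: h(1) = 3 and h(3) = -7, so the average rate of change
# on (1, 3) is (-7 - 3)/2 = -5, as used in the MVT argument.
print(h(1), h(3), (h(3) - h(1)) / (3 - 1))

# Part c: w'(3) = f(g(3)) * g'(3) by the chain rule.
print(f[g[3]] * gp[3])

# Part d: slope of the tangent to y = g^{-1}(x) at x = 2 is 1/g'(1).
print(1 / gp[1])
```

This only checks the arithmetic, of course; the existence claims in parts a and b still rest on the intermediate value theorem and the mean value theorem.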
https://www.openfoam.com/documentation/guides/latest/api/localReferenceTemperature_8C_source.html
The open source CFD toolbox

localReferenceTemperature.C

Go to the documentation of this file.

/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | www.openfoam.com
     \\/     M anipulation  |
-------------------------------------------------------------------------------
    Copyright (C) 2017-2020 OpenCFD Ltd.
-------------------------------------------------------------------------------
License
    This file is part of OpenFOAM.

    OpenFOAM is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
    ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
    for more details.

    You should have received a copy of the GNU General Public License
    along with OpenFOAM. If not, see <http://www.gnu.org/licenses/>.

\*---------------------------------------------------------------------------*/

// * * * * * * * * * * * * * * Static Data Members * * * * * * * * * * * * * //

namespace Foam
{
namespace heatTransferCoeffModels
{
    (
    );
}
}

// * * * * * * * * * * * * * * * * Constructors  * * * * * * * * * * * * * * //

Foam::heatTransferCoeffModels::localReferenceTemperature::
localReferenceTemperature
(
    const dictionary& dict,
    const fvMesh& mesh,
    const word& TName
)
:
{
}

// * * * * * * * * * * * * * * Member Functions  * * * * * * * * * * * * * * //

(
    const dictionary& dict
)
{
}

(
    volScalarField& htc,
)
{
    const auto& T = mesh_.lookupObject<volScalarField>(TName_);
    const volScalarField::Boundary& Tbf = T.boundaryField();
    const scalar eps = ROOTVSMALL;

    for (const label patchi : patchSet_)
    {
        const scalarField Tc(Tbf[patchi].patchInternalField());
        htcBf[patchi] = q[patchi]/(Tc - Tbf[patchi] + eps);
    }
}

// ************************************************************************* //

An abstract base class for heat transfer coefficient models. Heat transfer coefficient calculation that employs the patch internal field as the reference temperature.
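The core of the model is the final loop: for each boundary patch the heat transfer coefficient is the wall heat flux divided by the difference between the near-wall cell temperature and the wall temperature, with a tiny eps guarding against division by zero. A language-neutral sketch of that per-face formula in Python (function and variable names are mine, not OpenFOAM's):

```python
def local_reference_htc(q, T_cell, T_wall, eps=1e-300):
    """htc_i = q_i / (T_cell_i - T_wall_i + eps), per boundary face,
    mirroring htcBf[patchi] = q[patchi]/(Tc - Tbf[patchi] + eps)."""
    return [qi / (tc - tw + eps) for qi, tc, tw in zip(q, T_cell, T_wall)]

# Example: 100 W/m^2 of flux over a 10 K cell-to-wall difference
# gives a coefficient of 10 W/(m^2 K).
print(local_reference_htc([100.0], [310.0], [300.0]))  # [10.0]
```

In the C++ above, eps is ROOTVSMALL, an OpenFOAM constant playing the same role as the tiny default used here.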
http://rubyforge.org/pipermail/rubyinstaller-devel/2004-March/000056.html
# [Rubyinstaller-devel] Ruby Installer is Ready for Enhancements & Bug Fixes Shashank Date sdate at everestkc.net Sun Mar 28 23:54:43 EST 2004 ```> You need to install in "C:\installer\stable\Expat-1.95.5" just as the > hint says (at least, that's what's supposed to happen..). I.e., be sure > to put in the expat part with the dash and dots... Yes, that is what I meant when I mentioned about <C:\installer\stable> Infact, like Curt suggested, I had just accepted the defaults. But it still did not work. So finally after puts'ing some debug statements in builder.rb, package.rb, and commands.rb I figured out that it expected the install
https://www.gamedev.net/forums/topic/391548-how-do-i-get-the-current-line-number-and-cursor-position-in-a-richtextbox/
# [.net] How do I get the current line number and cursor position in a RichTextBox?

This topic is 4453 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

I've worked out how to select text, change it and edit it, which all bodes well for my IDE, but I now need to determine which line the user is currently editing as well as the cursor position, i.e. the exact character he/she is editing, for my intellisense. I also need to determine the character the line starts at. I'm looking for something like:

int LineNum = myRichTextBox.CurrentLine;
int StartChar = myRichTextBox.LineStart( LineNum );
int EndChar = StartChar + myRichTextBox.LineLength( LineNum );
String txt = myRichTextBox.Text;
txt = txt.substring( startchar, endchar );

So in total I need currentline, linestart and linelength. How would I go about these? It must be possible, otherwise I doubt any IDEs would even be possible. Or are there better ways of doing this I'm just not seeing?

Is this a good idea btw? I plan to tokenize the current line, and base the intellisense on this using a listbox created over the richtextbox. It works in my tests and displays fine, but how do they do it in VC# Express' IDE? That's what I want to emulate mostly.

##### Share on other sites

This should give you what you need.

http://www.codeproject.com/cs/miscctrl/RicherRichTextBox.asp

theTroll

##### Share on other sites

OK with that I can get the current line, and the current character position. I still need a way to determine at which character the line starts at, and which one it ends at.

I'm surprised that I need to use that site you showed me though; I thought this being .NET it would have all these features built in. I mean, how did they manage the VC# IDE without them?
##### Share on other sites

The standard RichTextBox gives you SelectionStart, SelectionLength, GetLineFromCharIndex and Lines, so your little example could be:

int LineNum = myRichTextBox.GetLineFromCharIndex(myRichTextBox.SelectionStart);
string txt = myRichTextBox.Lines[LineNum];

##### Share on other sites

That works, thanks. I need just one more thing now, and that is a way to determine which character the cursor is at on the current line rather than where it is globally. Possible?

##### Share on other sites

Hmm, that depends on exactly what you mean by that. The character index (SelectionStart) is the offset from the start of the content, so I guess you could loop through all the previous lines and add up the lengths, then subtract that from the global index which would give you the index on that line.

Alternatively you could use GetPointFromCharIndex, which gives you the X,Y of the selection, and calculate how many chars into the line that X position is ... don't know if you can do that reliably though.

Oh, and on the previous code, watch out for this:

Quote: The GetLineFromCharIndex method returns the physical line number where the indexed character is located within the control. For example, if a portion of the first logical line of text in the control wraps to the next line, the GetLineFromCharIndex method returns 1 if the character at the specified character index has wrapped to the second physical line.

... if you're using word-wrap, you're kind of buggered.

##### Share on other sites

Ok thanks, I think I get what you mean. Could you write a small example so I can be sure I'm on the right track?
:)

##### Share on other sites

I think this dull looping code should do the trick:

// As above
int ss = richEdit.SelectionStart;
int line = richEdit.GetLineFromCharIndex(ss);
// Find what index the start of that line is
int sofar = 0;
for (int i = 0; i < line; i++)
    sofar += richEdit.Lines[i].Length + 1; // +1 for the line break
int index_in_line = ss - sofar;
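The arithmetic in that last snippet is just "global index minus the lengths of all previous lines, plus their line breaks". The same idea is easy to test in isolation; here is a Python transcription (helper name is mine) assuming one character per line break, as with the "\n" a RichTextBox uses internally:

```python
def line_and_column(text, index):
    """Map a global character index to a (line, column) pair,
    counting the '\n' that terminates each previous line."""
    lines = text.split("\n")
    sofar = 0
    for lineno, line in enumerate(lines):
        if index <= sofar + len(line):
            return lineno, index - sofar
        sofar += len(line) + 1  # +1 for the line break itself
    return len(lines) - 1, len(lines[-1])

print(line_and_column("abc\ndef\nghi", 5))  # (1, 1): the 'e'
```

Forgetting the +1 for the line break is exactly the kind of off-by-N that makes the cursor column drift further right on every successive line.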
http://lorenzo-thinkingoutaloud.blogspot.com/2009/02/explaining-postmodernism.html
## Thursday, February 19, 2009

### Explaining Postmodernism

It is a profoundly enlightening experience to read a clearly written, knowledgeable book that takes some of the points you have been groping towards and puts them in a much more complete context. Stephen Hicks's Explaining Postmodernism: Skepticism and Socialism from Rousseau to Foucault is such an enlightening book. So enlightening that, having finished it, I proceeded to re-read it.

It is worth reading just for its excellent, thorough and very clear rendition of the history of Western philosophy since John Locke. It would make a fine teaching text for any survey course on such. Not merely because it is very clearly written – Hicks is happy to include tables and flow charts – but because it shows a solid grasp of the historical context of ideas. (Something philosophers – even great philosophers – are not always good at.) Chapter Four, The Climate of Collectivism, is, for example, a fine rendition of the intellectual history of the "Right" and "Left" forms of collectivism up to the collapse of Right collectivism with the defeat of Nazism and Fascism.

Hicks puts his central thesis at the beginning of the table of contents: The failure of epistemology made postmodernism possible, and the failure of socialism made postmodernism necessary (p.i).

The first chapter sets out What Postmodernism Is, relying, as Hicks does throughout the book, on frequent use of quotations from the cited thinkers. In one of his useful tables, he summarises postmodernism as anti-realism in metaphysics, social subjectivism in epistemology, a social construction view of human nature, collective egalitarianism in ethics, and socialism in politics and economics, occurring in the humanities and related professions in the late C20th (p.15).
The second chapter deals with the Counter-Enlightenment Attack on Reason, where he locates Immanuel Kant's epistemological subjectivism (the belief that we do not know reality, just our sense-perceptions thereof) as the key break from Enlightenment thought, from which a stream of thought – Hegel, Nietzsche, Heidegger – then follows. A central motive for Kant was to rescue religion (specifically Christianity) from the corrosive effects of Enlightenment scepticism. If we only know phenomena—and not reality as such—then God could be doing anything out there in the "real" world and the realm of faith is thereby safe. As Hicks points out in the last chapter (Chapter Six, Postmodern Strategies), given the failure of socialism (the subject of the preceding chapter, The Crisis of Socialism), folk on the Left had a need to safeguard a realm of faith, since brute reality was being so unhelpful. So Kant's move became their move. The realm of fact may be unhelpful, so we will just discount it—it's all just language games—in order to protect our realm of feeling and commitment (i.e. faith). This was helped along by the failure of epistemology to come up with a convincing answer to the problems of empiricism and rationalism (Chapter Three, The Twentieth Century Collapse of Reason). As Hicks notes, postmodernism is a philosophical doctrine of the Left (typically the very far Left). This is strange for a movement in philosophy, since such movements usually have adherents of a range of political views. Hicks notes the similar biographies of the key figures of postmodernism (Michel Foucault, Jean-François Lyotard, Jacques Derrida and Richard Rorty). All were born between 1926 and 1931 and had strong Left (typically far Left) credentials. So they were all coming of age as the 1950s showed the failures of the Soviet model, while international economic resurgence showed the vigour of liberal capitalism as a social system.
The facts were being inconvenient, so the Left – which had been very modernist (committed to universal values, science, economic development) – became increasingly postmodernist. They took the Kantian "out" of retreating into subjectivism. Of course, if Enlightenment/modernist epistemology had been fine and healthy, that would have been harder. But it was not, having collapsed into various dead-ends (e.g. Logical Positivism). The rescue of the Enlightenment project by an effective epistemology is, Hicks states (in his last paragraph [p.201]), the necessary task if we are really to get on top of postmodernism. I found the book both clear and enlightening. I was not entirely convinced by his analysis of motives, though. I am not querying Hicks's looking at the emotional basis of postmodernist beliefs: not least because he has the evidence to back it up. But his emotional analysis works quite well for older cohorts, less well regarding the emotional appeal for younger ones. There it strikes me that the effectiveness of such beliefs as status markers is worth considering. After all, if it is all ultimately about the strength of your feelings, the worthiness of your intentions, then that is not only an easy status marker, but it leads directly to the ad hominem style of rhetoric which, as he points out, is so much the postmodernist style (p.20). For your good attitudes display your (positive) status only if different attitudes display (negative) status. Hence the juxtaposition of cultural relativism with virulent moral absolutism: part of a wider pattern of (as critics frequently point out) blatant contradictions (p.184). The great strength of the book is its clear outline of the history of Western philosophy over the last three centuries. It is that clear setting out of the philosophical history, and putting it in wider context, which makes the critical analysis of postmodernism so effective.

1. Got it from Amazon for $30 but it is now $100 and has been as high as $997 recently!!!
It also is available from Hicks's site as a free PDF. Go figure. It is everything you say it is. In terms of my own experience I have to grant that Kant has a real point about the senses, but note that the Buddhist attempt to penetrate beyond appearances avoids the subjectivism and nihilism of PoMo. But it is social and economic theory that is most relevant here, and the book unmasks what I have to agree is the underlying agenda of PoMo. Interesting that 'unmasking' is a central activity of PoMo and that a hidden agenda lies at its core. Sooner or later your enemies will tell you exactly what they are doing by projecting what they are up to onto you. I am rereading Fernandez-Armesto's Truth to get some perspective and then I will reread Hicks. Discovering Fichte as the source of many forms of Germanic-origin institutionalized abuse was a special bonus! Best book tip of the decade!

2. I am happy to encourage people to have the Hicks experience :) Particularly such an engaged reader as yourself.

3. Yes, I'm engaged because my experience of the academic world - both here and in the US - is that the political and social views of faculty are mind-numbingly predictable. Yours are not. A mutually beneficial exchange of ideas is what an intellectual commons is all about. Instead we have an intellectual McDonald's with everyone singing the praises of the Big Marx. Corporatism more conformist than is required to work for a corporation.

4. Corporatism more conformist than is required to work for a corporation. ROFL. (Though I liked Instead we have an intellectual McDonald's with everyone singing the praises of the Big Marx too.) And I am glad I am not predictable (though perhaps you just do not know me well enough yet ...) I am conscious that the book on bigotry I am working on will be disliked by orthodox Catholics, traditional Muslims and Marxists of all varieties. May make it a touch difficult to place :)
https://www.gradesaver.com/textbooks/math/trigonometry/CLONE-68cac39a-c5ec-4c26-8565-a44738e90952/chapter-4-graphs-of-the-circular-functions-section-4-3-graphs-of-the-tangent-and-cotangent-functions-4-3-exercises-page-171/24
## Trigonometry (11th Edition) Clone RECALL: (1) The period of the function $y=\cot{(bx)}$ is $\frac{\pi}{b}$. (2) The consecutive vertical asymptotes of the function $y=\cot{x}$, whose period is $\pi$, are $x=0$ and $x=\pi$. Thus, with $b=2$, the period of the given function is $\frac{\pi}{2}$. Consecutive vertical asymptotes of the given function are $x=0$ and $x=\frac{\pi}{2}$. This means that one period of the given function lies in the interval $[0, \frac{\pi}{2}]$. Dividing this interval into four equal parts gives the key x-values: $\frac{\pi}{8}, \frac{\pi}{4}, \frac{3\pi}{8}$. To graph the given function, perform the following steps: (1) Create a table of values for the given function using the key x-values listed above. (Refer to the attached image table below.) (2) Graph the consecutive vertical asymptotes. (3) Plot each point from the table, then connect the points with a smooth curve, making sure that the curve is asymptotic to the lines in Step (2) above. Refer to the graph in the answer part above.
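As a quick numerical check (a sketch of my own, not part of the textbook solution), the table of values for $y=\cot{(2x)}$ at the key x-values can be generated in Python:

```python
import math

def cot(x):
    # cotangent; vertical asymptotes occur where sin(x) == 0
    return math.cos(x) / math.sin(x)

b = 2                      # the given function is y = cot(2x)
period = math.pi / b       # the period of cot(bx) is pi/b

# dividing one period (0, pi/2) into four equal parts gives the key x-values
key_xs = [period * k / 4 for k in (1, 2, 3)]   # pi/8, pi/4, 3*pi/8
for x in key_xs:
    print(f"x = {x:.4f}  ->  y = {cot(b * x):.1f}")
```

The three key points come out as $(\frac{\pi}{8}, 1)$, $(\frac{\pi}{4}, 0)$ and $(\frac{3\pi}{8}, -1)$, matching the cotangent's fall from positive through zero to negative over one period.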
https://math.stackexchange.com/questions/1792230/coordinate-free-notation-for-tensor-contraction
# Coordinate-free notation for tensor contraction? I am not sure if I can prevent this question from being too vague or with too large an overlap with other similar math.SE questions, but I will do my best... A standard linear operation in tensor calculus is tensor contraction, which can be conveniently expressed in a given coordinate basis (or using Penrose-Rindler's abstract index notation) by means of Einstein's summation convention. Since this is a "pointwise" (i.e. algebraic) operation, we can restrict our discussion to (constant) mixed tensors of contravariant rank $r$ and covariant rank $s$ $$T\in\otimes^r_s V\doteq(\otimes^r V)\otimes(\otimes^s V^*)=\underbrace{V\otimes\cdots\otimes V}_{r\text{ times}}\otimes\underbrace{V^*\otimes\cdots\otimes V^*}_{s\text{ times}}$$ on a (finite dimensional) vector space $V$ over $\mathbb{R}$ or $\mathbb{C}$ with dual $V^*$. If $\{e_1\ldots,e_n\}$ is a basis of $V$ with dual basis $\{\theta^1,\ldots,\theta^n\}$ on $V^*$, i.e. $$\theta^j(e_i)=\delta^j_i=\begin{cases} 1 & (i=j) \\ 0 & (i\neq j) \end{cases}\ ,$$ and we expand $T$ in the corresponding basis of $\otimes^r_s V$ as $$T=\sum^n_{\substack{i_1,\ldots,i_r,\\ j_1,\ldots,j_s=1}} T^{i_1\cdots i_r}_{j_1\cdots j_s}e_{i_1}\otimes\cdots\otimes e_{i_r}\otimes\theta^{j_1}\otimes\cdots\otimes\theta^{j_s}\ ,$$ the tensor contraction of the $k$-th contravariant index with the $l$-th covariant index in this basis yields a tensor $S$ of contravariant rank $r-1$ and covariant rank $s-1$ whose components in the corresponding basis of $\otimes^{r-1}_{s-1}V$ are given by $$S^{i_1\cdots i_{r-1}}_{j_1\cdots j_{s-1}}=\sum^n_{m=1}T^{i_1\cdots i_{k-1}mi_k\cdots i_{r-1}}_{j_1\cdots j_{l-1}mj_l\cdots j_{s-1}}\text{ or simply }T^{i_1\cdots i_{k-1}mi_k\cdots i_{r-1}}_{j_1\cdots j_{l-1}mj_l\cdots j_{s-1}}$$ by Einstein's summation convention. 
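To make the component formula concrete, here is a small numerical sketch (my addition, using NumPy, which is not mentioned in the question). It contracts the first contravariant index with the second covariant index of a rank-$(2,3)$ tensor, both via an explicit Einstein-summation string and as a partial trace:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# T has contravariant rank r=2 (axes 0,1) and covariant rank s=3 (axes 2,3,4)
T = rng.standard_normal((n,) * 5)

# contract the 1st contravariant index (axis 0) with the 2nd covariant
# index (axis 3): S^b_{ce} = sum_m T^{mb}_{cme}
S = np.einsum('mbcme->bce', T)

# the same contraction expressed as a partial trace over the two axes
S_trace = np.trace(T, axis1=0, axis2=3)
print(np.allclose(S, S_trace))  # prints True
```

The agreement of the two computations reflects the basis-independence discussed above: `np.trace` never needs to know which basis the components were taken in, only which pair of slots is being paired off.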
For example, if $r=2$, $s=3$, $k=1$ and $l=2$, the tensor contraction of $T^{ab}_{cde}$ at the aforementioned indices would be $S^b_{ce}=T^{mb}_{cme}$ if one employs Penrose-Rindler's abstract index notation instead. Since tensor contraction is a partial trace, it does not depend on a choice of basis. This can also be seen by the following, equivalent definition of tensor contraction (as done in this math.SE question): if $T$ is completely factorized, i.e. $T=X_1\otimes\cdots\otimes X_r\otimes\omega^1\otimes\cdots\otimes\omega^s$ with $X_i\in V$, $\omega^j\in V^*$, $i=1,\ldots,r$, $j=1,\ldots,s$, tensor contraction of the $k$-th contravariant index with the $l$-th covariant index yields $$S=\omega^l(X_k)X_1\otimes\cdots\otimes\widehat{X}_k\otimes\cdots\otimes X_r\otimes\omega^1\otimes\cdots\otimes\widehat{\omega^l}\otimes\cdots\otimes\omega^s\ ,$$ where the hats stand for omission. The above formula is then extended to general $T$ by linearity. In the particular case $r=s=k=l=1$, tensor contraction applied to $T$ is just the trace of the linear transformation $T:V\rightarrow V$, usually denoted by $\mathrm{Tr}\,T$. Question: However, I have not found in the literature so far a good, general and coordinate-free notation (i.e. without referring to components in a given basis) for tensor contraction besides appealing to Penrose-Rindler's abstract index notation, which, though computationally efficient (as physicists know well), aesthetically speaking is kind of a crutch. Any ideas? I would especially like to have references for such notation(s?), should they exist. (Remark: I am aware of this other closely related math.SE question, but the only answer given to it, based on the report by T.G. Kolda and B.W. Bader (Tensor Decompositions and Applications, Sandia Report 6702 (2007)), does not seem to cover my question) • There's also Penrose's diagrammatic notation. May 19 '16 at 22:55 • Ah yes, I've just checked it in my copy of "The Road to Reality"... 
People working with symmetric monoidal categories like to use this notation. It helps you visualize what is happening when multiple contractions take place, but it seems to get messy really quickly. Besides, it doesn't seem to have a particular notation for contraction involving non-factorized tensors (does it?), so it seems still not enough. May 20 '16 at 0:28 • @PedroLauridsenRibeiro Contraction for non-factorized tensors can be expressed just fine in the Penrose notation. For example $T^{abc}_{db}$ would be drawn as a box labelled $T$ with three wires going in at the bottom, two wires coming out from the top, and with the right wire on top looping round to connect to the middle wire going in at the bottom. May 20 '16 at 11:48 • @OscarCunningham I figured it would be something like that, but I couldn't find an example of this in "The Road to Reality" or the appendix to the first volume of Penrose-Rindler's "Spinors and Space-Time" (the latter seems to be a standard reference for diagrammatic notation). In any case, I still imagine this notation can get really messy when many of those lines start getting entangled with each other when performing multiple contractions involving both different tensor factors and within the same (non-factorized) tensor factor, for example. May 20 '16 at 17:56 I've seen in the literature the notation $C$ with some additional specifications for the contraction maps of all sorts, but the amount of decorations on the symbol $C$ varied depending on the context. See, e.g., A.Gray, Tubes, p.56, where these maps are used in the case of somewhat special tensors, and therefore the notation is simpler. In general, there is a whole family of uniquely defined maps $$C^{(r,s)}_{p,q} \colon \otimes^{r}_{s} V \to \otimes^{r-1}_{s-1} V$$ which are collectively called tensor contractions ($1 \le p \le r, 1 \le q \le s$). 
These maps are uniquely characterized by making the following diagrams commutative: $$\require{AMScd} \begin{CD} \times^{r}_{s} V @> {P^{(r,s)}_{p,q}} >> \times^{r-1}_{s-1} V\\ @V{\otimes^{r}_{s}}VV @VV{\otimes^{r-1}_{s-1}}V \\ \otimes^{r}_{s} V @>{C^{(r,s)}_{p,q}}>> \otimes^{r-1}_{s-1} V \end{CD}$$ Explanations are in order. Recall that the tensor products $\otimes^{r}_{s} V$ are equipped with the universal maps $$\otimes^{r}_{s} \colon \times^{r}_{s} V \to \otimes^{r}_{s} V$$ where $\times^{r}_{s} V := ( \times^r V) \times (\times^s V^*)$. Besides that, there is a canonical pairing $P$ between a vector space $V$ and its dual: $$P \colon V \times V^* \to \mathbb{R} \colon (v, \omega) \mapsto \omega(v)$$ Notice that the map $P$ is bilinear and can be extended to a family of multilinear maps $$P^{(r,s)}_{p,q} \colon \times^{r}_{s} V \to \times^{r-1}_{s-1} V$$ by the formula: $$P^{(r,s)}_{p,q} (v_1, \dots, v_p, \dots, v_r, \omega_1, \dots, \omega_q, \dots, \omega_s) = \omega_q (v_p) (v_1, \dots, \widehat{v_p}, \dots, v_r, \omega_1, \dots, \widehat{\omega_q}, \dots, \omega_s)$$ where a hat means omission. Since the maps $P^{(r,s)}_{p,q}$ are multilinear, the universal property of the maps $\otimes^{r}_{s}$ implies that there are uniquely defined maps $$\tilde{P}^{(r,s)}_{p,q} \colon \otimes^{r}_{s} V \to \times^{r-1}_{s-1} V$$ and then the maps $C^{(r,s)}_{p,q}$ are given by $$C^{(r,s)}_{p,q} := \otimes^{r-1}_{s-1} \circ \tilde{P}^{(r,s)}_{p,q}$$ • Thanks for the answer and the reference. Now in retrospect it seems more reasonable to replace $C$ by $\mathrm{Tr}$ in order not to mistake it for another tensor (e.g. the Weyl tensor, which is often denoted by the same letter, standing for "conformal") and to recall that we're dealing with a partial trace. 
It also seems to me that since the rank of the tensor is usually clear from the context, we can abuse notation a bit and write, say, $\mathrm{Tr}_{p,q}$ or even $\mathrm{Tr}^p_q$ to make clear we are contracting the $p$-th contravariant index with the $q$-th covariant index. May 29 '16 at 15:53 • One could even elaborate further on this: for multiple contractions, one could write it in a condensed form: $$\mathrm{Tr}^I_\phi=\mathrm{Tr}^{i_1}_{\phi(i_1)}\circ\cdots\circ\mathrm{Tr}^{i_{|I|}}_{\phi(i_{|I|})}\ ,$$ where $I\subset\{1,\ldots,r\}$ with $1\leq|I|=$ cardinality of $I\leq\min\{r,s\}$ and $\phi:I\rightarrow\{1,\ldots,s\}$ is any injective map. In any case, your answer is conceptually very neat and worthy of the bounty. Thanks again! May 30 '16 at 16:55 • Do we even need the superscript $(r,s)$? Couldn't we define $C_{p,q}$ on all tensors, just by linearity and $C_{p,q}(X)=0$ if $X$ has $r<p$ or $s<q$? Dec 10 '18 at 3:05 • @mr_e_man My goal was just to convey the idea. You may elaborate it to your liking. Indeed, there are many approaches to this task. Moreover, everyone is encouraged either to post their answer, or make an edit to existing ones, if appropriate. Dec 10 '18 at 8:38
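The family of maps $C^{(r,s)}_{p,q}$ from the answer, and the composed contractions $\mathrm{Tr}^I_\phi$ from the comments, can be sketched numerically. This is an illustrative sketch of my own (assuming NumPy; the helper name `contract` is hypothetical, not from any reference above):

```python
import numpy as np

def contract(T, r, p, q):
    """C^{(r,s)}_{p,q}: trace the p-th contravariant axis (1-based) of T
    against its q-th covariant axis; s is implicit in T's total rank."""
    return np.trace(T, axis1=p - 1, axis2=r + q - 1)

rng = np.random.default_rng(1)
n = 3
T = rng.standard_normal((n, n, n))  # r = 1, s = 2: components T^a_{bc}

# C^{(1,2)}_{1,1}: S_c = T^m_{mc}
S = contract(T, r=1, p=1, q=1)
assert np.allclose(S, np.einsum('mmc->c', T))

# for r = s = p = q = 1 the contraction is the ordinary trace of T : V -> V
M = rng.standard_normal((n, n))
assert np.isclose(contract(M, r=1, p=1, q=1), np.trace(M))
```

Composing such calls (adjusting `r` and the axis bookkeeping after each step) gives the multiple contraction $\mathrm{Tr}^I_\phi$ suggested in the comments.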
https://physics.stackexchange.com/questions/441329/what-forces-the-maximum-speed-of-a-cyclist-on-a-steep-climb
# What limits the maximum speed of a cyclist on a steep climb? I know that on a flat road drag is the main force; I have to provide as much force as the drag (and the friction in the drive train) produces. I can also see that the faster I ride, the harder it gets, and at speeds over 35 km/h, if I bend down to the handlebars, I can ride faster. But what about steep climbs? On a 15% climb I can barely ride at 5-6 km/h, and the drag doesn't seem to be noticeable (I don't seem to go any faster when there's a moderate tailwind). I need to produce more force to combat gravity, but that does not depend on my speed - so why can't I ride the same climb at, e.g., 9 km/h? On a steep climb we can ignore drag and friction. If you wish to sustain a higher velocity, you need to invest the same amount of energy in a shorter time, i.e. you need more power. This energy goes into your gravitational potential energy, given by $$E_p = mgh,$$ where $$m$$ is your mass (together with your bicycle), $$g$$ is gravitational acceleration and $$h$$ is height. If your velocity is $$v$$ and the angle of inclination you climb is $$\alpha$$, then the vertical component of your velocity is $$v_z=\frac{d h}{d t}=v \sin\alpha.$$ In order to sustain this, the required power is $$P = \frac{d E_p}{d t} = mgv_z = mgv\sin\alpha.$$ You can see that the required power is linearly proportional to velocity. If the maximum power you can sustain is $$P$$, the top speed you can sustain is given by: $$v_{max}=\frac P{mg\sin\alpha}$$
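Plugging in some numbers (my own illustrative values, not from the question), a short Python sketch evaluates the formula for a 15% grade:

```python
import math

# hedged example values (not from the question): a hypothetical rider
m = 85.0      # kg, rider plus bicycle
P = 250.0     # W, maximum sustainable power
g = 9.81      # m/s^2, gravitational acceleration
grade = 0.15  # a 15% climb: rise over run

alpha = math.atan(grade)                  # inclination angle
v_max = P / (m * g * math.sin(alpha))     # top sustainable speed, drag ignored
print(f"v_max = {v_max:.2f} m/s = {v_max * 3.6:.1f} km/h")
```

With these assumed values the formula gives roughly 7 km/h, the same order of magnitude as the 5-6 km/h the asker reports, consistent with power, not force, being the binding constraint.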
http://www.map.mpim-bonn.mpg.de/index.php?title=B-Bordism&diff=prev&oldid=14921
# B-Bordism

## 1 Introduction

On this page we recall the definition of the bordism groups of closed smooth manifolds, with extra topological structure: orientation, spin-structure, weakly almost complex structure etc. The situation for piecewise linear and topological manifolds is similar and we discuss it briefly below. The formulation of the general set-up for B-Bordism dates back to [Lashof1963]. There are detailed treatments in [Stong1968, Chapter II] and [Bröcker&tom Dieck1970] as well as summaries in [Teichner1992, Part 1: 1], [Kreck1999, Section 1], [Kreck&Lück2005, 18.10]. See also the Wikipedia bordism page. We specify extra topological structure universally by means of a fibration $\gamma : B \to BO$, where $BO$ denotes the classifying space of the stable orthogonal group and $B$ is homotopy equivalent to a CW complex of finite type. Abusing notation, one writes $B$ for the fibration $\gamma$.
Speaking somewhat imprecisely (precise details are below), a B-manifold is a compact manifold $M$ together with a lift to $B$ of a classifying map for the stable normal bundle of $M$:
$$\xymatrix{ & B \ar[d]^{\gamma} \\ W \ar[r]^{\nu_W} \ar[ur]^{\bar \nu} & BO.}$$
The n-dimensional B-bordism group is defined to be the set of closed B-manifolds modulo the relation of bordism via compact B-manifolds. Addition is given by disjoint union and in fact for each $n \geq 0$ there is a group $\Omega_n^B := \{ (M, \bar \nu) \}/\equiv$. Alternative notations are $\Omega_n(B)$ and also $\Omega_n^G$ when $(B \to BO) = (BG \to BO)$ for $G \to O$ a stable representation of a topological group $G$. Details of the definition and some important theorems for computing $\Omega_n^B$ follow.

### Examples

We list some fundamental examples with common notation and also indicate the fibration $B$.

* Unoriented bordism: $\mathcal{N}_*$; $B = (BO = BO)$.
* Oriented bordism: $\Omega_*, \Omega_*^{SO}$; $B = (BSO \to BO)$.
* Spin bordism: $\Omega_*^{Spin}$; $B = (BSpin \to BO)$.
* Spin$^c$ bordism: $\Omega_*^{Spin^{c}}$; $B = (BSpin^{c} \to BO)$.
* String bordism: $\Omega_*^{String}, \Omega_*^{BO\langle 8 \rangle}$; $B = (BO\langle 8 \rangle \to BO)$.
* Complex bordism: $\Omega_*^U$; $B = (BU \to BO)$.
* Special unitary bordism: $\Omega_*^{SU}$; $B = (BSU \to BO)$.
* Framed bordism: $\Omega_*^{fr}$; $B = (PBO \to BO)$, the path space fibration.

## 2 B-structures and bordisms

In this section we give a compressed account of parts of [Stong1968, Chapter II]. Let $G_{r, m}$ denote the Grassmann manifold of unoriented r-planes in $\mathbb{R}^m$, let $BO(r) = \lim_{m \to \infty} G_{r, m}$ be the infinite Grassmannian and fix a fibration $\gamma_r : B_r \to BO(r)$.
**Definition.** Let $\xi: E \to X$ be a rank $r$ vector bundle classified by $\xi : X \to BO(r)$. A $B_r$-structure on $\xi$ is a vertical homotopy class of maps $\bar \xi : X \to B_r$ such that $\gamma_r \circ \bar \xi = \xi$.

Note that if $\xi_0$ and $\xi_1$ are isomorphic vector bundles over $X$ then the sets of $B_r$-structures on each are in bijective equivalence. However $B_r$-structures are defined on specific bundles, not isomorphism classes of bundles: a specific isomorphism, up to appropriate equivalence, is required to give a bijection between the sets of $B_r$-structures. Happily this is the case for the normal bundle of an embedding, as we now explain. Let $M$ be a compact manifold and let $i_0 : M \to \mathbb{R}^{n+r}$ be an embedding. Equipping $\mathbb{R}^{n+r}$ with the standard metric, the normal bundle of $i_0$ is a rank $r$ vector bundle over $M$ classified by its normal Gauss map $\nu(i_0) : M \to G_{r, n+r} \subset BO(r)$. If $i_1$ is another such embedding and $r \gg n$, then $i_1$ is regularly homotopic to $i_0$ and all regular homotopies are regularly homotopic relative to their endpoints (see [Hirsch1959]). A regular homotopy $H$ defines an isomorphism $\alpha_H : \nu(i_0) \cong \nu(i_1)$ and a regular homotopy of regular homotopies gives a homotopy between these isomorphisms. Taking care one proves the following

**Lemma [Stong1968, p. 15].** For $r$ sufficiently large (depending only on $n$), there is a 1-1 correspondence between the sets of $B_r$-structures on the normal bundles of any two embeddings $i_0, i_1 : M \to \mathbb{R}^{n+r}$.

This lemma is one motivation for the useful but subtle notion of a fibred stable vector bundle.
**Definition.** A fibred stable vector bundle $B = (B_r, \gamma_r, g_r)$ consists of the following data: a sequence of fibrations $\gamma_r : B_r \to BO(r)$ together with a sequence of maps $g_r : B_r \to B_{r+1}$ fitting into the following commutative diagram
$$\xymatrix{ B_r \ar[r]^{g_r} \ar[d]^{\gamma_r} & B_{r+1} \ar[d]^{\gamma_{r+1}} \\ BO(r) \ar[r]^{j_r} & BO(r+1) }$$
where $j_r$ is the standard inclusion. We let $B = \lim_{r \to \infty}(B_r)$.

**Remark.** A fibred stable vector bundle $B$ gives rise to a stable vector bundle as defined in [Kreck&Lück2005, 18.10]. One defines $E_r \to B_r$ to be the pullback bundle $\gamma_r^*(EO(r))$ where $EO(r)$ is the universal $r$-plane bundle over $BO(r)$. The diagram above gives rise to bundle maps $\bar g_r : E_r \oplus \underline{\mathbb{R}} \to E_{r+1}$ covering the maps $g_r$, where $\underline{\mathbb{R}}$ denotes the trivial rank 1 bundle over $B_r$.

Now a $B_r$-structure on the normal bundle of an embedding $i: M \to \mathbb{R}^{n+r}$ defines a unique $B_{r+1}$-structure on the composition of $i$ with the standard inclusion $\mathbb{R}^{n+r} \to \mathbb{R}^{n+r+1}$. Hence we can make the following

**Definition [Stong1968, p. 15].** Let $B$ be a fibred stable vector bundle. A B-structure on $M$ is an equivalence class of $B_r$-structures on $M$, where two such structures are equivalent if they become equivalent for $r$ sufficiently large. A B-manifold is a pair $(M, \bar \nu)$ where $M$ is a compact manifold and $\bar \nu$ is a B-structure on $M$.

If $W$ is a compact manifold with boundary $\partial W$ then, by choosing the inward-pointing normal vector along $\partial W$, a B-structure on $W$ restricts to a B-structure on $\partial W$. In particular, if $(M, \bar \nu_M)$ is a closed B-manifold then $W = M \times [0, 1]$ has a canonical B-structure $\bar \nu_{M \times [0, 1]}$ which restricts to $(M, \bar \nu_M)$ on $M \times \{ 0 \}$.
The restriction of this B-structure to $M \times \{ 1 \}$ is denoted $-\bar \nu$: by construction $(M \sqcup M, \bar \nu \sqcup - \bar \nu)$ is the boundary of $(M \times [0, 1], \bar \nu_{M \times [0, 1]})$.

**Definition.** Closed B-manifolds $(M_0, \bar \nu_0)$ and $(M_1, \bar \nu_1)$ are B-bordant if there is a compact B-manifold $(W, \bar \nu)$ such that $\partial(W, \bar \nu) = (M_0 \sqcup M_1, \bar \nu_0 \sqcup -\bar \nu_1)$. We write $[M, \bar \nu]$ for the bordism class of $(M, \bar \nu)$.

**Proposition [Stong1968, p. 17].** The set of B-bordism classes of closed n-manifolds with B-structure, $\Omega_n^B := \{ [M, \bar \nu ] \}$, forms an abelian group under the operation of disjoint union with inverse $-[M,\bar \nu] = [M, -\bar \nu]$.

## 3 Singular bordism

B-bordism gives rise to a generalised homology theory. If $X$ is a space then the n-cycles of this homology theory are pairs $((M, \bar \nu),~ f: M \to X)$ where $(M, \bar \nu)$ is a closed n-dimensional B-manifold and $f$ is any continuous map. Two cycles $((M_0, \bar \nu_0), f_0)$ and $((M_1, \bar \nu_1), f_1)$ are homologous if there is a pair $((W, \bar \nu),~ g : W \to X)$ where $(W, \bar \nu)$ is a B-bordism from $(M_0, \bar \nu_0)$ to $(M_1, \bar \nu_1)$ and $g : W \to X$ is a continuous map extending $f_0 \sqcup f_1$. Writing $[(M, \bar \nu), f]$ for the equivalence class of $((M, \bar \nu), f)$ we obtain an abelian group $\Omega_n^B(X) := \{ [(M, \bar \nu), f] \}$ with group operation disjoint union and inverse $-[(M, \bar \nu), f] = [(M, - \bar \nu), f]$.

**Proposition.** The mapping $X \to \Omega_n^B(X)$ defines a generalised homology theory with coefficients $\Omega_n^B(\text{pt}) = \Omega_n^B$.

Given a fibred stable vector bundle $B = (B_r, \gamma_r, g_r)$ we can form the fibred stable vector bundle $B \times X := (B_r \times X, \gamma_r \times X, g_r \times \mathrm{id}_X)$. The following simple lemma is clear but often useful.
**Lemma.** For any space $X$ there is an isomorphism $\Omega_n^B(X) \cong \Omega_n^{B \times X}$.

## 4 The orientation homomorphism

We fix a local orientation at the base-point of $BO$. It then follows that every closed B-manifold $(M, \bar \nu)$ is given a local orientation. This amounts to a choice of fundamental class of $M$, which is a generator $[M] \in H_n(M; \underline{\mathbb{Z}})$ where $\underline{\mathbb{Z}}$ denotes the local coefficient system defined by the orientation character of $M$. Given a closed B-manifold $(M, \bar \nu)$ we can use $\bar \nu$ to push the fundamental class $[M]$ to $\bar \nu_*[M] \in H_n(B; \underline{\mathbb{Z}})$. Now the local coefficient system is defined by the orientation character of the stable bundle $B$. It is easy to check that $\bar \nu_*[M]$ depends only on the B-bordism class of $(M, \bar \nu)$ and is additive with respect to the operations $+/-$ on $\Omega_n^B$.

**Definition.** Let $B$ be a fibred stable vector bundle. The orientation homomorphism is defined as follows: $$\rho : \Omega_n^B \to H_n(B; \underline{\mathbb{Z}}), ~~~[M, \bar \nu] \mapsto \bar \nu_*[M].$$

For the singular bordism groups $\Omega_n^B(X)$ we have no bundle over $X$, so in general there is only a $\mathbb{Z}/2$-valued orientation homomorphism. However, if the first Stiefel-Whitney class of $B$ vanishes, $w_1(B) = 0$, then all B-manifolds are oriented in the usual sense and the orientation homomorphism can be lifted to $\mathbb{Z}$.

**Definition.** Let $B$ be a fibred stable vector bundle. The orientation homomorphism in singular bordism is defined as follows: $$\rho : \Omega_n^B(X) \to H_n(X; \mathbb{Z}/2), ~~~ [(M, \bar \nu), f] \mapsto f_*[M].$$ If $w_1(B) = 0$ then for all closed B-manifolds $[M] \in H_n(M; \mathbb{Z})$ and we can replace the $\mathbb{Z}/2$-coefficients with $\mathbb{Z}$-coefficients above.

## 5 The Pontrjagin-Thom isomorphism

If $E$ is a vector bundle, let $T(E)$ denote its Thom space.
Recall that that a fibred stable vector bundle B = (B_r, \gamma_r, g_r) defines a stable vector bundle (E_r, \gamma_r, \bar g_r) where E_r = \gamma_r^*(EO(r)). This stable vector bundle defines a Thom [[Wikipedia:Spectrum_(homotopy_theory)|spectrum]] which we denote MB. The r-th space of MB is T(E_r). By definition a B-manifold, (M, \bar \nu), is an equivalence class of B_r-structures on \nu(i), the normal bundle of an embedding i : M \to \Rr^{n+r}. Hence (M, \bar \nu) gives rise to the collapse map c(M, \bar \nu) : S^{n+r} \to T(E_r) where we identify S^{n+r} with the [[Wikipedia:One-point_compactification|one-point compatificiation]] of \Rr^{n+r}, we map via \bar \nu_r on a tubular neighbourhood of i(M) \subset \Rr^{n+r} and we map all other points to the base-point of T(E_r). As r increases these maps are compatibly related by suspension and the structure maps of the spectrum MB. Hence we obtain a homotopy class [c(M, \bar \nu)] =: P((M, \bar \nu)) \in \text{lim}_{r \to \infty}(\pi_{n+r}(T(E_r)) = \pi_n(MB). The celebrated theorem of Pontrjagin and Thom states in part that P((M, \bar \nu)) depends only on the bordism class of (M, \bar \nu). {{beginthm|Theorem}} \label{thm:PT-iso} There is an isomorphism of abelian groups P : \Omega_n^B \cong \pi_n^S(MB), ~~~[M, \bar \nu] \longmapsto P([M, \bar \nu]). {{endthm}} For the proof see {{cite|Bröcker&tom Dieck1970|Satz 3.1 and Satz 4.9}}. For example, if B = PBO is the path fibration over BO, then MB is homotopic to the sphere spectrum S and \pi_n(S) = \pi_n^S is the [[Wikipedia:Stable_homotopy_groups_of_spheres|n-th stable homotopy group]]. On the other hand, in this case \Omega_n^B = \Omega_n^{fr} is the framed bordism group and as a special case of Theorem \ref{thm:PT-iso} we have {{beginthm|Theorem}} There is an isomorphism P : \Omega_n^{fr} \cong \pi_n^S. {{endthm}} The Pontrjagin-Thom isomorphism generalises to singular bordism. 
{{beginthm|Theorem}} For any space X there is an isomorphism of abelian groups P : \Omega_n^B(X) \cong \pi_n^S(MB \wedge X_+) where MB \wedge X_+ denotes the smash produce of the specturm MB and the space X with a disjoint basepoint added. {{endthm}} == Spectral sequences == ; For any generalised homology theory h_* there is a spectral sequence, called the [[Wikipedia:Atiyah-Hirzebruch_spectral_sequence|Atiyah-Hirzebruch spectral sequence]] (AHSS) which can be used to compute h_*(X). The E_2 term of the AHSS is H_p(X; h_q(\text{pt})) and one writes \bigoplus_{p+q = n} H_p(X; h_q(\text{pt})) \Longrightarrow h_{n}(X). The Pontrjagin-Thom isomorphisms above therefore give rise to the following theorems. For the first we recall that stable homotopy defines a generalised homology theory, and we use the Thom isomorphism with local coefficients: H_*(MB;A)\cong H_*(B;A_\omega). {{beginthm|Theorem}} Let B be a fibred stable vector bundle. There is a spectral sequence \bigoplus_{p+q = n} H_p(B;\underline{\pi_q^S}) \Longrightarrow \Omega_{n}^B. {{endthm}} {{beginthm|Theorem}} Let B be a fibred stable vector bundle and X a space. There is a spectral sequence \bigoplus_{p+q = n} H_p(X; \Omega_q^B) \Longrightarrow \Omega_n^B(X). {{endthm}} Next recall [[Wikipedia:Stable_homotopy_groups_of_spheres#Finiteness_and_torsion|Serre's theorem]] {{cite|Serre1951}} that \pi_i^S \otimes \Qq vanishes unless i=0 in which case \pi_0^S \otimes \Qq \cong \Qq. From the above spectral sequences of Theorems \ref{SS1} and \ref{SS2} we deduce the following {{beginthm|Theorem|Cf. {{cite|Kreck&Lück2005|Thm 2.1}}}} If w_1(B) = 0 then the orientation homomorphism induces an isomorphism \rho \otimes \id_{\Qq} : \Omega_n^B \otimes \Qq \cong H_n(B; \Qq). 
Moreover for any space X, \Omega_n^B(X) \otimes \Qq \cong \bigoplus_{p+q = n} H_p(X; H_q(B; \Qq)) and if B is connected, the rationalised orientation homomorphism \rho \otimes \id_{\Qq} : \Omega_n^B(X) \otimes \Qq \to H_n(X; \Qq) may be identified with the projection \bigoplus_{p+q = n} H_p(X; H_q(B; \Qq)) \to H_n(B; H_0(B; \Qq)) = H_n(B; \Qq). {{endthm}} == Piecewise linear and topological bordism == ; Let BPL and BTOP denote respectively the classifying spaces for stable piecewise linear homeomorphisms of Euclidean space and origin-preserving homeomorphisms of Euclidean space. Note that while there are honest groups TOP(n) = \text{Homeo}(\Rr^n, *) and TOP = \text{lim}_{n \to\infty} TOP(n), the piecewise linear case requires more care. If CAT = PL or TOP, and \gamma : B \to BCAT is a fibration, and M is a compact CAT manifold then just as above, we can define an B-structure on M to be an equivalence class of lifts of of the classifying map of the stable normal bundle of M: \xymatrix{ & B \ar[d]^{\gamma} \ M \ar[r]^{\nu_M} \ar[ur]^{\bar \nu} & BCAT.} Note that CAT manifolds have stable normal CAT bundles classified by \nu_M \to BCAT. Just as before we obtain bordism groups \Omega_n^B of closed n-dimensional CAT-manifolds with B structure \Omega_n^B : = \{ [M, \bar \nu ]\}. The fibration B again defines a Thom spectrum MB and one asks if there is a Pontrjagin-Thom isomorphism. The proof of the Pontrjagin-Thom theorem relies on transversality for manifolds and while this is comparatively easy in the PL-category, it is was a major breakthrough to achieve this for topological manifolds: achieved in {{cite|Kirby&Siebenmann1977}} for dimensions other than 4 and then in {{cite|Freedman&Quinn1990}} in dimension 4. Thus one has {{beginthm|Theorem}} There is an isomorphism \Omega_n^B \cong \pi_n^S(MB). 
Consider a fibration $\gamma : B \to BO$ where $BO$ denotes the classifying space of the stable orthogonal group and $B$ is homotopy equivalent to a CW complex of finite type. Abusing notation, one writes $B$ for the fibration $\gamma$. Speaking somewhat imprecisely (precise details are below), a $B$-manifold is a compact manifold $M$ together with a lift to $B$ of a classifying map for the stable normal bundle of $M$: $\displaystyle \xymatrix{ & B \ar[d]^{\gamma} \\ M \ar[r]^{\nu_M} \ar[ur]^{\bar \nu} & BO.}$ The $n$-dimensional $B$-bordism group is defined to be the set of closed $B$-manifolds modulo the relation of bordism via compact $B$-manifolds.
Addition is given by disjoint union and in fact for each $n \geq 0$ there is a group $\displaystyle \Omega_n^B := \{ (M, \bar \nu) \}/\equiv.$ Alternative notations are $\Omega_n(B)$ and also $\Omega_n^G$ when $(B \to BO) = (BG \to BO)$ for $G \to O$ a stable representation of a topological group $G$. Details of the definition and some important theorems for computing $\Omega_n^B$ follow.

### 1.1 Examples

We list some fundamental examples with common notation and also indicate the fibration $B$.

• Unoriented bordism: $\mathcal{N}_*$; $B = (BO = BO)$.
• Oriented bordism: $\Omega_*$, $\Omega_*^{SO}$; $B = (BSO \to BO)$.
• Spin bordism: $\Omega_*^{Spin}$; $B = (BSpin \to BO)$.
• Spin$^c$ bordism: $\Omega_*^{Spin^{c}}$; $B = (BSpin^{c} \to BO)$.
• String bordism: $\Omega_*^{String}, \Omega_*^{BO\langle 8 \rangle}$; $B = (BO\langle 8 \rangle \to BO)$.
• Complex bordism: $\Omega_*^U$; $B = (BU \to BO)$.
• Special unitary bordism: $\Omega_*^{SU}$; $B = (BSU \to BO)$.
• Framed bordism: $\Omega_*^{fr}$; $B = (PBO \to BO)$, the path space fibration.

## 2 B-structures and bordisms

In this section we give a compressed account of parts of [Stong1968, Chapter II]. Let $G_{r, m}$ denote the Grassmann manifold of unoriented $r$-planes in $\Rr^m$, let $BO(r) = \text{lim}_{m \to \infty} G_{r, m}$ be the infinite Grassmannian and fix a fibration $\gamma_r : B_r \to BO(r)$.

Definition 2.1.
Let $\xi: E \to X$ be a rank $r$ vector bundle classified by $\xi : X \to BO(r)$. A $B_r$-structure on $\xi$ is a vertical homotopy class of maps $\bar \xi : X \to B_r$ such that $\gamma_r \circ \bar \xi = \xi$. Note that if $\xi_0$ and $\xi_1$ are isomorphic vector bundles over $X$ then the sets of $B_r$-structures on each are in bijective correspondence. However $B_r$-structures are defined on specific bundles, not isomorphism classes of bundles: a specific isomorphism, up to appropriate equivalence, is required to give a bijection between the sets of $B_r$-structures. Happily this is the case for the normal bundle of an embedding, as we now explain. Let $M$ be a compact manifold and let $i_0 : M \to \Rr^{n+r}$ be an embedding. Equipping $\Rr^{n+r}$ with the standard metric, the normal bundle of $i_0$ is a rank $r$ vector bundle over $M$ classified by its normal Gauss map $\nu(i_0) : M \to G_{r, n+r} \subset BO(r)$. If $i_1$ is another such embedding and $r \gg n$, then $i_1$ is regularly homotopic to $i_0$ and all regular homotopies are regularly homotopic relative to their endpoints (see [Hirsch1959]). A regular homotopy $H$ defines an isomorphism $\alpha_H : \nu(i_0) \cong \nu(i_1)$ and a regular homotopy of regular homotopies gives a homotopy between these isomorphisms. Taking care, one proves the following

Lemma 2.2 [Stong1968, p 15]. For $r$ sufficiently large (depending only on $n$) there is a 1-1 correspondence between the sets of $B_r$-structures on the normal bundles of any two embeddings $i_0, i_1 : M \to \Rr^{n+r}$.

This lemma is one motivation for the useful but subtle notion of a fibred stable vector bundle.

Definition 2.3.
A fibred stable vector bundle $B = (B_r, \gamma_r, g_r)$ consists of the following data: a sequence of fibrations $\gamma_r : B_r \to BO(r)$ together with a sequence of maps $g_r : B_r \to B_{r+1}$ fitting into the following commutative diagram $\displaystyle \xymatrix{ B_r \ar[r]^{g_r} \ar[d]^{\gamma_r} & B_{r+1} \ar[d]^{\gamma_{r+1}} \\ BO(r) \ar[r]^{j_r} & BO(r+1) }$ where $j_r$ is the standard inclusion. We let $B = \text{lim}_{r \to \infty}(B_r)$.

Remark 2.4. A fibred stable vector bundle $B$ gives rise to a stable vector bundle as defined in [Kreck&Lück2005, 18.10]. One defines $E_r \to B_r$ to be the pullback bundle $\gamma_r^*(EO(r))$ where $EO(r)$ is the universal $r$-plane bundle over $BO(r)$. The diagram above gives rise to bundle maps $\bar g_r : E_r \oplus \underline{\Rr} \to E_{r+1}$ covering the maps $g_r$, where $\underline{\Rr}$ denotes the trivial rank 1 bundle over $B_r$. Now a $B_r$-structure on the normal bundle of an embedding $i: M \to \Rr^{n+r}$ defines a unique $B_{r+1}$-structure on the composition of $i$ with the standard inclusion $\Rr^{n+r} \to \Rr^{n+r+1}$. Hence we can make the following

Definition 2.5 [Stong1968, p 15]. Let $B$ be a fibred stable vector bundle. A $B$-structure on $M$ is an equivalence class of $B_r$-structures on $M$, where two such structures are equivalent if they become equivalent for $r$ sufficiently large. A $B$-manifold is a pair $(M, \bar \nu)$ where $M$ is a compact manifold and $\bar \nu$ is a $B$-structure on $M$.
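To make Definition 2.5 concrete, the classical normal structures arise as $B$-structures for the fibrations listed in the examples above; the identifications below are standard but stated here informally.

```latex
% B-structures for some classical choices of fibration B_r -> BO(r):
B_r = BSO(r) \to BO(r)
  &\;:\;& \text{a } B\text{-structure on } M \text{ is an orientation of } \nu_M, \\
B_r = BSpin(r) \to BO(r)
  &\;:\;& \text{a spin structure on } \nu_M, \\
B_r = PBO(r) \simeq * \to BO(r)
  &\;:\;& \text{a framing, i.e. a stable trivialisation of } \nu_M.
```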
If $W$ is a compact manifold with boundary $\partial W$ then by choosing the inward-pointing normal vector along $\partial W$, a $B$-structure on $W$ restricts to a $B$-structure on $\partial W$. In particular, if $(M, \bar \nu_M)$ is a closed $B$-manifold then $W = M \times [0, 1]$ has a canonical $B$-structure $\bar \nu_{M \times [0, 1]}$ which restricts to $(M, \bar \nu_M)$ on $M \times \{ 0 \}$. The restriction of this $B$-structure to $M \times \{ 1 \}$ is denoted $-\bar \nu$: by construction $(M \sqcup M, \bar \nu \sqcup - \bar \nu)$ is the boundary of $(M \times [0, 1], \bar \nu_{M \times [0, 1]})$.

Definition 2.6. Closed $B$-manifolds $(M_0, \bar \nu_0)$ and $(M_1, \bar \nu_1)$ are $B$-bordant if there is a compact $B$-manifold $(W, \bar \nu)$ such that $\partial(W, \bar \nu) = (M_0 \sqcup M_1, \bar \nu_0 \sqcup -\bar \nu_1)$. We write $[M, \bar \nu]$ for the bordism class of $(M, \bar \nu)$.

Proposition 2.7 [Stong1968, p 17]. The set of $B$-bordism classes of closed $n$-manifolds with $B$-structure, $\displaystyle \Omega_n^B := \{ [M, \bar \nu ] \},$ forms an abelian group under the operation of disjoint union, with inverse $-[M,\bar \nu] = [M, -\bar \nu]$.

## 3 Singular bordism

$B$-bordism gives rise to a generalised homology theory. If $X$ is a space then the $n$-cycles of this homology theory are pairs $\displaystyle ((M, \bar \nu),~ f: M \to X)$ where $(M, \bar \nu)$ is a closed $n$-dimensional $B$-manifold and $f$ is any continuous map.
Two cycles $((M_0, \bar \nu_0), f_0)$ and $((M_1, \bar \nu_1), f_1)$ are homologous if there is a pair $\displaystyle ((W, \bar \nu),~ g : W \to X)$ where $(W, \bar \nu)$ is a $B$-bordism from $(M_0, \bar \nu_0)$ to $(M_1, \bar \nu_1)$ and $g : W \to X$ is a continuous map extending $f_0 \sqcup f_1$. Writing $[(M, \bar \nu), f]$ for the equivalence class of $((M, \bar \nu), f)$ we obtain an abelian group $\displaystyle \Omega_n^B(X) := \{ [(M, \bar \nu), f] \}$ with group operation disjoint union and inverse $-[(M, \bar \nu), f] = [(M, - \bar \nu), f]$.

Proposition 3.1. The mapping $X \to \Omega_n^B(X)$ defines a generalised homology theory with coefficients $\Omega_n^B(\text{pt}) = \Omega_n^B$.

Given a stable vector bundle $B = (B_r, \gamma_r, g_r)$ we can form the stable vector bundle $B \times X := (B_r \times X, \gamma_r \times X, g_r \times \id_X)$. The following simple lemma is clear but often useful.

Lemma 3.2. For any space $X$ there is an isomorphism $\Omega_n^B(X) \cong \Omega_n^{B \times X}$.

## 4 The orientation homomorphism

We fix a local orientation at the base-point of $BO$. It then follows that every closed $B$-manifold $(M, \bar \nu)$ is given a local orientation. This amounts to a choice of fundamental class of $M$, which is a generator $\displaystyle [M] \in H_n(M; \underline{\Zz})$ where $\underline{\Zz}$ denotes the local coefficient system defined by the orientation character of $M$.
Given a closed $B$-manifold $(M, \bar \nu)$ we can use $\bar \nu$ to push the fundamental class $[M]$ forward to $\bar \nu_*[M] \in H_n(B; \underline{\Zz})$. Now the local coefficient system is defined by the orientation character of the stable bundle $B$. It is easy to check that $\bar \nu_*[M]$ depends only on the $B$-bordism class of $(M, \bar \nu)$ and is additive with respect to the operations $+/-$ on $\Omega_n^B$.

Definition 4.1. Let $B$ be a fibred stable vector bundle. The orientation homomorphism is defined as follows: $\displaystyle \rho : \Omega_n^B \to H_n(B; \underline{\Zz}), ~~~[M, \bar \nu] \mapsto \bar \nu_*[M].$

For the singular bordism groups $\Omega_n^B(X)$ we have no bundle over $X$, so in general there is only a $\Zz/2$-valued orientation homomorphism. However, if the first Stiefel-Whitney class of $B$ vanishes, $w_1(B) = 0$, then all $B$-manifolds are oriented in the usual sense and the orientation homomorphism can be lifted to $\Zz$.

Definition 4.2. Let $B$ be a fibred stable vector bundle. The orientation homomorphism in singular bordism is defined as follows: $\displaystyle \rho : \Omega_n^B(X) \to H_n(X; \Zz/2), ~~~ [(M, \bar \nu), f] \mapsto f_*[M].$ If $w_1(B) = 0$ then for all closed $B$-manifolds $[M] \in H_n(M; \Zz)$ and we can replace the $\Zz/2$-coefficients with $\Zz$-coefficients above.

## 5 The Pontrjagin-Thom isomorphism

If $E$ is a vector bundle, let $T(E)$ denote its Thom space. Recall that a fibred stable vector bundle $B = (B_r, \gamma_r, g_r)$ defines a stable vector bundle $(E_r, \gamma_r, \bar g_r)$ where $E_r = \gamma_r^*(EO(r))$. This stable vector bundle defines a Thom spectrum which we denote $MB$.
The $r$-th space of $MB$ is $T(E_r)$. By definition a $B$-manifold $(M, \bar \nu)$ is an equivalence class of $B_r$-structures on $\nu(i)$, the normal bundle of an embedding $i : M \to \Rr^{n+r}$. Hence $(M, \bar \nu)$ gives rise to the collapse map $\displaystyle c(M, \bar \nu) : S^{n+r} \to T(E_r)$ where we identify $S^{n+r}$ with the one-point compactification of $\Rr^{n+r}$, we map via $\bar \nu_r$ on a tubular neighbourhood of $i(M) \subset \Rr^{n+r}$ and we map all other points to the base-point of $T(E_r)$. As $r$ increases these maps are compatibly related by suspension and the structure maps of the spectrum $MB$. Hence we obtain a homotopy class $\displaystyle [c(M, \bar \nu)] =: P((M, \bar \nu)) \in \text{lim}_{r \to \infty}\pi_{n+r}(T(E_r)) = \pi_n(MB).$ The celebrated theorem of Pontrjagin and Thom states in part that $P((M, \bar \nu))$ depends only on the bordism class of $(M, \bar \nu)$.

Theorem 5.1. There is an isomorphism of abelian groups $\displaystyle P : \Omega_n^B \cong \pi_n^S(MB), ~~~[M, \bar \nu] \longmapsto P([M, \bar \nu]).$

For the proof see [Bröcker&tom Dieck1970, Satz 3.1 and Satz 4.9]. For example, if $B = PBO$ is the path fibration over $BO$, then $MB$ is homotopy equivalent to the sphere spectrum $S$ and $\pi_n(S) = \pi_n^S$ is the $n$-th stable homotopy group. On the other hand, in this case $\Omega_n^B = \Omega_n^{fr}$ is the framed bordism group and as a special case of Theorem 5.1 we have

Theorem 5.2. There is an isomorphism $P : \Omega_n^{fr} \cong \pi_n^S$.

The Pontrjagin-Thom isomorphism generalises to singular bordism.

Theorem 5.3.
For any space $X$ there is an isomorphism of abelian groups $\displaystyle P : \Omega_n^B(X) \cong \pi_n^S(MB \wedge X_+)$ where $MB \wedge X_+$ denotes the smash product of the spectrum $MB$ and the space $X$ with a disjoint basepoint added.

## 6 Spectral sequences

For any generalised homology theory $h_*$ there is a spectral sequence, called the Atiyah-Hirzebruch spectral sequence (AHSS), which can be used to compute $h_*(X)$. The $E_2$ term of the AHSS is $H_p(X; h_q(\text{pt}))$ and one writes $\displaystyle \bigoplus_{p+q = n} H_p(X; h_q(\text{pt})) \Longrightarrow h_{n}(X).$ The Pontrjagin-Thom isomorphisms above therefore give rise to the following theorems. For the first we recall that stable homotopy defines a generalised homology theory, and we use the Thom isomorphism with local coefficients: $H_*(MB;A)\cong H_*(B;A_\omega)$.

Theorem 6.1. Let $B$ be a fibred stable vector bundle. There is a spectral sequence $\displaystyle \bigoplus_{p+q = n} H_p(B;\underline{\pi_q^S}) \Longrightarrow \Omega_{n}^B.$

Theorem 6.2. Let $B$ be a fibred stable vector bundle and $X$ a space. There is a spectral sequence $\displaystyle \bigoplus_{p+q = n} H_p(X; \Omega_q^B) \Longrightarrow \Omega_n^B(X).$

Next recall Serre's theorem [Serre1951] that $\pi_i^S \otimes \Qq$ vanishes unless $i=0$, in which case $\pi_0^S \otimes \Qq \cong \Qq$. From the spectral sequences of Theorems 6.1 and 6.2 we deduce the following

Theorem 6.3 (Cf. [Kreck&Lück2005, Thm 2.1]).
If $w_1(B) = 0$ then the orientation homomorphism induces an isomorphism $\displaystyle \rho \otimes \id_{\Qq} : \Omega_n^B \otimes \Qq \cong H_n(B; \Qq).$ Moreover, for any space $X$, $\Omega_n^B(X) \otimes \Qq \cong \bigoplus_{p+q = n} H_p(X; H_q(B; \Qq))$ and if $B$ is connected, the rationalised orientation homomorphism $\rho \otimes \id_{\Qq} : \Omega_n^B(X) \otimes \Qq \to H_n(X; \Qq)$ may be identified with the projection $\displaystyle \bigoplus_{p+q = n} H_p(X; H_q(B; \Qq)) \to H_n(X; H_0(B; \Qq)) = H_n(X; \Qq).$

## 7 Piecewise linear and topological bordism

Let $BPL$ and $BTOP$ denote respectively the classifying spaces for stable piecewise linear homeomorphisms of Euclidean space and origin-preserving homeomorphisms of Euclidean space. Note that while there are honest groups $TOP(n) = \text{Homeo}(\Rr^n, *)$ and $TOP = \text{lim}_{n \to\infty} TOP(n)$, the piecewise linear case requires more care. If $CAT = PL$ or $TOP$, $\gamma : B \to BCAT$ is a fibration and $M$ is a compact $CAT$ manifold then, just as above, we can define a $B$-structure on $M$ to be an equivalence class of lifts of the classifying map of the stable normal bundle of $M$: $\displaystyle \xymatrix{ & B \ar[d]^{\gamma} \\ M \ar[r]^{\nu_M} \ar[ur]^{\bar \nu} & BCAT.}$ Note that $CAT$ manifolds have stable normal $CAT$ bundles classified by maps $\nu_M : M \to BCAT$. Just as before we obtain bordism groups $\Omega_n^B$ of closed $n$-dimensional $CAT$-manifolds with $B$-structure: $\displaystyle \Omega_n^B := \{ [M, \bar \nu ]\}.$ The fibration $B$ again defines a Thom spectrum $MB$ and one asks if there is a Pontrjagin-Thom isomorphism.
The proof of the Pontrjagin-Thom theorem relies on transversality for manifolds and, while this is comparatively easy in the $PL$-category, it was a major breakthrough to achieve this for topological manifolds: achieved in [Kirby&Siebenmann1977] for dimensions other than 4 and then in [Freedman&Quinn1990] in dimension 4. Thus one has

Theorem 7.1. There is an isomorphism $\Omega_n^B \cong \pi_n^S(MB)$.

The basic bordism groups for $PL$ and $TOP$ manifolds, $B = (BCAT = BCAT)$ and $B = (BSCAT \to BCAT)$, are denoted by $\Omega_*^{PL}$, $\Omega_*^{SPL}$, $\Omega_*^{TOP}$ and $\Omega_*^{STOP}$. Their computation is significantly more difficult than that of the corresponding bordism groups of smooth manifolds: there is no analogue of Bott periodicity for $\pi_i(PL)$ and $\pi_i(TOP)$ and so the spectra $MPL$ and $MTOP$ are far more complicated. For now we simply refer the reader to [Madsen&Milgram1979, Chapters 5 & 14] and [Brumfiel&Madsen&Milgram1973]. However, working rationally, the natural maps $O \to PL$ and $O \to TOP$ induce isomorphisms $\displaystyle \pi_i(O) \otimes \Qq \cong \pi_i(PL) \otimes \Qq ~~ \text{and} ~~ \pi_i(O) \otimes \Qq \cong \pi_i(TOP) \otimes \Qq ~~\forall i.$ As a consequence one has

Theorem 7.2. There are isomorphisms $\displaystyle \Omega_i^{SO} \otimes \Qq \cong \Omega_i^{SPL} \otimes \Qq \cong \Omega_i^{STOP} \otimes \Qq ~~ \forall i.$
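As a brief worked example of the rational statements above, consider oriented bordism, $B = (BSO \to BO)$. Here $w_1(B) = 0$ and $H_*(BSO; \Qq)$ is a polynomial algebra on classes dual to the Pontrjagin classes, so the rationalised orientation homomorphism computes $\Omega_*^{SO} \otimes \Qq$ completely; this is Thom's classical computation.

```latex
% Rational oriented bordism via the orientation homomorphism:
\Omega_n^{SO} \otimes \Qq \;\cong\; H_n(BSO; \Qq),
\qquad
H_*(BSO; \Qq) \cong \Qq[p_1, p_2, \ldots ], \quad |p_i| = 4i.
% In particular the groups vanish rationally unless n \equiv 0 \bmod 4, and
\Omega_4^{SO} \otimes \Qq \;\cong\; H_4(BSO; \Qq) \cong \Qq,
% generated by the bordism class of \mathbb{C}P^2, detected by the signature.
```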
Lemma 15.84.1. Let $R$ be a ring. Let $K \in D(R)$ with $H^i(K) = 0$ for $i \not\in \{ -1, 0\}$. The following are equivalent:

1. $H^{-1}(K) = 0$ and $H^0(K)$ is a projective module, and
2. $\mathop{\mathrm{Ext}}\nolimits^1_R(K, M) = 0$ for every $R$-module $M$.

If $R$ is Noetherian and $H^i(K)$ is a finite $R$-module for $i = -1, 0$, then these are also equivalent to

3. $\mathop{\mathrm{Ext}}\nolimits^1_R(K, M) = 0$ for every finite $R$-module $M$.

Proof. The equivalence of (1) and (2) follows from Lemma 15.68.2. If $R$ is Noetherian and $H^i(K)$ is a finite $R$-module for $i = -1, 0$, then $K$ is pseudo-coherent, see Lemma 15.64.17. Thus the equivalence of (1) and (3) follows from Lemma 15.77.4. $\square$
# Let (X,d) be a compact metric space. For every open cover, show there exists ε > 0 such that for every x ∈ X, B(x,ε) is contained in some member of the cover

Let (X,d) be a compact metric space. For every open cover, show there exists ε > 0 such that for every x ∈ X, B(x,ε) is contained in some member of the cover.

My attempt: (X,d) is compact. Therefore there exists a finite subcover of X. Any element x in X must lie in some member of the cover, say x ∈ Ui. Otherwise they would not constitute a cover. Since Ui is open, by definition every point is interior, so there exists ε > 0 such that B(x,ε) is contained in Ui.

I haven't used the fact the subcover is finite, or the fact X is a metric space rather than just a topological space, so I feel my reasoning is flawed. Any help is greatly appreciated!

• You did not state compactness as a hypothesis either in the question or the title. Where did compactness come from? Dec 30 '19 at 11:55
• See en.wikipedia.org/wiki/Lebesgue%27s_number_lemma for a proof when the space is compact. Dec 30 '19 at 11:56
• @KaviRamaMurthy my mistake, it should be there. I've edited it now. – srh Dec 30 '19 at 11:56
• Without compactness this is false. Dec 30 '19 at 11:57
• Then your proof is not okay. You proved that for every $x\in X$ there is some $\epsilon_x>0$ such that... Here $\epsilon_x$ depends on $x$. See the link of @KaviRamaMurthy for the other option, which is more difficult to prove. Dec 30 '19 at 12:09

The Wiki proof linked in the comments uses the fact that a continuous function on a compact set reaches its extrema. If you want a proof from scratch and closer to what you are trying to do, here are a few hints:

1). Let $\mathcal A$ be an open cover of $X$. For each $x\in X$ there is an open neighborhood $B_{\epsilon_x}(x)$ such that each such $B$ is contained in an element of $\mathcal A$.

2). The $B$'s give you *another* open cover of $X$.
3). Take a finite subcover of the cover from 2) and note that you also get a finite number of $\epsilon_x$'s.

4). Using the conclusion in 3), define $\delta>0$ appropriately to conclude.

The problem with that proof is you have an $\epsilon$ defined for each $x$. You need to prove there is an $\epsilon$ that works for all $x$ but is independent of the value of $x$.

You are given an open cover $U$ consisting of open sets $U_\alpha$. We know this has a finite subcover, but we should resist assuming we know it and actually work on finding it in terms of the $x \in X$. As you point out, for each $x\in X$ there is a $U_\zeta \in U$ so that $x\in U_\zeta$. And there is an $\epsilon_x$ (for that $x$, not necessarily for all $x$) so that $B(x,\epsilon_x) \subset U_\zeta$. Fine. (Hi... I'm a time traveler from the future. I'm sticking in an extra step right now. We also have $B(x,\frac {\epsilon_x}2) \subset B(x,\epsilon_x)\subset U_\zeta$. I'll explain why I'm doing that later.)

If we collect these neighborhoods $B(x,\epsilon_x)$ into a collection $\mathcal B=\{B(x,\epsilon_x)\mid x\in X\}$... well, it's pretty easy to show that $\mathcal B$ is an open cover of $X$! So $\mathcal B$ has a finite subcover. (Hi... Time traveler again. We also have that $\mathcal C=\{B(x,\frac {\epsilon_x}2)\mid x\in X\}$ is an open cover with a finite subcover.)

Which, to put it in other words: there is a finite subset $\{x_1,\ldots, x_n\}\subset X$ so that $\{B(x_i, \epsilon_{x_i})\mid x_i \in \{x_1,\ldots,x_n\}\}\subset \mathcal B$ and $X\subset \cup_{i=1}^n B(x_i, \epsilon_{x_i})$. (Hi... remember me? The time traveler? Just noting that there is also a subset of $\mathcal C$ that will cover $X$, with a subset $\{w_1,\ldots, w_m\}\subset X$ that acts as an index.)

Now there are a finite number of $\epsilon_x$'s, so there must exist $E=\min\{\epsilon_{x_i}\} > 0$ (Hi...
there also must be an $$E'=\min\{\frac {\epsilon_{w_j}}2\} > 0$$), and every $$x$$ must be in some $$B(x_i,\epsilon_{x_i})$$. So $$B(x, E) \subset B(x_i,\epsilon_{x_i}) \subset U_\zeta$$ for some $$U_\zeta$$ and we are done and... Oh SHHHHHugar!!!!!! Although $$d(x,x_i) < \epsilon_{x_i}$$ and $$E\le \epsilon_{x_i}$$, that doesn't mean that for any $$y\in B(x,E)$$ we have $$y \in B(x_i,\epsilon_{x_i})$$, as $$d(y,x_i) \le d(y,x)+d(x,x_i)< E + \epsilon_{x_i} \not < \epsilon_{x_i}$$. If only there were some way I could travel back in time and fix my mistake. warping weird music. Hi. For every $$x\in X$$, $$x\in B(w_j, \frac {\epsilon_{w_j}}2)$$ for some $$w_j$$. Thus for any $$y\in B(x, E')$$, $$d(y, w_j) \le d(y,x)+ d(x,w_j) < E' + \frac {\epsilon_{w_j}}2 \le \frac {\epsilon_{w_j}}2+\frac {\epsilon_{w_j}}2 = \epsilon_{w_j}$$, and so .... $$B(x, E')\subset B(w_j, \epsilon_{w_j})\subset U_\zeta$$ for some $$U_\zeta\in U$$. And we are done. • Thank you, this is a fantastic explanation. – srh Jan 6 '20 at 19:50 Let $$\mathcal{A}$$ be an open cover of $$X$$. For each $$x \in X$$ we can find $$A_x \in \mathcal{A}$$ such that $$x \in A_x$$, and as $$A_x$$ is open there is some $$r(x)>0$$ such that $$B(x,2r(x)) \subseteq A_x\tag{1}$$ (Note that we use $$2r(x)$$ to leave ourselves some room; openness guarantees us some $$s>0$$ and we just use half of it.) Then $$\{B(x, r(x)): x \in X\}$$ is an open cover of $$X$$, so by compactness we have a finite subcover $$\{B(x_1, r(x_1)), \ldots, B(x_n, r(x_n))\}$$. Now define $$\delta=\min_{i=1}^n r(x_i) > 0$$ (a minimum of finitely many positive reals), and I claim this $$\delta$$ is as required: let $$x \in X$$; we have to show $$B(x, \delta)$$ is contained in some member of $$\mathcal{A}$$. Firstly, note that $$x \in B(x_i, r(x_i))$$ for some $$i \in \{1,\ldots, n\}$$, as the subcover is a cover. 
So $$d(x,x_i) < r(x_i)$$, and if now $$y \in B(x, r(x_i))$$ is arbitrary, $$d(y,x) < r(x_i)$$ and the triangle inequality then tells us that $$d(y,x_i) \le d(y,x)+d(x,x_i) < r(x_i) + r(x_i)=2r(x_i)\text{, so } y \in B(x_i, 2r(x_i))$$ And as $$y$$ was arbitrary, $$B(x, r(x_i)) \subseteq B(x_i, 2r(x_i))$$. Now it's obvious that $$\delta \le r(x_i)$$ and so $$B(x, \delta) \subseteq B(x, r(x_i)) \subseteq B(x_i, 2r(x_i)) \subseteq A_{x_i} \in \mathcal{A}$$ and this finishes the proof. You have proved that for all $$x$$, there exists an $$\varepsilon$$ so that $$B(x,\varepsilon)$$ is contained in some member of the cover - which does not rely on the fact that the chosen subcover is finite or on compactness. This is a much weaker statement than the one you wished to prove. Indeed, you can convince yourself that a proof has to be more intricate by considering that, even if your cover were finite, that does not mean it has the desired property without compactness; for instance, in the space $$[0,1/2)\cup (1/2,1]$$, the cover consisting of the two components does not satisfy the desired property. Since your proof apparently would apply to any finite cover, you can see that something must be wrong with it; more generally, we can see that taking a finite subcover of the cover we were given is probably not going to help us. A very fast way to do this, however, is to start with your open cover $$\mathscr U$$. For each $$x\in X$$, you can define $$f(x)=\max \{\varepsilon > 0 : B(x,\varepsilon) \subseteq U\text{ for some }U\in \mathscr U\}$$ You are trying to show that $$f(x)\geq \varepsilon$$ for some fixed $$\varepsilon > 0$$ and all $$x$$. It's worth taking a moment to consider why this can be assumed to be a maximum rather than a supremum and to convince yourself that this function is continuous. You can also convince yourself that this function is positive everywhere, since every $$x$$ has some $$B(x,\varepsilon)$$ contained in some $$U$$. 
A standard way to finish this proof is to apply the extreme value theorem to $$f$$ and find a minimum - but there's an easier way that uses the definition of compactness directly. In particular, let $$A_{n}$$ be the set of $$x\in X$$ that satisfy $$f(x)>1/n$$. Clearly, every $$x$$ is in some $$A_n$$ - so the set $$\{A_1,A_2,\ldots\}$$ is a cover of $$X$$. Note also that $$A_1\subseteq A_2\subseteq A_3 \subseteq \ldots$$, so the union of any finite subset of this cover is an element of the cover. By compactness, this has a finite subcover - and by the prior note, this means that $$A_n=X$$ for some $$n$$. Then you have the desired statement for $$\varepsilon = 1/n$$. Generally, this is a good approach to such questions: if you're trying to prove a quantity can be taken to be uniform across the space (i.e. to turn "$$\forall\exists$$" into "$$\exists\forall$$"), you can often proceed by looking at sets where some fixed constant suffices, and seeing if you can use compactness (or whatever other property you have) to show that one of those sets is the whole space. Note also that here we are taking a finite subcover of a cover we constructed - it would not help us to take a finite subcover of $$\mathscr U$$, even though it seems like a good idea.
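As a numerical sanity check (not a proof), the lemma can be illustrated on a concrete example. The sketch below uses an assumed, illustrative two-interval cover of X = [0,1]; it brute-forces, over a grid, the largest radius f(x) of a ball around x that fits inside a single cover member, and takes the minimum, mirroring the extreme-value-theorem approach above.

```python
# Numerical illustration of a Lebesgue number for X = [0, 1] with the
# (assumed, illustrative) open cover U1 = (-0.1, 0.6), U2 = (0.4, 1.1).
cover = [(-0.1, 0.6), (0.4, 1.1)]

def f(x):
    # largest r such that the interval (x - r, x + r) fits inside
    # some single member of the cover
    return max(min(x - a, b - x) for a, b in cover)

grid = [i / 1000 for i in range(1001)]   # sample points of [0, 1]
delta = min(f(x) for x in grid)          # candidate Lebesgue number
print(round(delta, 6))                   # 0.1, attained at x = 0.5
```

Every ball B(x, 0.1) here lies entirely inside U1 or U2, matching the positive minimum the proofs above guarantee.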
http://mathhelpforum.com/number-theory/170018-z-not-field.html
# Thread: Z is not a field. 1. ## Z is not a field. Why is $\mathbb{Z}$ not a field? 2. Originally Posted by VonNemo19 Why is $\mathbb{Z}$ not a field? A field requires that every nonzero element have a multiplicative inverse. The only invertible elements of the integers are plus and minus one. For example, $2$ is not invertible in $\mathbb{Z}$: $2x=1$ has no integer solution, as $\frac{1}{2} \notin \mathbb{Z}$. 3. Oh, OK. I was looking and looking at the definition of a field and comparing it to the properties of the integers and I couldn't find the missing ingredient. So, this little property of the integers, namely that not every element has a multiplicative inverse, is the only condition of the definition of a field that is not satisfied, correct? 4. Yes.
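The missing axiom can even be checked mechanically over a finite window of integers. This quick sketch brute-forces which n solve n·x = 1 for some integer x in the same window:

```python
# Which integers n (|n| <= 10) have a multiplicative inverse x (|x| <= 10)
# with n * x == 1?  In Z, only the units -1 and 1 do.
invertible = [n for n in range(-10, 11)
              if n != 0 and any(n * x == 1 for x in range(-10, 11))]
print(invertible)  # [-1, 1]
```

Every other nonzero integer, such as 2, fails the field axiom exactly as described above.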
https://mathematica.stackexchange.com/questions/57638/how-can-i-plot-a-circular-region-and-lines
# How can I plot a circular region and lines? I am having difficulty plotting something along these lines in Mathematica. I will explain what is going on here. Basically, I am solving an interval equation $ax=b$, in the first example $a = [1, 20]$ and $b =[3, 4]$ and therefore $b/a = [3/20, 4]$. As you can see from the graph I want a circle representing the projective line. A straight line will come off the two end points of the interval which represents the solutions and those lines will go through the center of the circle and the region bounded by the lines should be shaded in. Similarly the other example below is an unbounded one where $a = [-1,2]$ and $b = [5,100]$ and hence $b/a = (-\infty, -5] \cup [5/2, \infty]$. Same type of plot though, the lines come off $-5$ and $5/2$ on the x axis and go through the center of the circle and the bounded regions are shaded. • How are the circles constructed? Are they unit circles centered at {0, -1}. Aug 18, 2014 at 18:54 • Yes they are :) Aug 18, 2014 at 19:17 • Oh also letting you know that my solution to the interval equation will be given as a system of inequalities so for the first example above I will be given the following: x1 >= 0 && 1*x1 <= 4 && 20*x1 >= 3 || x1 <= 0 && 20*x1 <= 4 && 1*x1 >= 3 Aug 18, 2014 at 19:20 • It would help to know what you have tried, to avoid unnecessarily spending time explaining things you already did. – Jens Aug 18, 2014 at 19:30 • I am completely new to mathematica I have no clue to be honest and need this desperately as soon as possible. Could someone please help? My idea is to figure out the points from the inequalities somehow then make a function that computes a straight line that goes through that point on the x axis and (0,-1) then fill in the region :/ Aug 18, 2014 at 19:39 This is not a complete solution but it may help. 
Manipulate[ Graphics[{{Circle[{0, -1}, 1]}, {Blue, Disk[{0, -1}, 1, {ArcTan[1/b], ArcTan[1/a]}]}, {Blue, Disk[{0, -1}, 1, \[Pi] + {ArcTan[1/b], If[b < 0, \[Pi], 0] + ArcTan[1/a]}]}, {Green, Line[{{{a, 0}, {0, -1}}, {{b, 0}, {0, -1}}}]}}, Frame -> True, PlotRange -> {{-5, 5}, {-3, 1}}, Axes -> True], {a, .1, 3}, {b, -5, 5}] • This is incredible, thanks! How do I modify it so I can see the circle itself as well, rather than just the shaded-in areas? Aug 18, 2014 at 20:06 • If I substitute in -5 and 5/2 for a and b respectively, it doesn't exactly plot what I am looking for. It doesn't shade the bottom part? Looks really close though. Any idea how to fix that? Thanks Aug 18, 2014 at 20:13 • @Kadir (1) add Circle[{0, -1}] to the list of directives. Aug 18, 2014 at 20:26 • @Kadir I have already added the circle. For the range -5 and 5/2, you may change the plot range to include the far distances of a or b. I have already done that. Aug 18, 2014 at 20:30 • You are a brilliant, brilliant man. Just one last little thing: which value of the intervals have you referred to as a and b? I am getting a tad confused when plotting. In the bounded case the interval is of the form [a,b]; in the unbounded case it's of the form (-infinity, a] UNION [b, +infinity). I think you may have swapped the two by mistake, but I am not sure? Aug 18, 2014 at 20:37
http://www.helpteaching.com/questions/CCSS.Math.Content.HSN-CN.B.4
# Common Core Standard HSN-CN.B.4 Questions (+) Represent complex numbers on the complex plane in rectangular and polar form (including real and imaginary numbers), and explain why the rectangular and polar forms of a given complex number represent the same number. You can create printable tests and worksheets from these questions on Common Core standard HSN-CN.B.4! Select one or more questions using the checkboxes above each question. Then click the add selected questions to a test button before moving to another page. Grade 11 Complex Numbers CCSS: HSN-CN.B.4 The point $2-5i$ is in which quadrant? 1. $I$ 2. $II$ 3. $III$ 4. $IV$ Grade 11 Complex Numbers CCSS: HSN-CN.B.4 What are the polar coordinates of $2-5i$ to one decimal place? 1. $(5, -68deg)$ 2. $(5.4, 68.2deg)$ 3. $(5.4, -68.2deg)$ 4. $(5, 68deg)$ Grade 11 Complex Numbers CCSS: HSN-CN.B.4 The point $5+7i$ is in which quadrant? 1. $I$ 2. $II$ 3. $III$ 4. $IV$ Grade 11 Complex Numbers CCSS: HSN-CN.B.4 What are the polar coordinates of $-3+7i$ to one decimal place? 1. $(3.7, -10deg)$ 2. $(-7.6, -113.1deg)$ 3. $(76, 113deg)$ 4. $(7.6, 113.2deg)$ Grade 11 Complex Numbers CCSS: HSN-CN.B.4 The point $1+i$ is in which quadrant? 1. $I$ 2. $II$ 3. $III$ 4. $IV$
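For questions like these, the rectangular-to-polar conversion can be checked with Python's standard cmath module. This sketch runs the points from the questions above through it, with angles converted to degrees:

```python
import cmath
import math

# Polar form (r, theta) of the complex points from the questions above.
for z in (2 - 5j, 5 + 7j, -3 + 7j, 1 + 1j):
    r, theta = cmath.polar(z)  # r = |z|, theta = arg(z) in radians
    print(z, "->", round(r, 1), round(math.degrees(theta), 1))
```

For example, 2 - 5i gives (5.4, -68.2deg), placing it in quadrant IV, and -3 + 7i gives (7.6, 113.2deg), quadrant II, matching the answer choices above.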
http://mathhelpforum.com/differential-geometry/125326-boundedness-removed-print.html
# Boundedness Removed • January 24th 2010, 09:25 PM frenchguy87 Boundedness Removed Given the following theorem, Thm: Let X be a bounded sequence of reals and let x have the property that every convergent subsequence of X converges to x. Then the sequence X converges to x. Give an example to show that the theorem fails if the hypothesis that X is bounded is removed. • January 24th 2010, 10:09 PM Drexel28 Quote: Originally Posted by frenchguy87 Given the following theorem, Thm: Let X be a bounded sequence of reals and let x have the property that every convergent subsequence of X converges to x. Then the sequence X converges to x. Give an example to show that the theorem fails if the hypothesis that X is bounded is removed What do you think? If we removed boundedness, what do you think is the obvious place to look? • January 25th 2010, 04:08 AM frenchguy87 I was thinking outside the original bound M • January 25th 2010, 05:21 AM HallsofIvy 1, 1/2, 2, 1/3, 3, 1/4, 4, 1/5, 5, ... • January 25th 2010, 06:49 AM frenchguy87 I'm not sure that works, since every subsequence has to converge to the same limit x. I might be understanding it wrong though • January 25th 2010, 07:03 AM Defunkt Quote: Originally Posted by frenchguy87 I'm not sure that works, since every subsequence has to converge to the same limit x. I might be understanding it wrong though Every convergent subsequence will converge to the same limit. In HallsofIvy's example, any convergent subsequence will be of the form $\frac{1}{n_k}, n_k \to \infty$ starting from some $N\in \mathbb{N}$, and will thus converge to 0.
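HallsofIvy's counterexample can be sketched numerically: interleave 1/n with n. The n terms make the sequence unbounded (hence divergent), yet every convergent subsequence must eventually live among the 1/n terms, so its limit is 0.

```python
# First 2N terms of the interleaved sequence 1, 1, 1/2, 2, 1/3, 3, ...
# (a harmless reordering of 1, 1/2, 2, 1/3, 3, ...): it is unbounded,
# but its only subsequential limit is 0.
N = 1000
seq = []
for n in range(1, N + 1):
    seq.extend([1 / n, n])

print(max(seq))        # 1000: grows without bound as N grows
print(seq[0::2][-1])   # 0.001: the 1/n subsequence heads to 0
```

So the hypotheses of the theorem minus boundedness are satisfied (every convergent subsequence tends to 0), yet the sequence itself does not converge.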
https://www.physicsforums.com/threads/question-about-entalpy-change.619157/
1. Jul 7, 2012 ### Carcul Consider a closed thermodynamic system capable of exchanging energy in the form of work only as PV work. Under these conditions, for an isobaric process we find that the heat exchanged equals the enthalpy change. Now what about the reverse? For such a system, if for some process the heat exchanged equals the change in enthalpy, can we conclude that the pressure has remained constant? If not, can you find a counterexample? 2. Jul 10, 2012 ### Andrew Mason Yes, but only if ∂W = PdV. You can prove this from the definition of enthalpy: H = U + PV dH = dU + PdV + VdP If dH = ∂Q = dU + ∂W then ∂W = PdV + VdP If ∂W = PdV then VdP = 0 which implies that dP = 0 (isobaric) AM 3. Jul 10, 2012 ### Carcul Thank you very much. But why does ΔH = Q imply dH = δQ? Last edited: Jul 10, 2012 4. Jul 10, 2012 ### Studiot It doesn't. Neither the heat nor the work exchanged are true differentials; they are actual values, so it is wrong to talk of delta (of any sort) q or w. Some people prefer capitals, some prefer lower case, and some (as Andrew has done) use the Greek delta to show this. But the bottom line is that the heat exchanged is the heat exchanged; it is not a small change in the heat exchanged.
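Andrew Mason's identity dH = dU + PdV + VdP can be sanity-checked numerically. The sketch below uses an assumed concrete system, an ideal monatomic gas with n = 1 mol, and a small perturbation of state; the first-order expansion should match the exact change in H to leading order.

```python
# Check dH ≈ dU + P dV + V dP for H = U + P V, using an ideal monatomic
# gas (P = RT/V, U = 1.5 RT, one mole) as a convenient concrete system.
R = 8.314  # J/(mol K)

def state(T, V):
    P = R * T / V
    U = 1.5 * R * T
    return P, U, U + P * V          # returns (P, U, H)

T1, V1 = 300.0, 0.025
T2, V2 = 300.3, 0.02501             # small perturbation of the state
P1, U1, H1 = state(T1, V1)
P2, U2, H2 = state(T2, V2)

dH_exact  = H2 - H1
dH_linear = (U2 - U1) + P1 * (V2 - V1) + V1 * (P2 - P1)
print(abs(dH_exact - dH_linear) / abs(dH_exact))  # small: first-order agreement
```

The relative discrepancy is of the order of the perturbation size, as expected for a first-order differential identity.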
https://www.hackmath.net/en/math-problem/39961
# If you 5 If you have 0.08, what is the form in thousandths? x = 0.08 ### Step-by-step explanation: $x=\frac{80}{1000}=\frac{2}{25}=0.08$ So 0.08 is 80 thousandths.
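The conversion can be verified with Python's exact fractions (a quick check, not part of the original solution):

```python
from fractions import Fraction

x = Fraction(8, 100)                # 0.08
print(x == Fraction(80, 1000))      # True: 0.08 is 80 thousandths
print(x)                            # 2/25, the fraction in lowest terms
```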
http://www.khronos.org/registry/vulkan/specs/1.0/man/html/VkClearAttachment.html
## C Specification The VkClearAttachment structure is defined as: typedef struct VkClearAttachment { uint32_t colorAttachment; VkClearValue clearValue; } VkClearAttachment; ## Members • aspectMask is a mask selecting the color, depth and/or stencil aspects of the attachment to be cleared. aspectMask can include VK_IMAGE_ASPECT_COLOR_BIT for color attachments, VK_IMAGE_ASPECT_DEPTH_BIT for depth/stencil attachments with a depth component, and VK_IMAGE_ASPECT_STENCIL_BIT for depth/stencil attachments with a stencil component. If the subpass’s depth/stencil attachment is VK_ATTACHMENT_UNUSED, then the clear has no effect. • colorAttachment is only meaningful if VK_IMAGE_ASPECT_COLOR_BIT is set in aspectMask, in which case it is an index to the pColorAttachments array in the VkSubpassDescription structure of the current subpass which selects the color attachment to clear. If colorAttachment is VK_ATTACHMENT_UNUSED then the clear has no effect. • clearValue is the color or depth/stencil value to clear the attachment to, as described in Clear Values below. ## Description No memory barriers are needed between vkCmdClearAttachments and preceding or subsequent draw or attachment clear commands in the same subpass. The vkCmdClearAttachments command is not affected by the bound pipeline state. Attachments can also be cleared at the beginning of a render pass instance by setting loadOp (or stencilLoadOp) of VkAttachmentDescription to VK_ATTACHMENT_LOAD_OP_CLEAR, as described for vkCreateRenderPass. Valid Usage • If aspectMask includes VK_IMAGE_ASPECT_COLOR_BIT, it must not include VK_IMAGE_ASPECT_DEPTH_BIT or VK_IMAGE_ASPECT_STENCIL_BIT • aspectMask must not include VK_IMAGE_ASPECT_METADATA_BIT • clearValue must be a valid VkClearValue union Valid Usage (Implicit) • aspectMask must be a valid combination of VkImageAspectFlagBits values • aspectMask must not be 0
https://gmat.la/question/OG2018Q-PS-88
What is the thousandths digit in the decimal equivalent of ~$53 \over 5000$~?

0
1
3
5
6

##### Explanation

~$\frac{53}{5000}=\frac{106}{10000}=0.0106$~. The thousandths digit is 0.
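The worked answer can be double-checked with integer arithmetic; the helper below is an illustration, not part of the original solution:

```python
# The place-th digit after the decimal point of a/b is floor(a * 10^place / b) mod 10.
def decimal_digit(a: int, b: int, place: int) -> int:
    return (a * 10**place // b) % 10

# 53/5000 = 0.0106 -> places 1..4 give the digits 0, 1, 0, 6
thousandths = decimal_digit(53, 5000, 3)  # -> 0
```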
http://mathoverflow.net/questions/142775/le-haut-commissariat-qui-surveille-rigoureusement-lalignement-de-ses-grandes-py
# Le Haut Commissariat qui surveille rigoureusement l'alignement de ses Grandes Pyramides

Yesterday I came across the following one-paragraph summary of the history of the Law of Quadratic Reciprocity in Roger Godement's Analyse mathématique, IV, p. 313 (perhaps the only treatise on Analysis which contains a statement of the Law in question).

Legendre a deviné la formule et Gauss est devenu instantanément célèbre en la prouvant. En trouver des généralisations, par exemple aux anneaux d'entiers algébriques, ou d'autres démonstrations a constitué un sport national pour la dynastie allemande suscité par Gauss jusqu'à ce que le reste du monde, à commencer par le Japonais Takagi en 1920 et à continuer par Chevalley une dizaine d'années plus tard, découvre le sujet et, après 1945, le fasse exploser. Gouverné par un Haut Commissariat qui surveille rigoureusement l'alignement de ses Grandes Pyramides, c'est aujourd'hui l'un des domaines les plus respectés des Mathématiques.

Which Haut Commissariat is he referring to? Or is it just a joke?

I would guess it's a joke, although Godement certainly must have had something in mind when he used these words; perhaps he was simply referring to Langlands' program. –  Franz Lemmermeyer Sep 21 '13 at 7:05

BTW there are quite a few textbooks on complex analysis that state and prove the quadratic reciprocity law. –  Franz Lemmermeyer Sep 21 '13 at 7:06

Yes, he's surely referring to the Langlands Programme, but it is somewhat funny to call it the Haut Commissariat of something or the other. Godement is a good friend of Langlands, by the way, and one of the three people who are gratefully mentioned in the acceptance speech when Langlands received the Grande Médaille d'Or of the Académie des sciences: publications.ias.edu/sites/default/files/discours-ps.pdf –  Chandan Singh Dalawat Sep 21 '13 at 7:57

Langlands writes "...
je veux nommer trois mathématiciens qui se donnèrent la peine de persuader le jeune homme [the young Langlands], bien plus modeste que moi, qu'il valait quelque chose: Salomon Bochner, né je crois à Cracovie en Pologne, Harish-Chandra, né à Kanpur en Inde, tous les deux devenus mathématiciens américains, et Roger Godement, mathématicien français. Il me serait impossible d'exprimer en quelques phrases courtes combien lui, il leur devait, et combien moi, je leur dois toujours." –  Chandan Singh Dalawat Sep 21 '13 at 8:05

For those who might have missed it, Roger Godement has graced MathOverflow once (thanks to Anton for helping find the link): mathoverflow.net/questions/91385/… –  Chandan Singh Dalawat Sep 22 '13 at 4:48

I disagree with Michael Grünewald's interpretation, which by the way doesn't answer the initial question: who is Godement referring to? I think this is a joke made without acrimony. "Thought police" and "innovation preventing" are much too strong phrases to translate Godement's light ironical quotation. To a French-speaking ear, "Haut Commissariat" in this context evokes the "Commissariat Général au Plan", created by the administration led by de Gaulle in 1946 (and including a large political spectrum, from right wing to communists). It was an institution without real power but which was supposed to prepare non-compulsory "plans" to develop the economy for the next five years, the idea being to take advantage of whatever was thought efficient in Soviet-like planning while staying essentially a free-market economy. (Of course there are other institutions with that name, like the UN's "haut-commissariat aux réfugiés", but it is really the plan one that comes to mind.) So back to quadratic reciprocity, I may be completely wrong but I imagine that the Haut-Commissaire in question might be R.P.
Langlands and his huge program that has provided a non-compulsory, but hugely influential, planning for research in "higher class field theory" for more than 40 years.

I read, after writing my answer, the comments under the question, with which I agree. Godement was very close to this "haut-commissariat"; perhaps he even considered himself a member of it :-). After all, he was Jacquet's advisor. There was a touching fear in the groups of mathematicians to which Godement belonged: the fear of becoming what they called a "mandarin", an installed mathematician detaining (and thus detained by) a large power over the developments. Self-irony (or irony aimed at friends and students) was seen as a way to protect oneself against such an evolution. –  Joël Sep 21 '13 at 14:40

"I think this is a joke made without acrimony." I also do! ""Thought police", "innovation preventing", are much too strong phrases to translate Godement's light ironical quotation." You are definitely right, but irony is one of the hardest things to deal with for non-native speakers. This is why I chose to rephrase the excerpt without any subtlety. –  Michael Grünewald Sep 21 '13 at 16:10

This "Haut Commissariat" is not a formal organisation, but a fictional organisation he invented to make an ironical statement. Here is how I would rephrase his statement in a non-ironical way: This subject [the legacy of Legendre, Gauss, Takagi and Chevalley] fell under the control of a Thought Police preventing any innovation in the field. (Thought Police refers to Orwell's novel 1984, but is probably clear enough by itself.) One could almost understand that participants of this "Thought Police" take care of pushing newcomers aside, to make sure that old respectable problems are not resolved by anybody but themselves—if some—but this would really be one step further.
I base my reading on the usual opposition between innovation, imagination and freedom on the one hand, and conservatism, respect and police on the other. I have also read several books by Godement, so I may hope I do not abuse his statement too much!

I'm sure he didn't mean any such thing, and it is not true at all that there is a "Thought Police preventing any innovation in the field". The new ideas in this field have no parallel in the history of mankind. –  Chandan Singh Dalawat Sep 21 '13 at 7:26

I think "thought police" is putting it a little too strongly, but I would rather agree with the idea that he meant the remark as somewhat caustic towards the "establishment" of number theory and its inclination to defend a certain orthodox view of (pure) mathematics. I don't see how it could be viewed otherwise, since "Haut Commissariat" refers to a high-level official agency of the French government, which he certainly didn't hold close to his heart. –  Jean Raimbault Sep 21 '13 at 11:58

Comments under the question, posted 6 and 7 hours ago now, make these comments posted under this answer in the past 6 hours seem unlikely, as they suggest Godement is personally friendly to Langlands (who would surely be in any "establishment" of number theory) and involved in promoting his work. And it is hard to guess what "establishment" Godement would say is merely preserving old ideas. –  Colin McLarty Sep 21 '13 at 14:26

@ColinMcLarty I will always love and help my friends, which does not forbid me to sometimes disagree with or make fun of their positions or deeds. –  Michael Grünewald Sep 21 '13 at 16:23

@MichaelGrünewald You have a broad and warm sense of friendship, but that by itself does not make a persuasive argument for your reading of Godement. –  Colin McLarty Sep 22 '13 at 1:20
https://zbmath.org/?q=an:1264.62087&format=complete
# zbMATH — the first resource for mathematics

Combining forecasts using the least trimmed squares. (English) Zbl 1264.62087

Summary: Employing a recently derived asymptotic representation of the least trimmed squares estimator, combinations of forecasts with constraints are studied. Under the assumption of the unbiasedness of the individual forecasts, it is shown that the combination without intercept and with the constraint that the estimates of the regression coefficients sum to one is better than the others. A numerical example is included to support the theoretical conclusions.

##### MSC:
62M20 Inference from stochastic processes and prediction
62F30 Parametric inference under constraints
62M10 Time series, auto-correlation, regression, etc. in statistics (GARCH)
65C60 Computational problems in statistics (MSC2010)

##### Keywords:
regression coefficients

##### References:
[1] Bates J. M., Granger C. W. J.: The combination of forecasts. Oper. Res. Quarterly 20 (1969), 451-468
[2] Bickel P. J.: One-step Huber estimates in the linear model. J. Amer. Statist. Assoc. 70 (1975), 428-433 · Zbl 0322.62038
[3] Clemen R. T.: Linear constraints and efficiency of combined forecasts. J. of Forecasting 6 (1986), 31-38
[4] Hampel F. R., Ronchetti E. M., Rousseeuw P. J., Stahel W. A.: Robust Statistics - The Approach Based on Influence Functions. Wiley, New York 1986 · Zbl 0733.62038
[5] Holden K., Peel D. A.: Unbiasedness, efficiency and the combination of economic forecasts. J. of Forecasting 8 (1989), 175-188 · Zbl 04550514
[6] Huber P. J.: Robust Statistics. Wiley, New York 1981 · Zbl 0536.62025
[7] Jurečková J., Sen P. K.: Regression rank scores scale statistics and studentization in linear models. Proceedings of the Fifth Prague Symposium on Asymptotic Statistics, Physica Verlag, Heidelberg 1993, pp. 111-121
[8] Rao R. C.: Linear Statistical Inference and Its Applications. Wiley, New York 1973 · Zbl 0256.62002
[9] Rubio A. M., Aguilar L. Z., Víšek J.
Á.: Combining the forecasts using constrained $$M$$-estimators. Bull. Czech Econometric Society 4 (1996), 61-72
[10] Rubio A. M., Víšek J. Á.: Estimating the contamination level of data in the framework of linear regression analysis. Qüestiió 21 (1997), 9-36 · Zbl 1167.62388
[11] Varadarajan V. S.: A useful convergence theorem. Sankhyā 20 (1958), 221-222 · Zbl 0088.11303
[12] Víšek J. Á.: Stability of regression model estimates with respect to subsamples. Computational Statistics 7 (1992), 183-203 · Zbl 0775.62182
[13] Víšek J. Á.: Statistická analýza dat. (Statistical Data Analysis - a textbook in Czech.) Publishing House of the Czech Technical University, Prague 1997
[14] Víšek J. Á.: Robust constrained combination of forecasts. Bull. Czech Econometric Society 5 (1998), 8, 53-80
[15] Víšek J. Á.: Robust instruments. Robust'98 (J. Antoch and G. Dohnal, eds.), Union of the Czech Mathematicians and Physicists, Prague 1998, pp. 195-224
[16] Víšek J. Á.: Robust specification test. Proceedings of Prague Stochastics'98 (M. Hušková, P. Lachout and J. Á. Víšek, eds.), Union of Czech Mathematicians and Physicists 1998, pp. 581-586
[17] Víšek J. Á.: Robust estimation of regression model. Bull. Czech Econometric Society 9 (1999), 57-79
[18] Víšek J. Á.: The least trimmed squares - random carriers. Bull. Czech Econometric Society 10 (1999), 1-30
[19] Víšek J. Á.: Robust instrumental variables and specification test. PRASTAN 2000, Proceedings of the conference "Mathematical Statistics and Numerical Mathematics and Their Applications" (M. Kalina, J. Kalická, O. Nanásiová and A. Handlovičová, eds.), Comenius University, pp. 133-164
[20] Víšek J. Á.: Regression with high breakdown point. Proceedings of ROBUST 2000, Nečtiny, Union of the Czech Mathematicians and Physicists and The Czech Statistical Society. Submitted
[21] Víšek J. Á.: A new paradigm of point estimation. Proceedings of seminar "Data Processing", TRYLOBITE, Pardubice 2000.
Submitted
[22] Wald A., Wolfowitz J.: Statistical tests based on permutations of the observations. Ann. Math. Statist. 15 (1944), 358-372 · Zbl 0063.08124
[23] Yohai V. J., Maronna R. A.: Asymptotic behaviour of $$M$$-estimators for the linear model. Ann. Statist. 7 (1979), 258-268 · Zbl 0408.62027

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming completeness or perfect precision of the matching.
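The least trimmed squares idea behind the abstract can be sketched numerically. The toy Python illustration below is my own sketch, not the paper's constrained forecast-combination estimator: it fits plain LTS for simple linear regression by ordinary least squares on the h points with smallest squared residuals, iterating "concentration" steps, and shows how trimming resists gross outliers:

```python
# Toy least trimmed squares (LTS) sketch -- illustrative only, not the
# authors' estimator: keep the h observations with the smallest squared
# residuals, refit, and iterate until the kept subset stabilizes.

def ols(pts):
    # Closed-form ordinary least squares for y = a + b*x.
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def lts(points, h, iters=50):
    subset = sorted(points)[:h]  # deterministic (and naive) starting subset
    for _ in range(iters):
        a, b = ols(subset)
        ranked = sorted(points, key=lambda p: (p[1] - a - b * p[0]) ** 2)
        new = ranked[:h]
        if new == subset:
            break
        subset = new
    return ols(subset)

# Ten clean points on y = 1 + 2x plus two gross outliers:
data = [(x, 1 + 2 * x) for x in range(10)] + [(4, 100), (5, -100)]
a, b = lts(data, h=10)  # trimming lets LTS ignore the two outliers
```

With h = 10 of 12 points kept, the concentration steps discard the two outliers and recover the clean line exactly, whereas a single OLS fit on all 12 points would be badly distorted.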
https://civil.gateoverflow.in/1907/gate-civil-2022-set-1-question-31
As per Rankine's theory of earth pressure, the inclination of failure planes is $\left ( 45 + \frac{\phi }{2} \right )^{\circ}$ with respect to the direction of the minor principal stress. The above statement is correct for which one of the following options? 1. Only the active state and not the passive state 2. Only the passive state and not the active state 3. Both active as well as passive states 4. Neither active nor passive state Answer:
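Independently of which option is correct, the 45 + φ/2 orientation itself can be checked numerically from Mohr's circle. The sketch below is illustrative (it assumes a cohesionless soil at failure, with the Rankine principal stress ratio) and searches for the plane of maximum obliquity, where τ/σ reaches tan φ:

```python
import math

# Mohr's circle check: for a cohesionless soil at failure, the plane of
# maximum obliquity (tau/sigma = tan(phi)) lies at 45 + phi/2 degrees
# from the major principal plane.
phi = math.radians(30)
sigma3 = 100.0
sigma1 = sigma3 * (1 + math.sin(phi)) / (1 - math.sin(phi))  # Rankine ratio at failure
s = (sigma1 + sigma3) / 2.0  # center of Mohr's circle
t = (sigma1 - sigma3) / 2.0  # radius of Mohr's circle

def obliquity(theta_deg):
    # Normal and shear stress on a plane at theta (deg) from the major principal plane.
    two_theta = math.radians(2.0 * theta_deg)
    sigma = s + t * math.cos(two_theta)
    tau = abs(t * math.sin(two_theta))
    return tau / sigma

best = max(range(180), key=obliquity)         # degree-resolution search
theta_failure = 45 + math.degrees(phi) / 2.0  # 60 degrees for phi = 30
```

The search finds the maximum at 60° for φ = 30°, matching 45 + φ/2, and the obliquity there equals tan φ.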
https://moitverhedd.cf/vanilla-options-vs-binary-options-268799.html
July 14, 2020

### Binary Options vs Forex Which One Is Better?

2019/03/22 · A binary option is a financial product where the buyer receives a payout or loses their investment, based on whether the option expires in the money. Difference Between Binary and Vanilla Options.

### Binary Options vs. Vanilla Options

2018/08/03 · And includes some built-in options that are not available in many other wallet apps. He has Jaxx, Bread, Lumi, Coinomi? You can keep this wallet in a vault or some other safe place. To answer that question, Yes, we are all having issues with our …

### Binary Option Definition and Example - Investopedia

Forwards, swaps and vanilla options. Pricing Measures and Applications to Exotic Options. Exotic Options: Vanilla Options Vs Binary Option System K4! In this case, you should buy a binary options contract that predicts that the value of the USD/xxx exchange rate increases (meaning the USD will perform poorly).

### Concept :: Dukascopy Bank SA | Swiss Forex Bank

2019/09/23 · One should fully understand the advantages of vanilla options vs binary options before they begin making even small trades. Options depend on predictions about future prices. Both ordinary options and binary options are predictions about the value of a stock in the future. To succeed at trading options, one must be able to predict the price of

### IQ Option Binary Options and Digital Options, what are the

Learn All About Simple And Effective Binary Options Trading Strategies To Help You Get Vanilla Options Vs Binary Option System K4! Our reviews contain more detail about each broker's mobile app, but most are fully aware that this is a growing area of trading. Sell Bitcoin For Skrill

### Binary Options vs. Vanilla Options in Forex Trading

2017/11/03 · Conclusion: Binary Options vs Forex. As you may have come across many times in this article already, my personal recommendation, especially for novice investors, is definitely binary options.
You get to compete against other beginners from the same starting line, and you might also find that investing can be really fun.

### Binary option - Wikipedia

2017/01/04 · LCG now offers the opportunity to trade vanilla options. New to trading options? This video will take you through the basics of put and call vanilla options.

### Binary Options Vs. Forex - BabyPips.com

Binary Options vs. Vanilla Options. If you ask a professional trader about currency or other asset options, he/she will assume you are talking about a vanilla option or some exotic form of it (e.g. one touch, no touch, etc.). On the other hand, if you ask a retail trader about options, he/she will likely assume you are talking about binary options.

### Binary Options vs. Options: What is the Difference?

Is the delta of a binary option the same as the delta for a regular European option? The payoff of this call spread will dominate and approach exactly the payoff of the binary option (theoretically) in the limit: $\lim_{\epsilon \to 0} \frac{C(S,K) - C(S,K+\epsilon)}{\epsilon} = -\frac{\partial C}{\partial K}(S,K)$, and we see the distinction between the delta of

### What is the difference between options and binary options

2013/12/12 · What Is The Difference Between Binary Options And Day Trading? Find Out What The Differences Are. Binary Options VS Forex Day Trader. Answer: Binary options can run from minutes to hours. Day trading is trading stocks or currency in one day. Related search: Tell me the Difference Between Binary Options And Day Trading? What Is The Difference Between Day Trading And Binary Options

### Iq Option Vs Binary - Was Ist Ota Software Update

Binary options once bought cannot be resold before the expiry time is reached. Binary Options Trading in Detail. Binary options are a simple and rewarding financial trading product. Binary options deliver a fixed return on every trade which is made, depending on whether the trade was "In The Money", "Out Of The Money" or a "Tie".
### Difference between Binary options and Forex?

2018/10/29 · When binary options expire, there can only be two possible outcomes, either 100 or 0. It is for this reason that binary options are at times referred to as digital options. In the case of vanilla options, on the other hand, the expected payoff is variable. In the illustration below, we have the expiry and payout matrix for an option.

### What Is The Difference Between Binary Options And Day

Binary.com is an award-winning online trading provider that helps its clients to trade on financial markets through binary options and CFDs. Trading binary options and CFDs on Synthetic Indices is classified as a gambling activity. Remember that gambling can be addictive – please play responsibly. Learn more about Responsible Trading. Some

### What is "Digital Options" and Why It's Better Than Binary

2013/10/09 · Increases in implied volatility will increase the price of an option whether it's a standard vanilla option or a binary option. The price of a binary option changes with two factors. The first is the payout, and the second is the amount of capital returned to the investor on a losing trade. Given the payout profile for binary options

### Is the delta of a binary option the same as the delta for

This option behaves in every way like a vanilla European call, except if the spot price ever moves above $120, the option "knocks out" and the contract is null and void. Note that the option does not reactivate if the spot price falls below $120 again. Once it is out, it's out for good. In-out parity is the barrier option's answer to put-call parity.

### AvaOptions - Vanilla Options Trading Platform

Introduction: Binary Options vs Digital Options. Jul 20 2017 By Jonathan Smith. The binary options industry continues moving forward. One of the newer products available is the so-called "digital options".
Though they may appear similar in nature to the classic binary options, there are several differences worth pointing out.

### What Is The Difference Between Binary Options And Day

2019/07/14 · Vanilla Options vs Binary Options: What You Should Know. July 14, 2019 by Steve. In investing, an option gives you the right to buy or sell a security by a specific date. Options have different payouts, which means you can speculate on a security in different ways.

### Bread Wallet Api - Vanilla Option Vs Binary Option

Which brings us to the next difference between binary options trading and real options trading. Differences Between Binary Options Trading and Real Options Trading - Cannot be traded vs Can be traded. Binary options "trading" technically isn't trading at all. Trading means being able to buy AND sell.

### What are Vanilla Options? - YouTube

Binary options wiki Q&A. IQ Option is one of the few online brokers that has managed to attract millions of traders from across the globe over a short amount of time. The main reason for this is their innovation and introduction of new features and instruments. One of their latest introductions is digital options …

### IQ Option vs Olymp Trade: Which One is Better? (November 2019)

### Vanilla Options vs Binary Options: What You Should Know

2018/12/27 · A long binary option can be approximated by a bull spread. The tighter you make the spread, the more it will look like a binary option's payoff. So if you wanted to replicate a binary option portfolio struck at x with payoff y, you'd buy n calls with strike x and sell n calls with strike x + an arbitrarily small amount, such that the payoff is y.
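The bull-spread replication mentioned above can be seen directly at the payoff level. The sketch below is illustrative (expiry payoffs only; it ignores pricing, premiums, and discounting): a position long 1/ε calls at strike K and short 1/ε calls at strike K + ε approaches the cash-or-nothing binary call payoff 1{S > K} as ε shrinks:

```python
# Payoff-level sketch of replicating a binary call with a tight bull call spread.

def call_payoff(S, K):
    # Expiry payoff of a vanilla call.
    return max(S - K, 0.0)

def bull_spread_payoff(S, K, eps):
    # Long 1/eps calls at K, short 1/eps calls at K + eps.
    return (call_payoff(S, K) - call_payoff(S, K + eps)) / eps

def binary_call_payoff(S, K):
    # Cash-or-nothing binary call: pays 1 if S ends above K.
    return 1.0 if S > K else 0.0

K = 100.0
# Away from the strike the approximation is already exact for small eps:
approx_itm = bull_spread_payoff(105.0, K, 0.001)  # in the money, ~1.0
approx_otm = bull_spread_payoff(95.0, K, 0.001)   # out of the money, 0.0
```

Only in the narrow ramp between K and K + ε does the spread differ from the binary payoff, which is why the approximation tightens as ε → 0.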
### Forex Options Trading vs. Spot Trading: What's The Difference?

Vanilla Options Vs. Binary Options – A Closer Look

### Traditional Options Versus Binaries - Binary Options Trading

Risk Warning – "Investors can lose all their capital by trading binary options". The luxury to be able to make this choice is not free. There is a contract price that you must pay, usually determined by how many individual units of the asset you are buying and how far away the expiry is. Because you may execute your option at any time prior to the expiry, the further away the expiration

### Vanilla Option Definition - Investopedia

The main difference between the binary and the vanilla options is the fixed outcome of the former: you get a fixed ROI on the contract's price if the option ends in-the-money. This means that binary options have a fixed gain in addition to a fixed loss, which is intrinsic to vanilla options.

### Binary Options VS. Vanilla Options - What are the Differences?

2016/08/09 · A quick word on vanilla options and binary options. If you are interested in trading forex options, you might have come across the words "vanilla options" and "binary options". Vanilla options
https://ncertmcq.com/rs-aggarwal-class-10-solutions-chapter-1-real-numbers-ex-1e/
## RS Aggarwal Class 10 Solutions Chapter 1 Real Numbers Ex 1E

These Solutions are part of RS Aggarwal Solutions Class 10. Here we have given RS Aggarwal Solutions Class 10 Chapter 1 Real Numbers Ex 1E.

Question 1.
Solution: For any two given positive integers a and b there exist unique whole numbers q and r such that a = bq + r, where 0 ≤ r < b. Here, we call 'a' the dividend, b the divisor, q the quotient and r the remainder. Dividend = (Divisor x Quotient) + Remainder.

Question 2.
Solution: Every composite number can be uniquely expressed as a product of primes, except for the order in which these prime factors occur.

Question 3.
Solution: 360 = 2 x 2 x 2 x 3 x 3 x 5 = 2³ x 3² x 5

Question 4.
Solution: We know that the HCF of two primes is 1, so HCF(a, b) = 1.

Question 5.
Solution: If a and b are two prime numbers, then their LCM is the product of the two numbers: LCM(a, b) = a x b = ab.

Question 6.
Solution: We know that the product of two numbers is equal to their HCF x LCM. LCM = $$\frac { Product of two numbers }{ HCF }$$ = $$\frac { 1050 }{ 25 }$$ = 42. LCM of the two numbers = 42.

Question 7.
Solution: A composite number is a number which is not prime. In other words, a composite number has more than two factors.

Question 8.
Solution: If a and b are two primes, then their HCF is 1. HCF of a and b = 1.

Question 9.
Solution: If $$\frac { a }{ b }$$ is a rational number with a terminating decimal expansion, then b will be of the form 2^m x 5^n, where m and n are some non-negative integers.

Question 10.
Solution:

Question 11.
Solution:

Question 12.
Solution: 2^n x 5^n = (2 x 5)^n = 10^n, which always ends in a zero. There is no value of n for which (2^n x 5^n) ends in 5.

Question 13.
Solution: We know that the HCF of two numbers is always a factor of their LCM. But 25 is not a factor of 520. It is not possible to have two numbers with HCF = 25 and LCM = 520.

Question 14.
Solution: Let the two irrational numbers be (5 + √3) and (5 – √3). Now their sum = (5 + √3) + (5 – √3) = 5 + √3 + 5 – √3 = 10, which is a rational number.
Question 15.
Solution: Let the two irrational numbers be (3 + √2) and (3 – √2). Now, their product = (3 + √2)(3 – √2) = (3)² – (√2)² {(a + b)(a – b) = a² – b²} = 9 – 2 = 7, which is a rational number.

Question 16.
Solution: a and b are relative primes, so their HCF = 1.

Question 17.
Solution: LCM of two numbers = 1200 and HCF = 500. But we know that the HCF of two numbers divides their LCM. Since 500 does not divide 1200 exactly, 500 cannot be the HCF of two numbers whose LCM is 1200.

Question 18.
Solution: Let x = $$\bar { 0.4 }$$ = 0.444… Then 10x = 4.444…. Subtracting, we get 9x = 4 => x = $$\frac { 4 }{ 9 }$$. $$\bar { 0.4 }$$ = $$\frac { 4 }{ 9 }$$, which is in the simplest form.

Question 19.
Solution: Let x = $$\bar { 0.23 }$$ = 0.232323……. Then 100x = 23.232323…… Subtracting, we get 99x = 23 => x = $$\frac { 23 }{ 99 }$$. $$\bar { 0.23 }$$ = $$\frac { 23 }{ 99 }$$, which is in the simplest form.

Question 20.
Solution: 0.15015001500015… is a non-terminating, non-repeating decimal. It is an irrational number.

Question 21.
Solution: $$\frac { \surd 2 }{ 3 }$$ = $$\frac { 1 }{ 3 }$$ √2. Suppose $$\frac { 1 }{ 3 }$$ √2 were a rational number. Then 3 x $$\frac { 1 }{ 3 }$$ √2 = √2 would be rational (the product of two rational numbers is rational), which contradicts the fact that √2 is irrational. Hence $$\frac { \surd 2 }{ 3 }$$ = $$\frac { 1 }{ 3 }$$ √2 is irrational.

Question 22.
Solution: √3 = 1.732 and 2 = 2.000. A rational number between 1.732 and 2.000 can be 1.8 or 1.9. Hence, 1.8 or 1.9 is a required rational number.

Question 23.
Solution: $$\bar { 3.1416 }$$ is a non-terminating repeating decimal. It is a rational number.

Hope given RS Aggarwal Solutions Class 10 Chapter 1 Real Numbers Ex 1E are helpful to complete your math homework. If you have any doubts, please comment below. Learn Insta tries to provide online math tutoring for you.
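The 10x / 100x trick used in Questions 18 and 19 generalizes: a pure repeating decimal equals its repeating block over as many nines as the block has digits. The check below is an illustration added here, not part of the textbook solutions:

```python
from fractions import Fraction

# A pure repeating decimal 0.(d1...dn) equals block / (10^n - 1),
# e.g. 0.444... = 4/9 and 0.232323... = 23/99.
def repeating_to_fraction(block: str) -> Fraction:
    return Fraction(int(block), 10**len(block) - 1)

q18 = repeating_to_fraction("4")   # 4/9
q19 = repeating_to_fraction("23")  # 23/99

# Question 6: product of two numbers = HCF x LCM, so LCM = 1050 / 25 = 42
lcm_q6 = 1050 // 25
```

Fraction automatically reduces to lowest terms, matching the "simplest form" answers in the text.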
https://docs.zhinst.com/shfqa_user_manual/labone_overview.html
User Interface Overview

UI Nomenclature

This section provides an overview of the LabOne User Interface, its main elements and naming conventions. The LabOne User Interface is a browser-based UI provided as the primary interface to the SHFQA instrument. Multiple browser sessions can access the instrument simultaneously, and the user can have displays on multiple computer screens. In parallel with the UI, the instrument can be controlled and read out by custom programs written in any of the supported languages (e.g. LabVIEW, MATLAB, Python, C) connecting through the LabOne APIs.

Figure 1. LabOne User Interface (default view)

The LabOne User Interface automatically opens some tabs by default after a new UI session has been started. The UI is by default divided into two tab rows, each containing a tab structure that gives access to the different LabOne tools. Depending on display size and application, tab rows can be freely added and deleted with the control elements on the right-hand side of each tab bar. Similarly, individual tabs can be deleted or added by selecting app icons from the side bar on the left. A click on an icon adds the corresponding tab to the display; alternatively, the icon can be dragged and dropped into one of the tab rows. Moreover, tabs can be moved by drag-and-drop within a row or across rows. For brief descriptions and naming conventions of the most important UI items, see Table 1.

Table 1.
LabOne User Interface features (item name; position; description; contains):
- side bar: left-hand side of the UI; contains app icons for each of the available tabs, and a click on an icon adds or activates the corresponding tab in the active tab row; contains app icons.
- status bar: bottom of the UI; contains important status indicators, warning lamps, device and session information and access to the command log; contains status indicators.
- main area: center of the UI; accommodates all active tabs, and new rows can be added and removed by using the control elements in the top right corner of each tab row; contains tab rows, each consisting of a tab bar and the active tab area.
- tab area: inside of each tab; provides the active part of each tab consisting of settings, controls and measurement tools; contains sections, plots, sub-tabs, unit selections.

Further items are highlighted in Figure 2.

Figure 2. LabOne User Interface (more items)

Unique Set of Analysis Tools

All SHFQA instruments feature a comprehensive tool set for signal generation, sequence programming, qubit state analysis and spectroscopy applications. The app icons on the left side of the UI can be roughly divided into two categories: settings and tools.
• Settings-related tabs connect directly to the instrument hardware, allowing the user to control all the settings and instrument states.
• Tools-related tabs place a focus on the display and analysis of gathered measurement data.
There is no strict distinction between settings and tools; for example, the Sweeper will change certain digital Oscillator settings while performing a frequency sweep. Within the tools one can often further discriminate between time-domain and frequency-domain analysis. The following table gives an overview of all app icons. Note that the selection of app icons may depend on the upgrade options installed on a given instrument.

Table 2. Overview of app icons and short description:
- Config / Device: Provides instrument-specific settings.
- Files: Access settings and measurement data files on the host computer.
- QA Setup: Configure the Qubit Measurement Unit.
- DIO: Gives access to all controls relevant for the digital inputs and outputs, including the Ref/Trigger connectors.
- ZI Labs: Experimental settings and controls.

The following table provides a quick overview of the different status bar elements along with a short description.

Table 3. Status bar description:
- OVI (grey/yellow/red): Signal Input Overload. Red: present overload condition on the signal input, also shown by the red front panel LED. Yellow: an overload occurred in the past.
- OVO (grey/yellow/red): Signal Output Overload. Red: present overload condition on the signal output. Yellow: an overload occurred in the past.
- Command log (last command): Shows the last command. A different formatting (MATLAB, Python, ...) can be set in the Config tab. The log is also saved in [User]\Documents\Zurich Instruments\LabOne\WebServer\Log.
- Show Log: Show the command log history in a separate browser window.
- Errors: Display system errors in a separate browser tab.
- Device (devXXX): Indicates the device serial number.
- Identify Device
- MDS (grey/green/red/yellow): Multiple device synchronization indicator. Grey: nothing to synchronize, single device on the UI. Green: all devices on the UI are correctly synchronized. Yellow: MDS sync in progress or only a subset of the connected devices is synchronized. Red: devices not synchronized or error during MDS sync.
- REC (grey/red): A blinking red indicator shows ongoing data recording (related to global recording settings in the Config tab).
- CF (grey/yellow/red): Clock Failure. Red: present malfunction of the external 10 MHz reference oscillator. Yellow: a malfunction occurred in the past.
- COM (grey/yellow/red): Packet Loss. Red: present loss of data between the device and the host PC. Yellow: a loss occurred in the past.
- COM (grey/yellow/red): Sample Loss. Red: present loss of sample data between the device and the host PC. Yellow: a loss occurred in the past.
- C: Reset status flags; clears the current state of the status flags.
- Full Screen: Toggles the browser between full-screen and normal mode.

Plot Functionality

Several tools provide a graphical display of measurement data in the form of plots. These are multi-functional tools with zooming, panning and cursor capability. This section introduces some of the highlights for data acquisition and visualization.

Plot Area Elements

Plots consist of the plot area, the X range and the range controls. The X range (above the plot area) indicates which section of the wave is displayed by means of the blue zoom region indicators. The two ranges show the full scale of the plot, which does not change when the plot area displays a zoomed view. The two axes of the plot area, by contrast, do change when zoom is applied. The X range and Y range plot controls are described in Table 4.

Table 4. Plot control description:
- Axis scaling mode: Selects between automatic, full-scale and manual axis scaling.
- Axis mapping mode: Selects between linear, logarithmic and decibel axis mapping.
- Axis zoom in: Zooms the respective axis in by a factor of 2.
- Axis zoom out: Zooms the respective axis out by a factor of 2.
- Rescale axis to data: Rescales the foreground Y axis in the selected zoom area.
- Save figure: Generates a PNG, JPG or SVG of the plot area (or areas, for dual plots) in the local download folder.
- Save data: Generates a CSV file consisting of the displayed wave or histogram data (when the histogram math operation is enabled). Select full scale to save the complete wave. The save data function only saves one shot at a time (the last displayed wave).
- Cursor control: Cursors can be switched On/Off and set to be moved either independently or with one bound to the other.
- Provides a LabOne Net Link to use displayed wave data in tools like Excel, MATLAB, etc.

The mouse functionality inside a plot greatly simplifies and speeds up data viewing and navigation.

Table 5. Mouse functionality inside plots (name: action; effect; where performed):
- Panning: left-click on any location and move around; moves the waveforms; plot area.
- Zoom X axis: mouse wheel; zooms the X axis in and out; plot area.
- Zoom Y axis: shift + mouse wheel; zooms the Y axis in and out; plot area.
- Window zoom: shift + left-mouse area select; selects the area of the waveform to be zoomed in; plot area.
- Absolute jump of zoom area: left mouse click; moves the blue zoom range indicators; X and Y range, outside of the blue zoom range indicators.
- Absolute move of zoom area: left mouse drag-and-drop; moves the blue zoom range indicators; X and Y range, inside of the blue range indicators.
- Full Scale: double click; sets the X and Y axes to full scale; plot area.

Each plot area contains a legend that lists all the shown signals in their respective colors. The legend can be moved to any desired position by drag-and-drop. Plot data can be conveniently exported to other applications such as Excel or MATLAB by using LabOne's Net Link functionality; see LabOne Net Link for more information.

Cursors and Math

The plot area provides two X and two Y cursors, which appear as dashed lines inside the plot area. The four cursors are selected and moved individually by drag-and-drop on their blue handles. For each axis, there is a primary cursor indicating its absolute position and a secondary cursor indicating both its absolute position and its position relative to the primary cursor. Cursors have an absolute position which does not change upon pan or zoom events. In case a cursor position moves out of the plot area, the corresponding handle is displayed at the edge of the plot area. Unless the handle is moved, the cursor keeps its current position.
This functionality is very effective for measuring large deltas with high precision (as the absolute position of the other cursors does not move). The cursor data can also be used to define the input data for the mathematical operations performed on plotted data. This functionality is available in the Math sub-tab of each tool. Table 6 gives an overview of all the elements and their functionality. The chosen Signals and Operations are applied to the currently active trace only. Cursor data can be conveniently exported to other applications such as Excel or MATLAB by using LabOne's Net Link functionality; see LabOne Net Link for more information.

Table 6. Plot math description:
- Source Select: Select from a list of input sources for math operations.
  - Cursor Loc: Cursor coordinates as input data.
  - Cursor Area: Consider all data of the active trace inside the rectangle defined by the cursor positions as input for statistical functions (Min, Max, Avg, Std).
  - Tracking: Display the value of the active trace at the position of the horizontal-axis cursor X1 or X2.
  - Plot Area: Consider all data of the active trace currently displayed in the plot as input for statistical functions (Min, Max, Avg, Std).
  - Peak: Find positions and levels of up to 5 highest peaks in the data.
  - Trough: Find positions and levels of up to 5 lowest troughs in the data.
  - Histogram: Display a histogram of the active trace data within the x-axis range. The histogram is used as input to statistical functions (Avg, Std). Because of binning, the statistical functions typically yield different results than those under the selection Plot Area.
  - Resonance: Display a curve fitted to a resonance.
  - Linear Fit: Display a linear regression curve.
- Operation Select: Select from a list of mathematical operations to be performed on the selected source. The choice offered depends on the selected source.
- Cursor Loc: X1, X2, X2-X1, Y1, Y2, Y2-Y1, Y2/Y1. Cursor positions, their differences and their ratio.
- Cursor Area: Min, Max, Avg, Std. Minimum value, maximum value, average, and bias-corrected sample standard deviation for all samples between cursors X1 and X2. All values are shown in the plot as well.
- Tracking: Y(X1), Y(X2), ratioY, deltaY. Trace value at cursor positions X1 and X2, the ratio between these two Y values, and their difference.
- Plot Area: Min, Max, Pk Pk, Avg, Std. Minimum value, maximum value, difference between min and max, average, and bias-corrected sample standard deviation for all samples in the x-axis range.
- Peak: Pos, Level. Position and level of each peak, starting with the highest one. The values are also shown in the plot to identify the peaks.
- Histogram: Avg, Std, Bin Size (Plotter tab only: SNR, Norm Fit, Rice Fit). A histogram is generated from all samples within the x-axis range. The bin size is given by the resolution of the screen: 1 pixel = 1 bin. From this histogram, the average and the bias-corrected sample standard deviation are calculated, essentially assuming all data points in a bin lie in the center of their respective bin. When used in the Plotter tab with demodulator or boxcar signals, there are additionally the options of SNR estimation and of fitting statistical distributions to the histogram (normal and Rice distributions).
- Resonance: Q, BW, Center, Amp, Phase, Fit Error. A curve is fitted to a resonance. The fit boundaries are determined by the two cursors X1 and X2. Depending on the type of trace (Demod R or Demod Phase), either a Lorentzian or an inverse tangent function is fitted to the trace. Q is the quality factor of the fitted curve, BW is the 3 dB bandwidth (FWHM) of the fitted curve, and Center is the center frequency. Amp gives the amplitude (Demod R only), whereas Phase returns the phase at the center frequency of the resonance (Demod Phase only). The fit error is given by the normalized root-mean-square deviation.
It is normalized by the range of the measured data.
- Linear Fit: Intercept, Slope, R². A simple linear least-squares regression is performed using a QR decomposition routine. The fit boundaries are determined by the two cursors X1 and X2. The parameter outputs are the Y-axis intercept, the slope, and the R² value, the coefficient of determination that measures the goodness of fit.
- Add the selected math function to the result table below.
- Add all operations for the selected signal to the result table below.
- Clear Selected: Clear selected lines from the result table above.
- Clear All: Clear all lines from the result table above.
- Copy: Copy selected row(s) to the clipboard as CSV.
- Unit Prefix: Adds a suitable prefix to the SI units to allow for better readability and more significant digits displayed.
- CSV: Values of the current result table are saved as a text file into the download folder.
- Provides a LabOne Net Link to use the data in tools like Excel, MATLAB, etc.
- Help: Opens the LabOne User Interface help.

The standard deviation is calculated using the formula $\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(x_i-\bar{x})^2}$ for the unbiased estimator of the sample standard deviation, with a total of N samples $x_i$ and arithmetic average $\bar{x}$. The above formula is used as-is to calculate the standard deviation for the Histogram Plot Math tool. For large numbers of points (Cursor Area and Plot Area tools), the more accurate pairwise algorithm is used (Chan et al., "Algorithms for Computing the Sample Variance: Analysis and Recommendations", The American Statistician 37 (1983), 242-247).

Tree Selector

The Tree selector allows one to access streamed measurement data in a hierarchical structure by checking the boxes of the signals that should be displayed. The tree selector also supports data selection from multiple instruments, where available. Depending on the tool, the Tree selector is either displayed in a separate Tree sub-tab, or it is accessible by a click on the button.
Figure 3. Tree selector with Display drop-down menu

Vertical Axis Groups

Vertical Axis Groups are available as part of the plot functionality in many of the LabOne tools. Their purpose is to handle signals with different axis properties within the same plot. Signals with different units naturally have independent vertical scales even if they are displayed in the same plot. However, signals with the same unit should preferably share one scaling to enable quantitative comparison. To this end, the signals are assigned to a specific axis group. Each axis group has its own axis system. This default behavior can be changed by moving one or more signals into a new group.

The tick labels of only one axis group can be shown at once; this is the foreground axis group. To define the foreground group, click on one of the group names in the Vertical Axis Groups box. The current foreground group gets a high-contrast color.

- Select foreground group: Click on a signal name or group name inside the Vertical Axis Groups. If a group is empty, the selection is not performed.
- Split the default vertical axis group: Use drag-and-drop to move one signal onto the field [Drop signal here to add a new group]. This signal will now have its own axis system.
- Change the vertical axis group of a signal: Use drag-and-drop to move a signal from one group into another group that has the same unit.
- Group separation: In case a group hosts multiple signals and the unit of some of these signals changes, the group will be split into several groups according to the different new units.
- Remove a signal from a group: Drag-and-drop the signal to a place outside of the Vertical Axis Groups box.
- Remove a vertical axis group: A group is removed as soon as the last signal of a custom group is removed. Default groups remain active until they are explicitly removed by drag-and-drop. If a new signal is added that matches the group properties, it will be added again to this default group.
This ensures that settings of default groups are not lost unless explicitly removed.
- Rename a vertical axis group: New groups get a default name "Group of …". This name can be changed by double-clicking on the group name.
- Hide/show a signal: Uncheck/check the check box of the signal. This is faster than fetching a signal from a tree again.

Figure 4. Vertical Axis Group typical drag-and-drop moves.

Demodulator data are only available when using a Zurich Instruments lock-in amplifier from the UHF, HF, or MF series.

Table 7. Vertical Axis Groups description:
- Vertical Axis Group: Manages signal groups sharing a common vertical axis. Show or hide signals by changing the check box state. Split a group by dropping signals onto the field [Drop signal here to add new group]. Remove signals by dragging them onto a free area. Rename group names by editing the group label. Axis tick labels of the selected group are shown in the plot. Cursor elements of the active (selected) wave are added in the cursor math tab.
- Signal Type: Select signal types for the Vertical Axis Group.
- Channel (integer value): Selects a channel to be added.
- Adds a signal to the plot. The signal will be added to its default group; it may be moved by drag-and-drop to its own group. All signals within a group share a common y-axis. Select a group to bring its axis to the foreground and display its labels.
- Window Length (2 s to 12 h): Window memory depth. Values larger than 10 s may cause excessive memory consumption for signals with high sampling rates. Auto scale or pan causes a refresh of the display, for which only data within the defined window length are considered.

Trends

The Trends tool lets the user monitor the temporal evolution of signal features such as minimum and maximum values, or mean and standard deviation. This feature is available for the Monitor Scope and Sweeper tabs.
Using the Trends feature, one can monitor all the parameters obtained in the Math sub-tab of the corresponding tab. The Trends tool allows the user to analyze recorded data on a different, adjustable time scale, much longer than the fast acquisition of measured signals. It saves time by avoiding post-processing of recorded signals, and it facilitates fine-tuning of experimental parameters since it extracts and shows the measurement outcome in real time. To activate the Trends plot, enable the Trends button in the Control sub-tab of the corresponding main tab. Various signal features can be added to the plot from the Trends sub-tab in the Vertical Axis Groups. The vertical axis group of Trends has its own Run/Stop button and Length setting, independent from the main plot of the tab. Since the Math quantities are derived from the raw signals in the main plot, the Trends plot is only shown together with the main plot. The Trends feature is only available in the LabOne user interface and not at the API level.
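As a footnote to the Plot Math section above: the bias-corrected sample standard deviation and the pairwise variance algorithm of Chan et al. can be sketched in a few lines of Python. This is an illustrative implementation under my own naming, not LabOne's actual code:

```python
import math

def pairwise_var_sum(x):
    """Return (n, mean, M2), where M2 = sum((xi - mean)^2), computed by
    recursive pairwise combination for better numerical stability
    (Chan, Golub & LeVeque, 1983)."""
    n = len(x)
    if n == 1:
        return 1, x[0], 0.0
    mid = n // 2
    na, ma, m2a = pairwise_var_sum(x[:mid])
    nb, mb, m2b = pairwise_var_sum(x[mid:])
    delta = mb - ma
    ntot = na + nb
    mean = ma + delta * nb / ntot          # combined mean
    m2 = m2a + m2b + delta * delta * na * nb / ntot  # combined sum of squares
    return ntot, mean, m2

def sample_std(x):
    """Bias-corrected sample standard deviation, sqrt(M2 / (N - 1))."""
    n, _, m2 = pairwise_var_sum(list(x))
    return math.sqrt(m2 / (n - 1))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(sample_std(data))  # ≈ 2.1381, same as the textbook formula on this data
```

For small inputs the result matches the direct formula exactly; the pairwise combination pays off when summing millions of samples in floating point.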
https://kullabs.com/classes/subjects/units/lessons/notes/note-detail/9538
### Optical properties of metal

1. The optical properties of metals can be explained by considering the interaction of electrons with incident electromagnetic radiation.
2. Metals are opaque because incident radiation with frequencies in the visible range excites electrons into unoccupied energy states above the Fermi energy, as shown in the figure.
3. Total absorption occurs within a very thin outer layer, usually less than 0.1 $$\mu m$$; thus only metallic films thinner than 0.1 $$\mu$$m are capable of transmitting visible light.
4. In fact, metals are opaque to all electromagnetic radiation on the low end of the frequency spectrum, from radio waves through infrared, visible, and up to about the middle of the ultraviolet.
5. Metals are transparent to high-frequency radiation, i.e. X-rays and gamma rays.
6. All frequencies of visible light are absorbed by metals because of the continuously available empty electron states, which permit electron transitions.
7. Most of the absorbed radiation is re-emitted from the surface in the form of visible light of the same wavelength, which appears as reflected light.
8. Aluminium and silver are two metals that exhibit this reflective behavior.
9. Copper and gold appear red-orange and yellow, respectively, because some of the energy associated with light photons of short wavelength is not re-emitted as visible light.

Why are metals transparent to high-frequency X-ray and gamma-ray photons? The energy band structure of metals is such that empty, available electron states are adjacent to filled states. Electron excitation from filled to empty states is possible through absorption of electromagnetic radiation with frequencies within the visible region. The light energy is totally absorbed or reflected; none of it is transmitted, so metals appear opaque.

#### Optical properties of non-metals

Due to their electronic energy band structure, non-metallic materials may be transparent to visible light.
In addition to reflection and absorption, refraction and transmission phenomena also need to be considered.

#### Refraction

The change of speed of light as it travels from one medium to another is known as refraction. The refractive index (index of refraction) of a material is defined as the ratio of the velocity of light in vacuum to the velocity of light in the medium. It is denoted by 'n':

$$n=\frac{c}{v}\dotsm(1)$$

Here 'n' also measures the degree of bending, which depends upon the wavelength of light; this effect is illustrated by the dispersion of light when it passes through a glass prism.

In a medium, the speed of light is given by

$$v=\frac{1}{\sqrt{\epsilon\mu}}\dotsm(2)$$

where $$\epsilon=\epsilon_\circ \epsilon_r$$ is the permittivity ($$\epsilon_r$$ = dielectric constant, or relative permittivity) and $$\mu=\mu_\circ \mu_r$$ is the permeability of the material. Then

$$v=\frac{1}{\sqrt{\epsilon_r \epsilon_\circ \mu_\circ \mu_r}}=\frac{1}{\sqrt{\mu_\circ \epsilon_\circ}}\, \frac{1}{\sqrt{\mu_r \epsilon_r}}=\frac{c}{\sqrt{\mu_r \epsilon_r}}\dotsm(3)$$

From (1) and (3),

$$\frac{c}{n}=\frac{c}{\sqrt{\mu_r \epsilon_r}} \implies n=\sqrt{\mu_r \epsilon_r}\dotsm(4)$$

For non-magnetic materials (most non-metals), $$\mu_r=1$$, so

$$n=\sqrt{\epsilon_r}\dotsm(5)$$

Thus for a transparent material, the refractive index is the square root of the dielectric constant (relative permittivity). Equation (5) is valid when $$\epsilon_r$$ is measured at the frequency of the light, i.e. for a time-dependent electric field.

#### Snell's law of refraction

$$n\sin\theta=n’\sin\theta’ \quad\text{i.e.}\quad \frac{n}{n’}=\frac{\sin\theta’}{\sin\theta}\dotsm(6)$$

where n = refractive index of the first medium, n' = refractive index of the second medium, $$\theta$$ = angle of incidence and $$\theta’$$ = angle of refraction.

#### Reflection

The coefficient of reflection, or reflectivity, is defined as the ratio of the intensity of reflected light to the intensity of incident light. It is denoted by R:

$$R=\frac{I_R}{I_\circ}\dotsm(1)$$

where $$I_R$$ = intensity of reflected light and $$I_\circ$$ = intensity of incident light. For normal incidence, the reflectivity is
related to the indices of refraction as

$$R=\biggl(\frac{n_2-n_1}{n_2+n_1}\biggr)^2\dotsm(2)$$

where $$n_2$$ = refractive index of the second medium and $$n_1$$ = refractive index of the first medium. If the first medium is vacuum or air, then

$$R=\biggl(\frac{n_s-1}{n_s+1}\biggr)^2\dotsm(3)$$

where $$n_s$$ = refractive index of the second medium.

Some relations:

$$v=\frac{1}{\sqrt{\epsilon\mu}}$$  $$v=\frac{c}{\sqrt{\mu_r \epsilon_r}}$$  $$n=\sqrt{\epsilon_r}\;(\mu_r=1)$$  $$n\sin\theta=n’\sin\theta’$$  $$R=\biggl(\frac{n_s-1}{n_s+1}\biggr)^2$$
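Relations (5) and (2)-(3) above can be checked numerically. The glass value below is an illustrative assumption (ordinary soda-lime glass has n ≈ 1.5), not a value from these notes:

```python
import math

def refractive_index(eps_r: float, mu_r: float = 1.0) -> float:
    """n = sqrt(mu_r * eps_r); for non-magnetic materials mu_r = 1, so n = sqrt(eps_r)."""
    return math.sqrt(mu_r * eps_r)

def reflectivity(n1: float, n2: float) -> float:
    """Normal-incidence reflectivity R = ((n2 - n1) / (n2 + n1))**2."""
    return ((n2 - n1) / (n2 + n1)) ** 2

n_glass = 1.5  # assumed value for ordinary glass
print(refractive_index(2.25))          # 1.5: eps_r = n^2 = 2.25 is consistent
print(reflectivity(1.0, n_glass))      # ≈ 0.04, i.e. about 4% reflected at an air-glass interface
```

The ~4% figure per surface is why uncoated lenses lose a noticeable fraction of light.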
https://math.stackexchange.com/questions/1345635/limit-points-of-subset-of-real-numbers
# limit points of subset of real numbers

Let $$A=\{ \frac{\sqrt{m} -\sqrt{n}}{\sqrt{m}+\sqrt{n}} \;|\; m,n\in \Bbb{N} \}$$

I think that we must find sequences in $A$ and compute their limits. Let $a_m =\frac{\sqrt{ k ^2 m^2} -\sqrt{ m^2}}{\sqrt{k ^ 2 m^2}+\sqrt{ m^2}}=\frac{(k-1)m}{(k+1)m}$, where $k\in\Bbb{N}$. The limit of $a_m$ is $\frac{k-1}{k+1}$. Let $B=\{\frac{k-1}{k+1} \;|\; k\in\Bbb{N}\}$; then $B\subseteq A^\prime$ ($A^\prime$ is the set of limit points of $A$). The answer is the interval $[-1,1]$.

• I find sequence. – amir bahadory Jul 1 '15 at 10:39

Some ideas. First, for all $\;m,n\in\Bbb N\;$: $$-1=\frac{-\sqrt n}{\sqrt n}\le\frac{-\sqrt n}{\sqrt m+\sqrt n}\le\frac{\sqrt m-\sqrt n}{\sqrt m+\sqrt n}\le\frac{\sqrt m}{\sqrt m}=1$$ so any limit point of $\;A\;$ indeed has to be in $\;[-1,1]\;$. Now, if $\;\alpha\in[-1,1]\;$, take a peek at $$\frac{\sqrt m-\sqrt n}{\sqrt m+\sqrt n}-\alpha=\frac{\sqrt m(1-\alpha)-\sqrt n(1+\alpha)}{\sqrt m+\sqrt n}\le\frac{\sqrt m}{\sqrt n}(1-\alpha)$$ In order to make the last part above less than some predetermined $\;\epsilon >0\;$, it is then enough to take $$\sqrt\frac nm>\frac{1-\alpha}\epsilon$$ Hint: Note that $$\frac{\sqrt{m} -\sqrt{n}}{\sqrt{m}+\sqrt{n}}=1-\frac{2}{1+\sqrt{\frac{m}{n}}},$$ and that $\mathbb{Q}$ is dense in $\mathbb{R}$.
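The hint can be turned into a quick numerical check: inverting $f(m,n)=1-\frac{2}{1+\sqrt{m/n}}$ for a target $\alpha<1$ gives $\sqrt{m/n}=\frac{1+\alpha}{1-\alpha}$, so picking a large $n$ and rounding $m$ lands an element of $A$ near $\alpha$. A sketch in Python (function name and the choice $n=10^6$ are my own):

```python
from math import sqrt

def element_of_A_near(alpha: float, n: int = 10**6):
    """Pick m (with the fixed n) so that (sqrt(m)-sqrt(n))/(sqrt(m)+sqrt(n))
    is close to alpha. Requires alpha < 1: sqrt(m/n) = (1+alpha)/(1-alpha)."""
    t = (1 + alpha) / (1 - alpha)
    m = max(1, round(n * t * t))
    return m, (sqrt(m) - sqrt(n)) / (sqrt(m) + sqrt(n))

for a in (-0.9, 0.0, 0.5):
    m, val = element_of_A_near(a)
    print(a, m, val)
```

Since only the ratio m/n matters, the rounding error shrinks as n grows, which mirrors the density argument in the answer.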
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-2-section-2-4-signed-fractions-exercises-page-119/38
## Elementary Technical Mathematics $\displaystyle \frac{2}{9}$ $\displaystyle \left(\frac{-4}{9} \right)\div\left(-2 \right)=$ Apply the equivalent fractions rule, $\quad \displaystyle \frac{a}{-b}=\frac{-a}{b}=-\frac{a}{b}$ $\displaystyle \left(-\frac{4}{9} \right)\div\left(-2 \right)=$ Dividing two numbers with like signs: Divide their absolute values; the sign of the result is $"+"$. Also, write $2$ as $\displaystyle \frac{2}{1}.$ $=+\displaystyle \left(\frac{4}{9} \div \frac{2}{1} \right)$ Dividing with a fraction $\displaystyle \frac{a}{b}$ equals multiplying with the reciprocal, $\displaystyle \frac{b}{a}$. $=\displaystyle \frac{4}{9} \cdot \frac{1}{2}$ Multiply fractions: Reduce by the common factor, 2. $=\displaystyle \frac{2}{9} \cdot \frac{1}{1}$ Now, multiply the numerators and place the product over the product of the denominators. $=\displaystyle \frac{2}{9}$
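Python's `fractions` module follows the same rules worked through above (sign handling, multiplying by the reciprocal, reducing by common factors) and can confirm the result:

```python
from fractions import Fraction

# (-4/9) ÷ (-2): like signs give a positive quotient, and dividing by 2
# is the same as multiplying by its reciprocal, 1/2.
result = Fraction(-4, 9) / Fraction(-2, 1)
print(result)  # 2/9
```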
http://openstudy.com/updates/5091d512e4b0ad620537efee
## lilsis76: Graph each point in a polar coordinate system, then convert the given polar coordinates to rectangular coordinates. Can someone help me do this step by step so I understand, please? 1) a) (3, 2π/3)

1. ByteMe: [drawing] Notice that the point will have a negative x, positive y coordinate...
2. lilsis76: Why would it be negative?
3. ByteMe: Use... $$\large x=r\cos\theta$$ $$\large y=r\sin\theta$$
4. ByteMe: Because the point is in the second quadrant....
5. ByteMe: [drawing]
6. lilsis76: [drawing]
7. lilsis76: [drawing] 2π/3... I don't see how that can be at that angle.
8. ByteMe: Yes... r = 3; $$\theta=\frac{2\pi}{3}$$
9. lilsis76: Okay, I see that...
10. ByteMe: [drawing]
11. lilsis76: But shouldn't the 2π/3 go on the bottom, like at 270 degrees? Ugh... or do I use a calculator to solve?
12. lilsis76: Oh, lol, sorry, I got them mixed up.
13. lilsis76: And why is it to the left of the graph? Aren't they positive?
14. ByteMe: [drawing]
15. ByteMe: [drawing]
16. lilsis76: Okay, you see how you found the point on the left of the graph chart? Why is it to the left? Isn't that (−, +)? We have a (+, +).
17. lilsis76: Do you get what I mean? Because I see a positive point.
18. ByteMe: Oh... you're referring to the point $$\large (3, \frac{2\pi}{3})$$..... that point is represented in POLAR form, $$\large (r, \theta)$$, and not Cartesian form (x, y).
19. lilsis76: [drawing]
20. lilsis76: Okay, but why doesn't the 3 go to the right?
21. ByteMe: [drawing]
22. ByteMe: Here... click on this link... http://www.mathwords.com/p/polar_rectangular_conversion_formulas.htm
23. lilsis76: Okay then. So looking at the unit circle, that's the point, and like you said the 3 is the radius, so that's the reason it's to the left. It says now to convert the given polar coordinates to rectangular coordinates.
24. lilsis76: How would I start this one?
25. ByteMe: No... the reason it's on the left of the y-axis is that the angle theta, 2π/3, lies in the second quadrant.
26. ByteMe: Here... this is a better explanation of polar coordinates: http://www.mathsisfun.com/polar-cartesian-coordinates.html
27. lilsis76: Okay, I'll look at it.
28. ByteMe: So those formulas I gave you convert the given point from POLAR form to RECTANGULAR form...
29. lilsis76: Okay. Let me try on here, and you let me know if I do it wrong, please.
30. ByteMe: OK...
31. lilsis76: x = r cos θ → 3 cos(2π/6) → 3(1/2) → 3/2; y = r sin θ → 3 sin(2π/6) → 3(√3/2) → (3/2)√3
32. ByteMe: Why is the angle 2π/6??? I thought it was 2π/3???
33. lilsis76: AH! Sorry, haha, I was looking at a 6. Let me try again.
34. lilsis76: x = r cos θ → 3 cos(2π/3) → 3(1/2) → 3/2; y = r sin θ → 3 sin(2π/3) → 3(√3/2) → (3/2)√3
35. ByteMe: Careful.... $$\large \cos(\frac{2\pi}{3})=-\frac{1}{2}$$
36. ByteMe: [drawing]
37. lilsis76: Oops, thanks. Okay, so then x = r cos θ → 3 cos(2π/3) → 3(−1/2) → −3/2; y = r sin θ → 3 sin(2π/3) → 3(√3/2) → (3/2)√3
38. ByteMe: Yes... so the x-y coordinate for the point is $$\large (-\frac{3}{2},\frac{3\sqrt3}{2})$$
39. lilsis76: [drawing] Then the coordinate (−3/2, 3√3/2) would be in the same area, right?
40. ByteMe: It is the SAME point.... only expressed in Cartesian form.
41. lilsis76: Oh... okay, let me try the other problems and I'll be back online if I need help. THANK YOU!!!
42. ByteMe: [drawing]
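The conversion worked out in the thread can be checked numerically. A sketch (the function name is mine, not from the thread):

```python
import math

def polar_to_rect(r, theta):
    """Convert polar coordinates (r, theta) to rectangular (x, y)."""
    return r * math.cos(theta), r * math.sin(theta)

x, y = polar_to_rect(3, 2 * math.pi / 3)
print(x, y)  # approximately -1.5 and 2.598, i.e. (-3/2, 3*sqrt(3)/2)
```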
https://gmatclub.com/forum/a-line-is-graphed-on-a-coordinate-plane-how-many-times-less-is-the-di-192914.html
# A line is graphed on a coordinate plane. How many times less is the distance between the y-intercept and the x-axis than the distance between the x-intercept and the y-axis?

**Bunuel (Math Expert), 09 Feb 2015:**

A line is graphed on a coordinate plane. How many times less is the distance between the y-intercept and the x-axis than the distance between the x-intercept and the y-axis?

(1) The slope of the line is −9/13.
(2) The y-intercept is located at (0, 26).

Kudos for a correct solution.

**sterling19, 10 Feb 2015:**

To solve this problem we need the x- and y-intercepts. We can find these intercepts from the equation of the line, $$y = mx + b$$. So if we can find the line's slope (m) and the y-intercept (b), we will have sufficient information. Each of the two statements gives us one piece of the equation, so we need to take them together. The correct answer is C.
**chetan2u (Math Expert), 10 Feb 2015:**

sterling19 wrote: "Each of the two statements gives us one piece of the equation, so we need to take them together. The correct answer is C."

Hi sterling, I think you have generally answered the question correctly, but the conclusion may be wrong even though the approach is correct. I think the answer should be A: the slope by itself is sufficient in this case.

**chetan2u, 10 Feb 2015:**

Answer A. The question gives the following info. Let the y-intercept be (0, y) and the x-intercept (x, 0).

The distance between the y-intercept and the x-axis is y − 0 = y, and the distance between the x-intercept and the y-axis is x − 0 = x.

The question asks, "How many times less is the distance between the y-intercept and the x-axis than the distance between the x-intercept and the y-axis?" It basically asks for (x − y)/x = 1 − y/x, which is determined by the slope alone (for this line the slope is −y/x).

Now take the statements:

(1) Statement one gives us the slope: sufficient, as the slope is all we need.
(2) Statement two tells us the y-intercept; the x-intercept is still required:
insufficient.

Answer A.

**Manager (joined 27 Oct 2013), 10 Feb 2015:**

Here we go. Basically, the question is asking for the relationship between the x-intercept and the y-intercept. Let the equation of the line be

y = mx + c ---- (1)

St1: The slope of the line is −9/13, so m = −9/13. Substituting into (1) gives y = (−9/13)x + c.

To find the y-intercept, put x = 0: y = c ---- (2)

To find the x-intercept, put y = 0: 0 = (−9/13)x + c, so x = (13/9)c. From (2), x = (13/9)y.

Hence St1 is sufficient.

St2: The y-intercept is located at (0, 26). From (1), y = 26 and x = −26/m. Clearly insufficient.

Option A is correct.

**Manager (joined 17 Dec 2013), 10 Feb 2015:**

Since we need to know HOW MANY TIMES LESS the distance is, we do not need to find a value for the distance, only the number that multiplies the y-intercept's distance from the x-axis. (2) does not provide the x-intercept: insufficient. (1) does provide the slope. If, for example, the y-intercept is 9, we know the x-intercept is at x = 13; if the y-intercept is 4.5, the x-intercept is at 6.5. It's the same multiple.
Since all parallels of THIS exact line share the same multiple, the parallel through the origin also has the same multiple. So A is sufficient.

**Bunuel (Math Expert), 16 Feb 2015:**

VERITAS PREP OFFICIAL SOLUTION

C. This problem is a classic "Why Are You Here?" data sufficiency problem with respect to Statement 2. The slope of a line (with one exception) determines the ratio of x-intercept to y-intercept. Say the line were y = (−9/13)x + 9. The y-intercept would be 9 and the x-intercept would be 13; if you double the y-intercept just to see the ratio pattern, the ratio stays exactly the same: y = (−9/13)x + 18 leaves a y-intercept of 18 and an x-intercept of 26, again a y-int : x-int ratio of 9:13. But here is why you need to consider what Statement 2 is telling you. It is clearly not sufficient on its own, but it does tell you that the line does not pass through the origin (0, 0). If the line passed through the origin, the ratio of intercepts would be 0:0, since both intercept points would be exactly the point (0, 0). That is the only point on the coordinate plane for which the ratio of x-intercept to y-intercept is not defined by the slope.

**Manager (joined 09 Aug 2016), 26 Dec 2016:**

Bunuel wrote: "...The slope of a line (with one exception) will provide the ratio of x-intercept to y-intercept...
...If it were to pass through the origin, the ratio of intercepts would be 0:0."

Can somebody clarify the two statements above? Firstly, there is no ratio 0:0 in the world of maths, because 0/0 cannot be defined, so 0:0 must mean something else. Also, for the first statement, say we have the line y = 2x + 10. The x-intercept is derived from 2x = −10, so x = −5; the y-intercept is y = 10. Then the ratio x : y = −5/10 = −1/2, which is not equal to 2:1.

**deucebigalow, 27 Sep 2017:**

chetan2u wrote: [quotes the answer-A reasoning above]
I am confused about the OA. Can an expert have a look at this question? I also thought the answer was A. Thanks.

**Bunuel (Math Expert), 27 Sep 2017:**

Check here: https://gmatclub.com/forum/a-line-is-gr ... l#p1485415
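The intercept-ratio argument in the official solution can be checked numerically: for a fixed slope, the ratio is the same for every nonzero y-intercept. A sketch (the function and variable names are mine, not from the thread):

```python
def intercepts(m, b):
    """x- and y-intercepts of the line y = m*x + b (requires m != 0)."""
    return -b / m, b

m = -9 / 13
for b in (9, 18, 26):
    x_int, y_int = intercepts(m, b)
    # ratio of x-intercept distance to y-intercept distance
    print(abs(x_int) / abs(y_int))  # 13/9 ≈ 1.444 each time

# The one exception: b = 0 puts both intercepts at the origin, so the
# "ratio" 0:0 is undefined and the slope no longer determines it.
```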
https://www.greencarcongress.com/2013/04/audi-20130411.html
## EU validates Audi LED headlight technology as fuel saving

##### 11 April 2013

The EU Commission has measured the fuel savings achieved by LED headlights from Audi, testing the low-beam headlights, high-beam headlights and license plate light in dynamometer testing. In the ten NEDC cycles that the Audi A6 ran through, CO2 savings were found to be above one gram per km (1.61 g/mile). As a result, the EU Commission has officially identified the LED headlights as an innovative technology for reducing CO2 emissions. Audi is the first car manufacturer to be certified for this technology.

Conventional halogen units consume more than 135 watts of power in low-beam mode. By comparison, LED headlights from Audi operate with significantly better energy efficiency: the low-beam lights consume only around 80 watts.

LED daytime running lights made their debut in the Audi A8 W12 back in 2004. Then, in 2008, the R8 sports car became the world's first car to feature full-LED headlights. Today, this high-end lighting system is available in five model series: the R8, A8, A6, A7 Sportback and A3.

Audi designs the LED headlights very differently for different models. On the A8, for example, 76 light-emitting diodes are used per unit. On the Audi A3, 19 LEDs operate in each headlight to generate the low-beam and high-beam lights; they are supplemented by a module for the all-weather and cornering lights as well as a light guide for the daytime running lights, side lights and turn signals.

Besides improving energy efficiency, LED headlights also offer safety and comfort benefits. With a color temperature of around 5,500 Kelvin, their light resembles daylight and hardly causes any eye fatigue. The LEDs are maintenance-free and designed to last the life of the car.

### Comments

In the very near future, mass-produced 200+ lm/W LEDs will more than double the efficacy of current 80 lm/W LEDs and reduce energy consumption below 40 watts.
Mid-term, 300+ lm/W LEDs will further reduce energy consumption. LEDs can be timed to reduce energy consumption further. Ultra-high-efficiency on-board heat pumps could reduce energy consumption even more for all HEVs, PHEVs and BEVs, and increase e-range.

New cars are a good application for LEDs: a low-voltage DC power source, a need for a beam rather than diffuse light, and the ability to incorporate good thermal control. There is also less compromise, as there is no need to be compatible with old designs (i.e. bulbs), given that the light will last the lifetime of the vehicle. It should also be one of the first places where LEDs make financial sense, as the cost of electric power is higher than in the home. But there is just not that much fuel to be saved via more efficient lighting, so touting the 1 gCO2/km seems pedantic. Is the point that other manufacturers' LED solutions are there for fashion but don't achieve the possible efficiency gains?

EVs have been very slow to adopt LED lighting (as part of the ongoing big-oil conspiracy by GM, Toyota, and Fisker, well documented at various times here, by the demented). Truth is, they are of insignificant worth for adding EV range, and obviously less useful for ICE vehicles, much less a V10, 525 bhp, $150k "car for the masses". BUT, they should definitely use them on EVs; the more green eyewash the better.

Anyone who has ever built a low-weight bicycle knows that you should never dismiss any weight saving out of hand, no matter how small: 5 grams on a shifter paddle, 12 grams on a rim, 4 grams on a sprocket, 0.5 grams on a spoke, and so on. All these grams are unimportant on their own, but together they add up to kilograms. Now apply this wisdom to building a high-efficiency EV and you'll see the same exercise of finding 50-W-here and 25-W-there savings that add up to something significant.
TT, I expected you to have learned by now that small, incremental changes drive 99% of technological progress, not Hollywood-style silver bullets or *magical* technological breakthroughs.

Too many posters forget that incandescent light bulbs have a very low efficacy of 15 lm/W, while LEDs have already reached 231 lm/W and will reach 300 lm/W (20X the efficacy of incandescents) by 2015/2020. LEDs with 360-degree beam angles exist. With their extra-long life (30,000 to 100,000 hours), LED lights will not have to be changed for the life of the vehicle; vehicle LEDs will still have good trade-in value after 20+ years of use. Changing a single incandescent light in a car can cost xx$. That's why so many users drive with one or more incandescent head/rear lights out.

The comments to this entry are closed.
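The ~1 g CO2/km figure from the dynamometer tests can be sanity-checked with a back-of-the-envelope estimate of the fuel cost of the extra alternator load. Every constant below except the two lamp powers is an assumption of mine, not a figure from the article:

```python
# Rough order-of-magnitude check; only the lamp powers come from the article.
power_saving_w = 135 - 80     # halogen low beam vs. LED low beam (article)
avg_speed_kmh = 33.6          # assumed cycle-average speed
alternator_eff = 0.55         # assumed alternator efficiency
engine_eff = 0.30             # assumed engine thermal efficiency
co2_g_per_mj_fuel = 73.0      # assumed CO2 intensity of gasoline
usage_factor = 0.4            # assumed fraction of driving with low beams on

seconds_per_km = 3600 / avg_speed_kmh
electrical_j_per_km = power_saving_w * seconds_per_km * usage_factor
fuel_mj_per_km = electrical_j_per_km / (alternator_eff * engine_eff) / 1e6
co2_g_per_km = fuel_mj_per_km * co2_g_per_mj_fuel
print(round(co2_g_per_km, 2))  # roughly 1 g/km, same order as the EU result
```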
https://math.stackexchange.com/questions/3804291/show-that-there-exists-x-0-ina-b-such-that-fx-0-frac1nfx-1fx-2
# Show that there exists $x_0\in(a,b)$ such that $f(x_0)=\frac{1}{n}(f(x_1)+f(x_2)+\cdots+f(x_n)).$

Question: Suppose that $$f:[a,b]\to\mathbb{R}$$ is continuous. Let $$x_1,x_2,\cdots, x_n$$ be any $$n$$ points in $$(a,b).$$ Show that there exists $$x_0\in(a,b)$$ such that $$f(x_0)=\frac{1}{n}(f(x_1)+f(x_2)+\cdots+f(x_n)).$$

Solution: Let $$g:[a,b]\to\mathbb{R}$$ be such that $$g(x)=nf(x)-\sum_{k=1}^nf(x_k), \forall x\in[a,b].$$ Observe that to prove the statement of the problem it is enough to show that $$g(x_0)=0$$ for some $$x_0\in(a,b)$$. Now note that by the Pigeon Hole Principle (in the form: some value is at most the average and some value is at least the average) there exist indices $$1\le i,j\le n$$ such that $$f(x_i)\le \frac{1}{n}\sum_{k=1}^nf(x_k)\le f(x_j)\\\implies nf(x_i)\le \sum_{k=1}^nf(x_k)\le nf(x_j).$$ Thus, $$g(x_i)=nf(x_i)-\sum_{k=1}^nf(x_k)\le 0$$ and $$g(x_j)=nf(x_j)-\sum_{k=1}^nf(x_k)\ge 0.$$ If $$g(x_i)=0$$ or $$g(x_j)=0$$, then we are done, so assume that $$g(x_i)<0$$ and $$g(x_j)>0$$. Since $$f$$ is continuous on $$[a,b]$$, so is $$g$$. Therefore, by the IVT there exists $$x_0\in(x_i,x_j)$$ or $$x_0\in(x_j,x_i)$$ such that $$g(x_0)=0$$. This completes the proof.

Is this solution correct and rigorous enough, and is there any other way to solve the problem?

• What is the PHP? Aug 26, 2020 at 19:22
• @MartinR, it's the "Pigeon Hole Principle". Sorry, I will expand that term in a bit. Aug 26, 2020 at 19:24

Your proof looks fine to me. There is no need, however, to introduce the function $$g$$. You know that $$f(x_i)\le \frac{1}{n}\sum_{k=1}^nf(x_k)\le f(x_j)$$ for some indices $$i, j$$, so you can just apply the intermediate value theorem to $$f$$ on the interval $$I = [\min(x_i, x_j), \max(x_i, x_j)]$$ and conclude that $$\frac{1}{n}\sum_{k=1}^nf(x_k) = f(x)$$ for some $$x \in I$$.
Instead of using the pigeonhole principle, you can also apply the extreme value theorem to $$f$$ on the interval $$J= [\min_k x_k, \max_k x_k] \subset (a, b)$$: with $$m = \min_J f(x)$$ and $$M = \max_J f(x)$$ we have $$m\le \frac{1}{n}\sum_{k=1}^nf(x_k)\le M,$$ so the intermediate value theorem applied to $$f$$ on $$J$$ gives the desired point.

Given a continuous $$f(x)$$, an iterated application of the Intermediate Value Theorem gives
$$\begin{aligned} &\exists x_{1,2} \in [x_1, x_2]: f(x_{1,2}) = t\,f(x_1) + (1-t)f(x_2), \quad 0 \le t \le 1,\\ &\exists x_{2,3} \in [x_2, x_3]: f(x_{2,3}) = u\,f(x_2) + (1-u)f(x_3), \quad 0 \le u \le 1, \end{aligned}$$
which expresses the possibility of finding a point realizing the weighted mean within each interval. Putting $$t=2/3, \, u=1/3$$, we can write
$$\begin{aligned} &\exists x_{1,2} \in [x_1, x_2]: f(x_{1,2}) = \tfrac{2}{3}\,f(x_1) + \tfrac{1}{3}f(x_2),\\ &\exists x_{2,3} \in [x_2, x_3]: f(x_{2,3}) = \tfrac{1}{3}\,f(x_2) + \tfrac{2}{3}f(x_3),\\ &\exists x_{1,3} \in [x_1, x_2] \cup [x_2, x_3]: f(x_{1,3}) = \tfrac{1}{2}\,f(x_{1,2}) + \tfrac{1}{2}f(x_{2,3}) = \frac{f(x_1) + f(x_2) + f(x_3)}{3}, \end{aligned}$$
and the extension to $$n$$ points is clear.

Pick $$i$$ with
$$f(x_i) \le f(x_k) \quad \text{for all } k = 1, \ldots, n. \tag{1}$$
Pick $$j$$ with
$$f(x_j) \ge f(x_k) \quad \text{for all } k = 1, \ldots, n. \tag{2}$$
If $$i = j$$, then all the values $$f(x_k)$$ are equal, and $$x_0 = x_i$$ solves the problem. Consider the case $$i < j$$; the $$i > j$$ case is almost identical. By equation (1), we have $$n f(x_i) \le \sum_k f(x_k).$$ By equation (2), similarly $$n f(x_j) \ge \sum_k f(x_k).$$ Then by the Intermediate Value Theorem, there is an $$x_0 \in [x_i, x_j]$$ such that $$f(x_0) = \frac{1}{n} \sum_k f(x_k).$$
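The existence claim can also be illustrated numerically: run bisection on $$g(x) = f(x) - \text{average}$$ between a point where $$f$$ is smallest and one where it is largest. A sketch for a concrete continuous $$f$$ (the function names are mine):

```python
import math

def mean_point(f, pts):
    """Find x0 with f(x0) = average of f over pts, via bisection.

    Assumes f is continuous; lo/hi bracket the target by construction.
    """
    target = sum(f(x) for x in pts) / len(pts)
    lo = min(pts, key=f)  # f(lo) <= target
    hi = max(pts, key=f)  # f(hi) >= target
    g = lambda x: f(x) - target
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:  # sign change kept between lo and mid
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x0 = mean_point(lambda x: x * x, [1.0, 2.0, 3.0])
print(x0)  # ~2.1602, i.e. sqrt(14/3), since (1 + 4 + 9)/3 = 14/3
```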
https://optimization-online.org/2004/07/
## On cost matrices with two and three distinct values of Hamiltonian paths and cycles

A polynomially testable characterization of cost matrices associated with a complete digraph on $n$ nodes such that all the Hamiltonian cycles (tours) have the same cost is well known. Tarasov~\cite{TARA81} obtained a characterization of cost matrices where tour costs take two distinct values. We provide a simple alternative characterization of such cost matrices that can be … Read more

## Symmetry Points of a Convex Set: Basic Properties and Computational Complexity

Given a convex body S and a point x \in S, let sym(x,S) denote the symmetry value of x in S: sym(x,S) := max{t : x + t(x − y) \in S for every y \in S}, which essentially measures how symmetric S is about the point x, and define sym(S) := max{sym(x,S) : x \in S}. … Read more

## Subspace trust-region methods for large bound-constrained nonlinear equations

Trust-region methods for solving large bound-constrained nonlinear systems are considered. They allow for spherical or elliptical trust regions, where the search for an approximate solution is restricted to a low-dimensional space. A general formulation for these methods is introduced, and global and superlinear/quadratic convergence is shown under standard assumptions. Viable approaches for implementation in conjunction … Read more

## Recovering Risk-Neutral Probability Density Functions from Options Prices using Cubic Splines

We present a new approach to estimate the risk-neutral probability density function (pdf) of the future prices of an underlying asset from the prices of options written on the asset. The estimation is carried out in the space of cubic spline functions, yielding appropriate smoothness. The resulting optimization problem, used to invert the data and … Read more

## Time Offset Optimization in Digital Broadcasting

We investigate a planning problem arising in the forthcoming Digital Video Broadcasting (DVB-T) system.
Unlike current analog systems, the DVB-T standard allows a mitigation of the interference by means of a suitable synchronization of the received signals. The problem we describe in this paper is that of finding a time offset to impose on the … Read more

## A primal-dual nonlinear rescaling method with dynamic scaling parameter update

In this paper we develop a general primal-dual nonlinear rescaling method with dynamic scaling parameter update (PDNRD) for convex optimization. We prove global convergence and establish a 1.5-Q-superlinear rate of convergence under the standard second-order optimality conditions. The PDNRD was numerically implemented and tested on a number of nonlinear problems from the COPS and CUTE sets. … Read more

## Convergence analysis of a primal-dual interior-point method for nonlinear programming

We analyze a primal-dual interior-point method for nonlinear programming. We prove global convergence for a wide class of problems under standard assumptions on the problem.

Citation: Technical Report ORFE-04-07, Department of ORFE, Princeton University, Princeton, NJ 08544

Article Download View: Convergence analysis of a primal-dual interior-point method for nonlinear programming

## Numerical experiments with an interior-exterior point method for nonlinear programming

The paper presents an algorithm for solving nonlinear programming problems. The algorithm is based on the combination of interior and exterior point methods. The latter is also known as the primal-dual nonlinear rescaling method. The paper shows that in certain cases, when the interior point method fails to achieve the solution with the high level … Read more

## On exploiting structure induced when modelling an intersection of cones in conic optimization

Conic optimization is the problem of optimizing a linear function over an intersection of an affine linear manifold with the Cartesian product of convex cones.
However, many real-world conic models involve an intersection rather than the product of two or more cones. It is easy to deal with an intersection of one or more … Read more

## Steered sequential projections for the inconsistent convex feasibility problem

We study a steered sequential gradient algorithm which minimizes the sum of convex functions by proceeding cyclically in the directions of the negative gradients of the functions, using steered step sizes. This algorithm is applied to the convex feasibility problem by minimizing a proximity function which measures the sum of the Bregman distances to the … Read more
https://calendar.math.cas.cz/logic-seminar?date_filter%5Bvalue%5D%5Byear%5D=0&page=3
# Logic seminar

Usually takes place each Monday at 16:00 in IM, rear building, ground floor. Chair: Pavel Pudlak, Neil Thapen, Jan Krajíček. More information on the old seminar web page. The programme is announced via the mailing list.

### A Walk with Goodstein

##### David Fernández-Duque (Ghent University and Czech Academy of Sciences)

Monday, 21 March 2022 - 16:00 to 17:30

The classical Goodstein process is based on writing numbers in "normal form" in terms of addition and exponentiation with some base k. By iteratively changing base and subtracting one, one obtains very long sequences of natural numbers which eventually terminate. The latter is proven by comparing base-k normal forms with Cantor normal forms for ordinals, and in fact this proof relies heavily on the notion of normal form. The question then naturally arises: what if we write natural numbers in an arbitrary fashion, not necessarily using normal forms? What if we allow not only addition and exponentiation, but also multiplication for writing numbers? A "Goodstein walk" is any sequence obtained by following the standard Goodstein process but arbitrarily choosing how each element of the sequence is represented. As it turns out, any Goodstein walk is finite, and indeed the longest possible Goodstein walk is given by the standard normal forms. In this talk we sketch a proof of this fact... more

### Nisan-Wigderson generators in Proof Complexity: New lower bounds

##### Erfan Khaniki (Institute of Mathematics)

Monday, 14 March 2022 - 16:00 to 17:30

A map g: {0,1}^n --> {0,1}^m (m > n) is a hard proof complexity generator for a proof system P iff for every string b in {0,1}^m \ Rng(g), the formula \tau_b(g), naturally expressing b \not\in Rng(g), requires superpolynomial size P-proofs. One of the well-studied maps in the theory of proof complexity generators is the Nisan-Wigderson generator.
Razborov (Annals of Mathematics 2015) conjectured that if A is a suitable matrix and f is an NP \cap CoNP function hard-on-average for P/poly, then NW_{f,A} is a hard proof complexity generator for Extended Frege. In this talk, we prove a form of Razborov's conjecture for AC0-Frege: we show that for any symmetric NP \cap CoNP function f that is exponentially hard for depth two AC0 circuits, NW_{f,A} is a hard proof complexity generator for AC0-Frege in a natural setting.

### On Semi-Algebraic Proofs and Algorithms

##### Robert Robere (McGill University)

Monday, 7 March 2022 - 16:00 to 17:30

We discuss a new characterization of the Sherali-Adams proof system, a standard propositional proof system considered in both proof complexity and combinatorial optimization, showing that there is a degree-d Sherali-Adams refutation of an unsatisfiable CNF formula F if and only if there is an ε > 0 and a degree-d conical junta J such that viol(x) − ε = J, where viol(x) counts the number of falsified clauses of F on an input x. This result implies that the linear separation complexity, a complexity measure recently studied by Hrubes (and independently by de Oliveira Oliveira and Pudlak under the name of weak monotone linear programming gates), monotone feasibly interpolates Sherali-Adams proofs, sharpening a feasible interpolation result of Hakoniemi. On the lower-bound side, we prove a separation between the conical junta degree of viol(x) - 1 and Resolution width; since Sherali-Adams can simulate Resolution, this also separates the conical junta degree of viol(x) - 1 and viol(x) -... more

### Resolution, Heavy Width and Pseudorandom Generators

##### Dmitry Sokolov (St Petersburg State University and PDMI RAS)

Monday, 21 February 2022 - 16:00 to 17:30

Following the paper of Alekhnovich, Ben-Sasson, Razborov, Wigderson, we call a pseudorandom generator hard for a propositional proof system P if P cannot efficiently prove the (properly encoded) statement that b is outside of the image, for any string b \in {0,1}^m. In ABRW04 the authors suggested the "functional encoding" of the considered statement for the Nisan-Wigderson generator that allows the introduction of "local" extension variables, and gave a lower bound on the length of Resolution proofs if the number of extension variables is bounded by n^2 (where n is the number of inputs of the PRG). In this talk, we discuss a "heavy width" measure for Resolution that allows us to show a lower bound on the length of Resolution proofs of the considered statement for the Nisan-Wigderson generator with a superpolynomial number of local extension variables. It is a solution to one of the open problems from ABRW04.
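The classical Goodstein process described in the first talk above is easy to experiment with. Below is a minimal stdlib-only sketch using standard hereditary base-k normal forms (the classical process only, not the "Goodstein walk" generalization from the talk; function names are ours):

```python
def bump(n, b):
    """Write n in hereditary base-b normal form and replace every b by b+1."""
    result, e = 0, 0
    while n:
        n, d = divmod(n, b)
        if d:
            # exponents are themselves rewritten in hereditary form
            result += d * (b + 1) ** bump(e, b)
        e += 1
    return result

def goodstein(m, steps):
    """First `steps` steps of the classical Goodstein sequence starting at m."""
    seq, b = [m], 2
    for _ in range(steps):
        if seq[-1] == 0:
            break
        seq.append(bump(seq[-1], b) - 1)
        b += 1
    return seq

print(goodstein(3, 6))  # [3, 3, 3, 2, 1, 0]
print(goodstein(4, 3))  # [4, 26, 41, 60]
```

Starting at 3 the sequence terminates almost immediately, while starting at 4 it already grows for an enormously long time before the eventual descent guaranteed by the ordinal argument.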
https://help.altair.com/hwsolvers/os/topics/solvers/os/faq_data_management_r.htm
# Data Management

## How much memory should I use for the checkrun?

The memory necessary for a check run is automatically allocated.

## What is the maximum amount of memory that can be used by OptiStruct?

Refer to Memory Handling.

## Can OptiStruct use more memory than I actually have installed on my system (more than installed RAM)?

Refer to Memory Handling.

## What if OptiStruct fails to solve my job, reporting that there is not enough memory available, but my computer has much more memory than required?

Refer to Memory Handling.

## Why did OptiStruct start to run, but fail after some time with a memory allocation error when I used the -fixlen options?

Refer to Memory Handling.

## I have a Windows machine with 3GB (or more) RAM. Can OptiStruct use it? Does it make sense to have more than 2GB RAM installed?

Refer to Memory Handling.

## Will the PC page file size affect the elapsed time for an OptiStruct job?

Yes; the page file size recommended by the vendor is not necessarily the optimal size to run big jobs in OptiStruct. Moreover, if more RAM is installed in a machine, it can process larger page files more efficiently. Changing the page file size is usually recommended to decrease the elapsed processing time. The elapsed time does not necessarily increase with an increased page file. If too many applications are open, or if RAM-intensive applications are being used, OptiStruct could be pushed into the swap space, slowing the processing time considerably. The preferable page file size is: page file size = 2*RAM + 32 MB. For example, if the available RAM is 512 MB, the page file size would be 2*512 + 32 = 1056 MB. The maximum page file size is 1500 MB for 512 MB of RAM.

The page file size can be adjusted as follows:

• Windows 7: Start > Control Panel > System > Advanced system settings > Settings > Change (on the Advanced tab, under Virtual memory)

Furthermore, it may be beneficial to break up the page file between two disks on a multiprocessor machine (between drives C and D, for example). Another solution is to put the page file on a SCSI drive rather than an IDE drive. SCSI drives are faster, and will speed up the processing.

## OptiStruct is not recognized as a command or batch file when launched from the Solver Panel within HyperMesh. What do I do?

There are three things you should check in this scenario:

1. Verify that the optistruct.bat file in the <install_directory>\hwdesktop\hm\bin directory is not zero bytes. If it is, replace it with a copy of the file optistruct.bat from the <install_directory>\hwsolvers\optistruct\bin directory.
2. Verify the existence of the system environment variable ComSpec; its value should be C:\WINNT\system32\cmd.exe or equivalent.
3. Verify the existence of the system environment variable PATH; its value should include C:\WINNT\system32\ or equivalent.
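The sizing rule quoted above is simple arithmetic; a one-line helper makes it explicit (the formula comes from this FAQ, the function name is ours):

```python
def recommended_page_file_mb(ram_mb):
    """Preferable page file size per the FAQ rule: 2 * RAM + 32 MB."""
    return 2 * ram_mb + 32

print(recommended_page_file_mb(512))  # 1056
```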
https://docs.galpy.org/en/v1.7.1/reference/potentialmasses.html
galpy.potential.mass

galpy.potential.mass(Pot, R, z=None, t=0.0, forceint=False)

NAME: mass

PURPOSE: convenience function to evaluate a possible sum of masses

INPUT:

Pot - potential or list of potentials (dissipative forces in such a list are ignored)

R - cylindrical Galactocentric distance (can be a Quantity)

z= (None) vertical height up to which to integrate (can be a Quantity)

t - time (can be a Quantity)

forceint= if True, calculate the mass through integration of the density, even if an explicit expression for the mass exists

OUTPUT:

Mass enclosed within the spherical shell with radius R if z is None; otherwise the mass in the slab < R and between -z and z

HISTORY:

2021-02-07 - Written - Bovy (UofT)

2021-03-15 - Changed to integrate to a spherical shell if z is None, a slab otherwise - Bovy (UofT)
http://experiment-ufa.ru/derivative_of_Sin(4x-1)
# Derivative of sin(4x-1)

## Derivative of sin(4x-1): a simple step-by-step solution, to learn from. Simple and easy to understand, so don't hesitate to use it as a solution for your homework. If it's not what you are looking for, type your own function into the derivative calculator and let us solve it.

## Derivative of sin(4x-1):

(sin(4x-1))'
= cos(4x-1) * (4x-1)'
= cos(4x-1) * ((4x)' + (-1)')
= cos(4x-1) * (4*(x)' + 0)
= cos(4x-1) * 4
= 4cos(4x-1)

The calculation above is the derivative of the function f(x) = sin(4x-1).
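The chain-rule result can be sanity-checked numerically: the closed form 4cos(4x-1) should agree with a central finite-difference approximation of sin(4x-1). A small stdlib-only sketch (not part of the original calculator):

```python
import math

def f(x):
    return math.sin(4 * x - 1)

def derivative_exact(x):
    # closed form from the chain rule above: 4*cos(4x - 1)
    return 4 * math.cos(4 * x - 1)

def derivative_numeric(x, h=1e-6):
    # central difference (f(x+h) - f(x-h)) / (2h), error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

# the two agree to high precision at a few sample points
for x in (0.0, 0.5, 2.0):
    assert abs(derivative_exact(x) - derivative_numeric(x)) < 1e-6
```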
https://www.doubtnut.com/question-answer-physics/inside-a-neutral-hollow-conducting-sphere-of-radius-x-and-centre-c-a-point-charge-q-is-placed-as-sho-644384094
Inside a neutral hollow conducting sphere of radius X and centre C, a point charge q is placed at the centre. Another point charge q_1 is placed outside the sphere at a distance d from the centre. The net electrostatic force on the charge q placed at the centre is:

Updated On: 27-06-2022

Text Solution

- Zero
- kqq_1/(d+X)^2
- kqq_1/X^2
- kqq_1/d^2

Transcript

This is a conducting shell, so the net electric field at any point inside the conductor material must be zero. Take a Gaussian surface lying between the inner and outer surfaces of the shell: the charge enclosed by it must be zero, so the charge induced on the inner surface of the shell is -q. Is this -q distributed uniformly or non-uniformly? Because of electrostatic shielding, the charge distribution on the inner surface has nothing to do with the charges present outside the shell: the induced charge -q is distributed uniformly over the inner surface, with field lines emerging from the positive charge q and terminating on the induced negative charge. Since the charge on the inner surface is uniformly distributed, the field it produces at the centre is zero, and the external charge q_1 is screened by the shell. Therefore the net force on the charge q kept at the centre is zero.
https://elki-project.github.io/releases/current/javadoc/elki/clustering/kmedoids/package-summary.html
# Package elki.clustering.kmedoids

K-medoids clustering (PAM).

## Interface Summary

• KMedoidsClustering<O>: Interface for clustering algorithms that produce medoids.

## Class Summary

• AlternatingKMedoids<O>: A k-medoids clustering algorithm, implemented as an EM-style batch algorithm; known in the literature as the "alternate" method.
• AlternatingKMedoids.Par<V>: Parameterization class.
• CLARA<V>: Clustering Large Applications (CLARA) is a clustering method for large data sets based on partitioning around medoids (PAM) with sampling.
• CLARA.CachedDistanceQuery<V>: Cached distance query.
• CLARA.Par<V>: Parameterization class.
• CLARANS<O>: CLARANS, "a method for clustering objects for spatial data mining", is inspired by PAM (partitioning around medoids) and CLARA, and is also based on sampling.
• CLARANS.Assignment: Assignment state.
• CLARANS.Par<V>: Parameterization class.
• EagerPAM<O>: Variation of PAM that eagerly performs all swaps that yield an improvement during an iteration.
• EagerPAM.Instance: Instance for a single dataset.
• EagerPAM.Par<O>: Parameterization class.
• FastCLARA<V>: Clustering Large Applications (CLARA) with the FastPAM improvements, to increase scalability in the number of clusters.
• FastCLARA.Par<V>: Parameterization class.
• FastCLARANS<V>: A faster variation of CLARANS that can explore O(k) as many swaps at a similar cost by considering all medoids for each candidate non-medoid.
• FastCLARANS.Assignment: Assignment state.
• FastCLARANS.Par<V>: Parameterization class.
• FasterCLARA<O>: Clustering Large Applications (CLARA) with the FastPAM improvements, to increase scalability in the number of clusters.
• FasterCLARA.Par<V>: Parameterization class.
• FasterPAM<O>: Variation of FastPAM that eagerly performs any swap that yields an improvement during an iteration.
• FasterPAM.Instance: Instance for a single dataset.
• FasterPAM.Par<O>: Parameterization class.
• FastPAM<O>: FastPAM, an improved version of PAM that is usually O(k) times faster.
• FastPAM.Instance: Instance for a single dataset.
• FastPAM.Par<V>: Parameterization class.
• FastPAM1<O>: FastPAM1, a version of PAM that is O(k) times faster, i.e., now in O((n-k)²).
• FastPAM1.Instance: Instance for a single dataset.
• FastPAM1.Par<V>: Parameterization class.
• PAM<O>: The original Partitioning Around Medoids (PAM) algorithm for k-medoids clustering, as proposed by Kaufman and Rousseeuw; a largely equivalent method was also proposed by Whitaker in the operations research domain, where it is well known by the name "fast interchange".
• PAM.Instance: Instance for a single dataset.
• PAM.Par<O>: Parameterization class.
• ReynoldsPAM<O>: The Partitioning Around Medoids (PAM) algorithm with some additional optimizations proposed by Reynolds et al.
• ReynoldsPAM.Instance: Instance for a single dataset.
• ReynoldsPAM.Par<V>: Parameterization class.
• SingleAssignmentKMedoids<O>: K-medoids clustering using the initialization only, then assigning each object to the nearest neighbor.
• SingleAssignmentKMedoids.Instance: Instance for a single dataset.
• SingleAssignmentKMedoids.Par<O>: Parameterization class.
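As a rough illustration of the swap-based idea shared by this algorithm family, here is a minimal stdlib-only Python sketch. It applies any improving (medoid, non-medoid) swap, so it is closer in spirit to the eager variants than to classic PAM (which performs the single best swap per iteration), and it is in no way the ELKI implementation:

```python
import itertools

def total_cost(points, medoids, dist):
    # sum of distances from each point to its nearest medoid
    return sum(min(dist(p, m) for m in medoids) for p in points)

def k_medoids(points, k, dist=lambda a, b: abs(a - b)):
    """Start from the first k points as medoids, then keep applying any
    improving (medoid, non-medoid) swap until no swap improves the cost."""
    medoids = list(points[:k])
    cost = total_cost(points, medoids, dist)
    improved = True
    while improved:
        improved = False
        for i, o in itertools.product(range(k), points):
            if o in medoids:
                continue
            candidate = medoids[:i] + [o] + medoids[i + 1:]
            c = total_cost(points, candidate, dist)
            if c < cost:
                medoids, cost, improved = candidate, c, True
    return sorted(medoids), cost

print(k_medoids([0, 1, 2, 10, 11, 12], 2))  # ([1, 11], 4)
```

On two well-separated 1-D clusters the sketch recovers the obvious medoids; the real implementations above add sampling (CLARA/CLARANS) and swap-cost caching (FastPAM) to make this scale.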
https://puzzling.stackexchange.com/questions/59793/tell-us-the-day
# Tell Us The Day!

A group of campers have been on vacation so long that they've forgotten the day of the week. The following conversation ensues.

Darryl: What's the day? I don't think it is Thursday, Friday or Saturday.
Tracy: Well that doesn't narrow it down much. Yesterday was Sunday.
Melissa: Yesterday wasn't Sunday, tomorrow is Sunday.
Ben: The day after tomorrow is Saturday.
Adrienne: The day before yesterday was Thursday.
Susie: Tomorrow is Saturday.
David: I know that the day after tomorrow is not Friday.

If only one person's statement is true, what day of the week is it?

The day is...

Wednesday

All you have to do is...

Check off what days of the week each person thinks it could be, and the correct day is the one with only one check mark (given by Darryl, the only correct person):

Day      | S | M | T | W | Th | F | Sa |
Darryl   | x | x | x | x |    |   |    |
Tracy    |   | x |   |   |    |   |    |
Melissa  |   |   |   |   |    |   | x  |
Ben      |   |   |   |   | x  |   |    |
Adrienne |   |   |   |   |    |   | x  |
Susie    |   |   |   |   |    | x |    |
David    | x | x | x |   | x  | x | x  |

The day is Wednesday, and Darryl's statement is accurate. This can be determined as follows: Darryl and David make statements about what day it is not, while the other campers make statements identifying a certain day. It cannot be Monday, Tuesday, or Sunday, because then both Darryl and David would be correct. The day also cannot be Thursday, as both Ben and David would be correct. Friday would make both Susie and David correct, and Saturday would make Melissa and David correct. Thus, only Wednesday leaves one camper correct.
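The elimination argument above can also be checked by brute force: encode each camper's claim as a predicate on the day of the week and look for the day where exactly one claim holds. A small Python sketch:

```python
DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]

def shift(day, n):
    """Day of the week n days after `day` (negative n looks backwards)."""
    return DAYS[(DAYS.index(day) + n) % 7]

claims = [
    lambda d: d not in ("Thursday", "Friday", "Saturday"),           # Darryl
    lambda d: shift(d, -1) == "Sunday",                              # Tracy
    lambda d: shift(d, -1) != "Sunday" and shift(d, 1) == "Sunday",  # Melissa
    lambda d: shift(d, 2) == "Saturday",                             # Ben
    lambda d: shift(d, -2) == "Thursday",                            # Adrienne
    lambda d: shift(d, 1) == "Saturday",                             # Susie
    lambda d: shift(d, 2) != "Friday",                               # David
]

# days on which exactly one statement is true
answers = [d for d in DAYS if sum(c(d) for c in claims) == 1]
print(answers)  # ['Wednesday']
```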
http://acm.sdut.edu.cn/onlinejudge2/index.php/Home/Index/problemdetail/pid/2727.html
### Missing Pages

Time Limit: 1000 ms Memory Limit: 65536 KiB

#### Problem Description

Long ago, there were periodicals called newspapers, and these newspapers were printed on paper, and people used to read them, and perhaps even share them. One unfortunate thing about this form of media is that every so often, someone would like an article so much, they would take it with them, leaving the rest of the newspaper behind for others to enjoy. Unfortunately, because of the way that paper was folded, not only would the page with that article be gone, so would the page on the reverse side and also two other pages that were physically on the same sheet of folded paper. For this problem we assume the classic approach is used for folding paper to make a booklet that has a number of pages that is a multiple of four. As an example, a newspaper with 12 pages would be made of three sheets of paper (see figure below). One sheet would have pages 1 and 12 printed on one side, and pages 2 and 11 printed on the other. Another piece of paper would have pages 3 and 10 printed on one side and 4 and 9 printed on the other. The third sheet would have pages 5, 6, 7, and 8. When one numbered page is taken from the newspaper, the question is what other pages disappear.

#### Input

Each test case will be described with two integers N and P, on a line, where 4 ≤ N ≤ 1000 is a multiple of four that designates the length of the newspaper in terms of numbered pages, and 1 ≤ P ≤ N is a page that has been taken. The end of the input is designated by a line containing only the value 0.

#### Output

For each case, output, in increasing order, the page numbers for the other three pages that will be missing.

#### Sample Input

12 2
12 9
8 3
0

#### Sample Output

1 11 12
3 4 10
4 5 6
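From the sheet layout described above, sheet i (1-indexed from the outside) carries pages 2i-1, 2i, N-2i+1 and N-2i+2, so page P and its mirror N+1-P always share a sheet. A short solution sketch (helper name is ours, not part of the judge's template):

```python
def missing_pages(n, p):
    """Given an n-page paper (n a multiple of 4) and a taken page p,
    return the other three pages printed on the same folded sheet."""
    # the smaller of p and its mirror n+1-p determines the sheet number
    sheet = (min(p, n + 1 - p) + 1) // 2
    pages = {2 * sheet - 1, 2 * sheet, n - 2 * sheet + 1, n - 2 * sheet + 2}
    pages.discard(p)
    return sorted(pages)

# reproduces the sample cases: 1 11 12 / 3 4 10 / 4 5 6
for n, p in ((12, 2), (12, 9), (8, 3)):
    print(*missing_pages(n, p))
```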
http://reflectometry.org/danse/docs/elements/guide/formula_grammar.html
# Chemical Composition¶ Some properties are available for groups of elements. Groups are specified as a chemical formula string and either density or cell volume for the crystal structure. While it does not provide any information about molecular structure, a formula does provide complete control over chemical composition. A formula string is translated into a formula using periodictable.formulas.formula(): • Formula strings consist of counts and atoms, where individual atoms are represented by periodic table symbol. The atoms are case sensitive, so “CO” is different from “Co”. Here is an example of calcium carbonate: >>> from periodictable import formula >>> print(formula("CaCO3")) CaCO3 • Formulas can contain multiple groups separated by space or plus or by using parentheses. Whole groups can have a repeat count. The following are equivalent definitions of hydrated calcium carbonate: >>> print(formula("CaCO3+6H2O")) CaCO3(H2O)6 >>> print(formula("CaCO3 6H2O")) CaCO3(H2O)6 >>> print(formula("CaCO3(H2O)6")) CaCO3(H2O)6 • Parentheses can nest, e.g., in polyethylene glycol: >>> print(formula("HO ((CH2)2O)6 H")) HO((CH2)2O)6H • Isotopes are represented by index, such as O[18] = 18O: >>> print(formula("CaCO[18]3+6H2O")) CaCO[18]3(H2O)6 • Ions are represented by charge, such as O{2-} = O2-: >>> print(formula("P{5+}O{2-}4")) P{5+}O{2-}4 If charge is +/- 1 then the number is optional: >>> print(formula("Na{+}Cl{1-}")) Na{+}Cl{-} When specifying both charge and isotope, isotope comes first: >>> print(formula("Fe[56]{2+}")) Fe[56]{2+} Even though the charge is on the individual atoms, the entire formula has a charge: >>> print(formula("P{5+}O{2-}4").charge) -3 • Counts can be integer or decimal: >>> print(formula("CaCO3+(3HO1.5)2")) CaCO3((HO1.5)3)2 • Formula density can be specified using the special ‘@’ tag: >>> print(formula("NaCl@2.16").density) 2.16 Density gives the isotopic density of the compound, so for example, D2O could be specified using: >>> 
print("%.3f"%formula("D2O@1.112").density) 1.112 It can also be specified using the natural density of the compound, assuming the isotope substitution does not change the unit cell volume: >>> print("%.3f"%formula("D2O@1n").density) 1.112 Density applies to the entire formula, so for example a D2O-H2O 2:1 mixture (not by mass or by volume) would be: >>> print("%.3f"%formula("2D2O + H2O@1n").density) 1.074 • Mass fractions use %wt, with the final portion adding to 100%: >>> print(formula("10%wt Fe // 15% Co // Ni")) FeCo1.4214Ni7.13602 Only the first item needs to specify that it is a mass fraction, and the remainder can use a bare %. • Volume fractions use %vol, with the final portion adding to 100%: >>> print(formula("10%vol Fe // Ni")) FeNi9.68121 Only the first item needs to specify that it is a volume fraction, and the remainder can use a bare %. Volume fraction mixing is only possible if the densities are known for the individual components, which will require the formula density tag if the component is not an element. A density estimate is given for the mixture but in general it will not be correct, and should be set explicitly for the resulting compound. • Specific mass can be given with a count followed by mass units: >>> print(formula("5g NaCl // 50mL H2O@1")) NaCl(H2O)32.4407 Density will be required for materials given by volume. Mass will be stored in the total_mass attribute of the resulting formula. • Multilayers can be specified by thickness: >>> print(formula("1 um Si // 5 nm Cr // 10 nm Au")) Si119.99CrAu1.41722 Density will be required for each layer. Thickness will be stored in the total_thickness attribute of the resulting formula. Thickness can be converted to total_volume by multiplying by cross section, and to total_mass by multiplying that by density. • Mixtures can nest.
The following is a 10% salt solution by weight mixed 20:80 by volume with D2O: >>> print(formula("20%vol (10%wt NaCl@2.16 // H2O@1) // D2O@1n")) NaCl(H2O)29.1966(D2O)122.794 • Empty formulas are supported, e.g., for air or vacuum: >>> print(formula()) >>> formula() formula('') The grammar used for parsing formula strings is the following:

formula    :: compound | mixture | nothing
mixture    :: quantity | percentage
quantity   :: count unit part ('//' count unit part)*
percentage :: count '%wt|%vol' part ('//' count '%' part)* '//' part
part       :: compound | '(' mixture ')'
compound   :: group (separator group)* density?
group      :: count element+ | '(' formula ')' count
element    :: symbol isotope? ion? count?
symbol     :: [A-Z][a-z]*
isotope    :: '[' number ']'
ion        :: '{' number? [+-] '}'
density    :: '@' count
count      :: number | fraction
number     :: [1-9][0-9]*
fraction   :: ([1-9][0-9]* | 0)? '.' [0-9]*
separator  :: space? '+'? space?
unit       :: mass | volume | length
mass       :: 'kg' | 'g' | 'mg' | 'ug' | 'ng'
volume     :: 'L' | 'mL' | 'uL' | 'nL'
length     :: 'cm' | 'mm' | 'um' | 'nm'

Formulas can also be constructed from atoms or other formulas: • A simple formula can be created from a bare atom: >>> from periodictable import Ca, C, O, H >>> print(formula(Ca)) Ca • More complex structures will require a sequence of counts and fragments. The fragment itself can be a structure: >>> print(formula( [ (1,Ca), (1,C), (3,O), (6,[(2,H),(1,O)]) ] )) CaCO3(H2O)6 • Structures can also be built with simple formula math: >>> print(formula("CaCO3") + 6*formula("H2O")) CaCO3(H2O)6 • Formulas can be easily cloned: >>> print(formula( formula("CaCO3+6H2O"))) CaCO3(H2O)6 ## Density¶ Density can be specified directly when the formula is created, or updated within a formula. For isotope-specific formulas, the density can be given either as the density of the formula using naturally occurring abundances, if the unit cell is approximately the same, or as the density specific to the isotopes used.
This makes heavy water density easily specified as: >>> D2O = formula('D2O',natural_density=1) >>> print("%s %.4g"%(D2O,D2O.density)) D2O 1.112 Density can also be estimated from the volume of the unit cell, either by using the covalent radii of the constituent atoms and assuming some packing factor, or by knowing the lattice parameters of the crystal which makes up the material. Standard packing factors for hcp, fcc, bcc, cubic and diamond on uniform spheres can be used if the components are of about the same size. The formula should specify the number of atoms in the unit cell, which is 1 for cubic, 2 for bcc and 4 for fcc. Be sure to use the molecular mass (M.molecular_mass in g) rather than the molar mass (M.mass in u = g/mol) in your calculations. Because the packing fraction method relies on the covalent radius estimate it is not very accurate: >>> from periodictable import elements, formula >>> Fe = formula("2Fe") # bcc lattice has 2 atoms per unit cell >>> Fe.density = Fe.molecular_mass/Fe.volume('bcc') >>> print("%.3g"%Fe.density) 6.55 >>> print("%.3g"%elements.Fe.density) 7.87 Using lattice parameters the results are much better: >>> Fe.density = Fe.molecular_mass/Fe.volume(a=2.8664) >>> print("%.3g"%Fe.density) 7.88 >>> print("%.3g"%elements.Fe.density) 7.87 ## Mixtures¶ Mixtures can be created by weight or volume ratios, with the density of the result computed from the density of the materials. 
For example, the following is a 2:1 mixture of water and heavy water: >>> from periodictable import formula, mix_by_volume, mix_by_weight >>> H2O = formula('H2O',natural_density=1) >>> D2O = formula('D2O',natural_density=1) >>> mix = mix_by_volume(H2O,2,D2O,1) >>> print("%s %.4g"%(mix,mix.density)) (H2O)2D2O 1.037 Note that this is different from a 2:1 mixture by weight: >>> mix = mix_by_weight(H2O,2,D2O,1) >>> print("%s %.4g"%(mix,mix.density)) (H2O)2.2234D2O 1.035 Except in the simplest of cases, the density of the mixture cannot be computed from the densities of the components, and the resulting density should be set explicitly. ## Derived values¶ Once a formula has been created, it can be used for summary calculations. The following is an example of hydrated quartz, which shows how to compute molar mass and neutron/X-ray scattering length density: >>> import periodictable >>> SiO2 = periodictable.formula('SiO2') >>> hydrated = SiO2 + periodictable.formula('3H2O') >>> print('%s mass %s'%(hydrated,hydrated.mass)) SiO2(H2O)3 mass 114.13014 >>> rho,mu,inc = periodictable.neutron_sld('SiO2+3H2O',density=1.5,wavelength=4.75) >>> print('%s neutron sld %.3g'%(hydrated,rho)) SiO2(H2O)3 neutron sld 0.849 >>> rho,mu = periodictable.xray_sld(hydrated,density=1.5, ... wavelength=periodictable.Cu.K_alpha) >>> print('%s X-ray sld %.3g'%(hydrated,rho)) SiO2(H2O)3 X-ray sld 13.5 ## Biomolecules¶ The periodictable.fasta module can be used to load and manage biomolecules. These can be used to compute molecular weights, approximate volumes and scattering for various lipids and proteins. In addition, it supports labile hydrogen calculations, allowing you to compute the neutron scattering length density of the molecule in the presence of D2O as a solvent, assuming all labile hydrogens are substituted.
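The %wt mixing rule shown earlier ("10%wt Fe // 15% Co // Ni" becoming FeCo1.4214Ni7.13602) can be reproduced in plain Python. This is a hedged sketch of the underlying arithmetic, assuming the mass fractions are converted to relative molar amounts n_i = w_i/M_i; the atomic masses below are standard values hard-coded for the example:

```python
# Relative atom counts for a mass-fraction mixture: n_i = w_i / M_i,
# normalized so that the first component has count 1.
masses = {"Fe": 55.845, "Co": 58.933, "Ni": 58.693}   # standard atomic masses
fractions = {"Fe": 0.10, "Co": 0.15, "Ni": 0.75}      # "10%wt Fe // 15% Co // Ni"

moles = {el: w / masses[el] for el, w in fractions.items()}
base = moles["Fe"]
counts = {el: n / base for el, n in moles.items()}
print({el: round(c, 4) for el, c in counts.items()})
```

Up to the precision of the atomic masses used, this reproduces the Co and Ni counts from the FeCo1.4214Ni7.13602 example above.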
# Can dynamical synapses produce true self-organized criticality? Ariadne de Andrade Costa Departamento de Física, FFCLRP, Universidade de São Paulo, 14040-901, Ribeirão Preto, SP, Brazil    Mauro Copelli Departamento de Física, Universidade Federal de Pernambuco, 50670-901, Recife-PE, Brazil    Osame Kinouchi Departamento de Física, FFCLRP, Universidade de São Paulo, 14040-901, Ribeirão Preto, SP, Brazil ###### Abstract Neuronal networks can present activity described by power-law distributed avalanches, presumed to be a signature of a critical state. Here we study a random-neighbor network of excitable cellular automata coupled by dynamical synapses. The model exhibits behavior very similar to that of conservative self-organized criticality (SOC) models, even with dissipative bulk dynamics. This occurs because, in the stationary regime, the model is conservative on average and, in the thermodynamic limit, the probability distribution for the global branching ratio converges to a delta function centered at its critical value. So, this non-conservative model pertains to the same universality class as conservative SOC models, in contrast with other dynamical-synapse models that present only self-organized quasi-criticality (SOqC). Analytical results show very good agreement with simulations of the model and enable us to study the emergence of SOC as a function of the parametric derivatives of the stationary branching ratio. ## 1 Introduction In the past few decades there has been an increasing exploration of the heuristic idea that the concepts of complexity and criticality are intermingled [1, 2, 3, 4, 5]. Criticality is associated with critical surfaces in parameter space, for example, a critical point or a critical line in a continuous phase transition.
If the system has a multidimensional parameter space, to be critical at a surface necessarily implies fine tuning of the system parameters to be close to that surface. Since several complex systems seem to lie spontaneously at the border of such critical surfaces, we need to explain such fine tuning. Several mechanisms have been proposed to do that, the most popular being based on the notion of Self-Organized Criticality (SOC). SOC in microscopically conservative systems, whose prototypical example is the sandpile model [3], is by now a well understood issue [6, 7, 8]. The case of non-conservative (bulk) dynamics has also been subjected to intense study, and the present consensus is that these systems are critical only in the conservative limit [9, 10, 11, 12, 6, 13, 14, 15, 16, 17]. Away from that limit, what one observes is simply a large critical region surrounding the critical point, which can be confounded with a generic critical interval due to finite-size effects (pseudo-SOC) [11, 12, 6, 13, 14]. Another class of models has loading mechanisms that make the system hover around the critical point [9, 18], with fluctuations that do not vanish in the thermodynamic limit, a case which has been called Self-Organized quasi-Criticality (SOqC) [15, 16]. A review of SOC applied to neural networks discusses several limitations of the present models [19]. Here we introduce a new class of models that, although non-conservative microscopically, after a transient achieve a stationary regime which is conservative on average, with vanishing fluctuations in the thermodynamic limit. This new behavior, not seen in previous SOC models, is due to an intra-avalanche ultra-soft loading mechanism (loading on a slower time scale than that present in SOqC models). Another advantage of our model is that it has a very simple, exact and transparent mean-field treatment, in comparison to other models in the literature.
## 2 The model First we introduce the Levina-Herrmann-Geisel (LHG) model [18, 16], which directly inspired our model. Surprisingly, we will find that the models pertain to different universality classes. In the LHG model the basic excitable elements are integrate-and-fire neurons with time-dependent membrane potentials V_i(t). The synapses J_ij also evolve in time. The synaptic coupling is all-to-all (so there are O(N²) synapses). The model dynamics is given by

∂V_i/∂t = I_ext δ(t − t_i^driv) + Σ_{j=1}^{N−1} [u J_ij/(N−1)] δ(t − t_j^sp) − V_max δ(t − t_i^sp),   (1)

∂J_ij/∂t = (1/τ_J)(α/u − J_ij) − u J_ij δ(t − t_j^sp).   (2)

• Driving: the term I_ext is a slow drive on neuron i that acts at times t_i^driv with a given rate. • Firing: the neuron spikes when V_i reaches V_max, which defines the spike time t_i^sp. The last term of Eq. (1) corresponds to a reset of the membrane potential. • Integration: the sum over j corresponds to the integration, by the postsynaptic neuron i, of the synaptic contributions of all firing presynaptic neurons j. • Synaptic depression: when a presynaptic neuron fires (which defines the spike time t_j^sp), all its synapses (out-links) are depressed by the amount uJ_ij. • Synaptic recovery: synapses recover to a target value with a time scale given by τ_J. Both LHG [18] and Bonachela et al. [16] did not study the full parameter space, but mainly studied the effect of varying α with the other parameters fixed. They found a critical-like region around a particular value of α. However, Bonachela et al. showed that this region is pseudocritical, because the system's behavior is an oscillation around the critical point with an amplitude that does not vanish in the large-N limit. Our model builds upon a random-neighbor network of excitable neurons used previously [20] (inspired by the well-known SIRS epidemiological model). The possible states S_i = 0, 1, …, n−1 are: • Susceptible or Quiescent: S_i = 0; • Infected or Firing: S_i = 1; • Recovering or Refractory: S_i = 2, …, n−1. The network is of random-neighbor type, each presynaptic neuron i having exactly K outlinks to postsynaptic neurons j, described by probabilistic couplings (synapses) P_ij.
Firing sites can induce firing in neighboring sites, creating an avalanche (defined as an uninterrupted sequence of firing sites). The dynamics is composed of the following steps: • Driving: after an avalanche has ended and all sites are either quiescent or refractory, we choose a single site at random and force it to fire (S_i = 1), creating a new avalanche. • Firing: the probability that a presynaptic neighbor j does not induce a firing in the postsynaptic neuron i is 1 − P_ji δ(S_j, 1), where henceforth δ(·, ·) is the Kronecker delta function. So, the probability that a quiescent (S_i = 0) site spikes is 1 − Π_j [1 − P_ji δ(S_j, 1)], where the product runs over the K_i incoming links of site i (note that ⟨K_i⟩ = K). • Refractory time: after a site spikes, it deterministically stays refractory for n − 2 time steps, and then returns to quiescence:

S_i(t+1) = S_i(t) + 1, if S_i(t) = 1, 2, …, n−2;   S_i(t+1) = 0, if S_i(t) = n−1.   (3)

• Synaptic depression and recovery: to be described below. Each site has a local branching ratio σ_i = Σ_j P_ij, and we can define a global branching ratio σ = (1/N) Σ_i σ_i. If the couplings P_ij are drawn from a uniform distribution with a given average and kept fixed, then the average branching ratio σ controls the collective behavior of the network: for σ < 1 (σ > 1) the system is subcritical (supercritical), with unstable (self-sustained) activity. This is the static version of the model, without dynamical synapses, which undergoes a continuous absorbing-state phase transition at σ_c = 1 [20]. For this static model, criticality is only achieved by fine tuning σ to the critical value σ_c = 1. This absorbing continuous phase transition pertains to the class of Compact Directed Percolation (CDP) and is identical to that present in canonical conservative SOC models [7, 8, 15]. Here we report new results obtained with a homeostatic synaptic mechanism described by time-dependent probabilities P_ij(t) that follow a depressing/recovering synaptic dynamics similar to that used in the LHG model [18, 16].
The crucial difference with respect to their model, however, is that the recovery mechanism of our synaptic dynamics occurs on a slower, O(1/N), time scale:

P_ij(t+1) = P_ij(t) + [ε/(NK)] [A − P_ij(t)] − u P_ij(t) δ(t, t_j^sp),   (4)

where t_j^sp is the spiking time of the j-th presynaptic neuron. A comparison with the synaptic part of the LHG model (Eq. 2) shows that our recovery rate ε/(NK) plays the role of 1/τ_J and the target A plays the role of the LHG recovery target, with u having the same meaning in both models. The parameters of Eq. 4 can be better understood in Figure 1. Notice that the dynamics of P_ij(t) leads to a time-dependent global branching ratio σ(t). The different time scale from the LHG model arises because they use an all-to-all coupling where the effective synapses already have a 1/N scale (like in the all-to-all Ising model). So, synaptic recovery in the LHG model perturbs the synapses with a term of the same order, and this could be the origin of the large variance of the average synaptic weight measured in [16]. In our case, P_ij is O(1) and the 1/N factor in Eq. (4) defines an ultra-soft recovery mechanism. Although this could be considered a small modeling change, it produces an important change in the universality class of our model. We observe that the 1/N scaling in the synaptic dynamics, also present in the LHG model, is somewhat problematic from the biological point of view, since synapses do not have information about how many neurons there are in the network. However, in the literature on networks with dynamical links, this feature corresponds to the modeling state of the art, and there are no models without it. Perhaps, in real networks, the recovery dynamics mediated by ε does not depend on N but has anyway a very large recovery time (small ε), which is sufficient to put the system in a state very close to the critical one. This problem also suggests a research topic: on what time scale do biological synapses recover after a spike, and does the recovery time depend on the size of the network?
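The update rules above can be sketched in plain Python. This is a minimal illustration, not the authors' simulation code; the parameter values and the uniform random-graph construction are invented for the example:

```python
import random

# Sketch of the automaton: probabilistic firing, the refractory cycle of
# Eq. (3), and the depression/ultra-soft recovery of Eq. (4).
# Parameter values are illustrative only.
N, K, n = 100, 4, 3        # sites, out-links per site, number of states
A, u, eps = 1.0, 0.2, 1.0  # recovery target, depression fraction, recovery rate

random.seed(1)
out = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
P = [[1.0 / K] * K for _ in range(N)]   # initial global branching ratio = 1
S = [0] * N                             # 0 quiescent, 1 firing, >1 refractory

def step(firing):
    """One parallel update; returns the set of sites firing at the next step."""
    new_firing = set()
    for i in firing:
        for k, j in enumerate(out[i]):
            if S[j] == 0 and random.random() < P[i][k]:
                new_firing.add(j)            # probabilistic excitation
            P[i][k] -= u * P[i][k]           # synaptic depression, Eq. (4)
    for i in range(N):                       # ultra-soft recovery, Eq. (4)
        for k in range(K):
            P[i][k] += (eps / (N * K)) * (A - P[i][k])
    for i in range(N):                       # refractory cycle, Eq. (3)
        if 1 <= S[i] <= n - 2:
            S[i] += 1
        elif S[i] == n - 1:
            S[i] = 0
    for j in new_firing:
        S[j] = 1
    return new_firing

def avalanche(max_steps=1000):
    """Force one quiescent site to fire and count the avalanche size."""
    quiescent = [i for i in range(N) if S[i] == 0]
    while not quiescent:                     # let refractory sites relax
        step(set())
        quiescent = [i for i in range(N) if S[i] == 0]
    i = random.choice(quiescent)
    S[i] = 1
    firing, size = {i}, 1
    for _ in range(max_steps):
        firing = step(firing)
        size += len(firing)
        if not firing:
            break
    return size

sizes = [avalanche() for _ in range(100)]
sigma = sum(sum(row) for row in P) / N   # global (architectural) branching ratio
print("mean avalanche size %.2f, sigma %.3f" % (sum(sizes) / len(sizes), sigma))
```

Because recovery acts on every synapse once per time step with the 1/(NK) prefactor, the loading here is "intra-avalanche" and ultra-soft, in the sense discussed above.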
We also observe that σ is a property of the architecture and of how the network links change. It reflects the average number of firings potentially induced by a firing site in the absence of overall network activity. So, σ is not the dynamical branching ratio measured, from an actual firing time series, by averaging the ratio between the numbers of firing sites in consecutive time steps, which is equal to one even for supercritical systems [19]. The two concepts (branching ratio for the architecture and branching ratio for the activity) have the same interpretation only in the critical limit σ → 1. ## 3 Mean-Field analysis The model is amenable to a very simple and transparent analytic treatment at the mean-field level. We initially follow the steps developed for the static model with fixed couplings [20]. Let ρ_s(t) be the density of sites in state s at time t, and let ρ(t) be the density of active (firing) sites. The probability that a quiescent neuron at time step t will be excited in the next time step by at least one of its neighbors can be approximated [20] by 1 − [1 − σρ(t)/K]^K. Given that a site can only be excited when it is quiescent, the dynamics of ρ can be written by resorting again to the mean-field approximation. The dynamics of the density of quiescent neurons is coupled to those of the refractory states, whose deterministic dynamics immediately yield, in the thermodynamic limit N → ∞:

ρ_2(t+1) = ρ(t)   (5)
ρ_3(t+1) = ρ_2(t)   (6)
⋮
ρ_n(t+1) = ρ_{n−1}(t).   (7)

To study the stationary state of the mean-field equations, we drop the time dependence (ρ_s(t) → ρ_s*) in the above equations, so that all refractory densities equal ρ*. Imposing the stationary condition onto the normalization Σ_s ρ_s* = 1, we arrive at a stationary density of quiescent sites 1 − (n−1)ρ*. We therefore have, in the stationary state, the first of our self-consistent equations, which describes the stationary density of active sites for fixed coupling σ* [20]:

ρ* = [1 − (n−1)ρ*][1 − (1 − σ*ρ*/K)^K].   (8)

By considering the small-ρ* limit in Eq.
(8), we can obtain the mean-field behavior of a critical branching process: ρ* ∝ (σ* − σ_c)^β, with the critical value σ_c = 1 and the usual mean-field exponent β = 1 [7]. In Eq. (8), σ* was assumed constant. To obtain its value, we impose the stationary condition on Eq. (4) for the synaptic dynamics, such that the dissipation and the driving (loading) of the system must be the same. Dropping the time dependence and averaging Eq. (4) over the ensemble, we obtain

[ε/(KN)] [A − σ*/K] = u σ* ρ*/K,   (9)

which can be solved for σ*, rendering

σ* = AKε/(uKNρ* + ε).   (10)

This is the second of our self-consistent equations, stating the average coupling as a result of the interplay between synaptic depression (u) and recovery (ε), in light of a constant density ρ* of spiking neurons. Together, Eqs. (8) and (10) can be solved to determine ρ* and σ*. In particular, in the critical region σ* ≈ 1, we can understand how the model parameters affect the distance of the stationary branching ratio from what would be the critical value in a truly self-organized system:

σ* − 1 ≃ (AK − 1)/(1 + x),   x ≡ uKN/[(n−1)ε],   (11)

where the scaling variable x condenses the effect of most of the parameters and the important N dependence. Therefore, the mean-field calculation predicts that when x ≫ 1 (which we call large-N tuning, and which is realized for finite u and ε and large N), the stationary value σ* differs from σ_c = 1 by a term of order 1/N. The several parameters of the model only affect the constant prefactor of this term. We have a critical state without the need of fine tuning of the parameters, requiring only the large-N limit, which enables the evolution of σ(t) to approach the region where Eq. (11) is valid. We also note that σ* = 1 can be produced exactly in our model, but at the expense of choosing AK = 1, which is a fine-tuning operation (pulling σ* towards 1). In short, due to the synaptic dynamics, σ is no longer a parameter (as in the static model [20]) but is rather a slow dynamical variable whose stationary value depends on the parameters A, K, n, u, ε and N.
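The self-consistent pair, Eqs. (8) and (10), can be iterated numerically. The sketch below uses illustrative parameter values (not those used in the paper's figures) and shows the stationary branching ratio approaching the critical value 1 as N grows:

```python
# Damped fixed-point iteration of the mean-field pair:
#   sigma* = A*K*eps / (u*K*N*rho* + eps)                       -- Eq. (10)
#   rho*   = [1-(n-1)*rho*] * [1 - (1 - sigma*rho*/K)**K]       -- Eq. (8)
# Parameter values are illustrative only.
def stationary(A=1.0, K=10, n=3, u=0.1, eps=1.0, N=10**6, iters=2000):
    rho = 1e-4                         # initial guess for the activity density
    sigma = 1.0
    for _ in range(iters):
        sigma = A * K * eps / (u * K * N * rho + eps)
        rho_new = (1 - (n - 1) * rho) * (1 - (1 - sigma * rho / K) ** K)
        rho = 0.5 * (rho + rho_new)    # damping stabilizes the iteration
    return rho, sigma

for N in (10**4, 10**6, 10**8):
    rho, sigma = stationary(N=N)
    print("N=%-9g sigma*=%.8f rho*=%.3g" % (N, sigma, rho))
# sigma* - 1 shrinks roughly as 1/N, and rho* roughly as A*eps/(u*N),
# in line with Eqs. (11) and (12).
```
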
The coupled equations (8) and (10) are solved numerically to give the stationary curves for ρ* and σ*. If we expand Eq. 8 for large N and use Eq. 10, we obtain:

ρ* ≈ Aε/(uN).   (12)

This shows that, for large N, we have σ* → 1 (critical state), and ρ* grows with A and ε but carries a 1/N prefactor. So, a graph of σ* as a function of any parameter of the model shows that the critical state (and the avalanche behavior) depends very weakly on the parameters as N grows. We will call this dependence, which vanishes fast with N, gross-tuning, to differentiate it from the fine-tuning needed in several models to achieve SOC [11, 12, 6, 13, 14, 15]. ## 4 Results Our mean-field calculation describes a system without spatial correlations, in which neighbors are chosen at random at each time step (annealed model). Although a step back in biological realism, the annealed model is very important due to the insights furnished along with the mean-field results. Irrespective of the initial distribution of couplings P_ij, which defines an initial value σ(0), the network architecture evolves ("self-organizes") during a transient toward a stationary regime. This self-organization can be followed by measuring σ(t), see Fig. 2. The fact that the branching ratio evolves and self-organizes in time is a characteristic of networks with adaptive links not present in classical SOC models like sandpile and earthquake models, where the links are static and represent, say, how much a given toppling site gives to its neighbors. Also, if perturbed or damaged, the set of synapses recovers and achieves a new stationary state similar to the previous one. The evolution of σ(t) and the corresponding approach towards criticality, as exposed in Fig. 2, seems to us a stronger instantiation of the original idea by Per Bak of a truly self-organizing system [21]. The stationary time series for σ(t) presents fluctuations around an average value σ*, with standard deviation Δσ. The stationary distribution is roughly Gaussian (Fig. 3), with a width that shrinks with N and with the critical point σ_c = 1 approached as in Eq. (11).
The most important result is that this distribution tends to a delta function, with Δσ → 0 as N → ∞; see the inset of Fig. 3. In the stationary state, the model is therefore conservative on average, in the sense that it conserves the average number of active sites. In other words, its time-averaged branching ratio is critical for large enough N. In Fig. 4 we present theoretical (Eq. (8)) and simulation results for the annealed model. To show the supercritical regime, we used large parameter values, given a finite N, in order to produce σ* > 1. From this figure it is clear that the synaptic dynamics induces the system to lie at the critical point of a continuous absorbing phase transition [7, 8]. This is an important feature, not present in the LHG model, as extensively discussed by Bonachela and Muñoz [16]. The main characteristic of our model is that if we plot σ* versus a parameter (for example, A, u or ε), not only does σ* tend to 1, but the parametric derivative also tends to zero as N increases. This means that a plateau appears around the critical point, so that the parametric dependence (for all parameters) vanishes for large N. This can be seen explicitly in the mean-field equations, where, for any parameter p, we have:

dσ*(p)/dp ≈ C_p/N,   (13)

for some constant C_p. For example, the emergence of a parametric plateau can be seen in Fig. 5 (notice the logarithmic scale for the parameter axis). The same behavior can be observed for the other parameters. The avalanche finite-size scaling, however, is somewhat problematic, as also observed in other non-conservative models [15, 16]. To obtain a precise scaling of critical avalanches for finite N, one needs to tune the parameters. For example, with the other parameters fixed, the choice

ε = ε_c N^{1/3},   (14)

as suggested by Bonachela et al. [15, 16], leads to the correct scaling of the cumulative avalanche size distribution C(s) = Σ_{s′≥s} P(s′), where P(s) is the probability that an avalanche has size s (see Fig. 6). The scaling with N in Eq. 14 is not so problematic, since it can be absorbed in the original 1/N scaling of the synaptic recovery, Eq.
4, that is, by redefining from the start the scaling of the recovery term. However, critical avalanches are observed only for a definite choice of ε_c (which now does not depend on N). To what extent this tuning implies that our system is a SOqC model is discussed below. In our model, the cutoff of the avalanche size distribution scales with N in a way that differs from the scaling found in the LHG model [16] or in other models [15]. Bonachela et al. [16] observed that a random-neighbor version of the LHG model presented an anomalous cutoff exponent, but did not report its value. Naive scaling considerations, similar to those done in [15], reproduce some but not all of these scaling exponents, so we prefer to reserve this issue for future considerations. ## 5 Discussion Bonachela et al. [15, 16] tried to define a universality class, with a definite field theory, for bulk dissipative models, which they call Self-Organized quasi-Criticality (SOqC). In doing so, they claimed that this class is characterized by three (necessary) features: • A) The stationary distribution of coupling values, which corresponds to P(σ) in our model, has finite variance even in the infinite-size limit. The system hovers around the critical point, with excursions into the supercritical and subcritical phases. The avalanche distribution is constructed by summing supercritical and subcritical avalanches; • B) The relevant phase transition associated with SOqC is a dynamical percolation transition, not a continuous absorbing phase transition as in conservative SOC models [7, 8]; • C) For finite N, to obtain a correct scaling with power-law avalanches, one must use tuned parameters (as in Eq. (14)). The LHG model [18, 16] presents all these features, being classified as a SOqC model. The same occurs with other bulk dissipative models [15]. Our model, however, presents only feature C (see Fig. 6) and lacks the important features A and B.
Our model, in contrast to the LHG model, presents vanishing variance for P(σ), so that it neither oscillates nor makes supercritical (or subcritical) excursions in the large-N limit. Also, the σ* → 1 limit is achieved very fast, with weak dependence (tuning) on the parameters, because they appear as constants in front of a 1/N factor. Anyway, we observe that, in practice, neuronal networks always work with a very large number of elements (say, one million), compared, for example, with our simulations. So, the large-N limit is the relevant one, and in this limit the avalanche behavior depends very weakly on the parameters, as can be seen from the mean-field results (Eq. 12). So, for large N, it is more precise to talk about "gross-tuning" instead of "fine-tuning" to describe the finite-size avalanche behavior of our model. Our model lies at the border of a phase transition to an absorbing state (Compact Directed Percolation), which is the relevant transition in bona fide SOC models [8, 7, 15], instead of a dynamical percolation transition. Since the universality class of our model is different from that of the LHG model, it should not be put in the same SOqC class. The problem here is one of definition: if item C is sufficient to classify a system as SOqC, then our model is SOqC (but then the SOqC class will comprise two universality classes). But if items A and B are necessary conditions, then our model is not SOqC and another class must be created. What we can claim at this moment is that, since our model lacks features A and B, its behavior resembles the conservative SOC models much more than the LHG model or other non-conservative models [15, 16]. The synaptic depression, mediated by the parameter u, is not conservative. The absence of conserved quantities in the bulk, and especially during the (self-organization) transient, is another feature that sets our model apart from conventional SOC models.
The fact that our model violates conservation in the bulk, however, is not an impediment to true criticality. Recently, Moosavi and Montakhab [22] showed that sandpile models with noise (which violates microscopic conservation but preserves conservation on average) can be critical if the noiseless model is critical and the noise has zero mean. In the case of our model, conservation on average is achieved in the stationary state, after a non-conserving transient. So, we conclude that non-conservative bulk dynamics is not a sufficient feature to put a system in the SOqC class. Which ingredients could account for the differences between our model and the LHG model, which clearly pertain to different universality classes? We identify four main possibilities: i) their model uses continuous-time integrate-and-fire units, in contrast to our excitable (SIRS) discrete-time units; ii) their units are deterministically coupled via weighted synaptic sums, while our discrete automata are coupled by probabilistic multiplicative synapses; iii) in the LHG model the avalanches are deterministic while, in our model, they are stochastic; iv) their model is based on a complete graph with O(N²) synapses, while our model sits on a random graph with finite average degree and hence O(N) synapses. It seems to us that items i), ii) and iii) could hardly be responsible for a change of universality class. On the other hand, item iv) refers to a change of topology, along with a change of time scale in the synaptic dynamics: LHG uses a change of order 1/N for effective synapses that are already of order 1/N (because of the complete-graph topology), which means that the synapses are strongly perturbed over time. In our random-neighbor model, since K is finite, the synapses P_ij are O(1) and the synaptic change of order 1/N per time step is infinitesimal for large N. This means that the relative correction in the synapses diminishes for increasing N, preventing large excursions or oscillations around the critical point.
We believe that this ultra-soft synaptic correction is the missing element, not contemplated in the literature, that produces a SOC model with vanishing variance. If this is true, then one can predict that a simulation of the Levina et al. model with a finite number of random neighbors should fall in our universality class, that is, presenting vanishing variance for the average coupling in the thermodynamic limit around a CDP transition. We finally observe that, although the distribution of the global branching ratio tends to a delta function, the distribution of local couplings P_ij or, equivalently, of local branching ratios σ_i, is not a delta function in the large-N limit. The two facts are not in contradiction, because σ is an average over sites and the delta-function limit is a large-N effect. In other words, the delta-function limit is an effect of the law of large numbers for the average of the distribution (which continues to have finite variance for large N). This means that the model is nontrivial: there is sufficient diversity in the couplings (synapses) to mimic a real biological network. In conclusion, we have presented an excitable (SIRS) automata model for neural networks with dynamical synapses which seems to pertain to a new universality class: models with dissipative bulk dynamics that, due to homeostatic mechanisms, achieve stationary dynamics that are conservative on average. In this model, as in conservative SOC models, the relevant transition pertains to the CDP class. An evolving "control" parameter (the architectural branching ratio) self-organizes to criticality, and its variance around the critical point vanishes in the thermodynamic limit. We would like to thank Carmem P. C. Prado, Renato Tinós, and Afshin Montakhab for discussions. We acknowledge financial support from CAPES, CNPq, PRONEX/FACEPE, PRONEM/FACEPE and CNAIPS-USP. ## References • [1] C. G. Langton. Computation at the edge of chaos: Phase transitions and emergent computation. Physica D, 42:12–37, 1990. • [2] S. Wolfram.
Statistical mechanics of cellular automata. Rev. Mod. Phys., 55:601–644, 1983. • [3] P. Bak, C. Tang, and K. Wiesenfeld. Self-organized criticality: An explanation of the 1/f noise. Phys. Rev. Lett., 59:381–384, 1987. • [4] D. R. Chialvo. Emergent complex neural dynamics. Nat. Phys., 6:744–750, 2010. • [5] S. M. Reia and O. Kinouchi. Conway’s game of LIFE is a near critical metastable state in the multiverse of cellular automata. Phys. Rev. E, 89:052123, 2014. • [6] H. J. Jensen. Self-Organized Criticality: Emergent Complex Behavior in Physical and Biological Systems. Cambridge University Press, Cambridge, 1998. • [7] M. A. Muñoz, R. Dickman, A. Vespignani, and S. Zapperi. Avalanche and spreading exponents in systems with absorbing states. Phys. Rev. E, 59(5):6175–6179, 1999. • [8] R. Dickman, M. A. Muñoz, A. Vespignani, and S. Zapperi. Paths to self-organized criticality. Braz. J. Phys., 30:27–41, 2000. • [9] B. Drossel and F. Schwabl. Self-organized critical forest-fire model. Phys. Rev. Lett., 69(11):1629, 1992. • [10] J. E. S. Socolar, G. Grinstein, and C. Jayaprakash. On self-organized criticality in nonconserving systems. Phys. Rev. E, 47:2366–2376, Apr 1993. • [11] H.-M. Bröker and P. Grassberger. Random neighbor theory of the Olami-Feder-Christensen earthquake model. Phys. Rev. E, 56:3944–3952, Oct 1997. • [12] M. L. Chabanol and V. Hakim. Analysis of a dissipative model of self-organized criticality with random neighbors. Phys. Rev. E, 56:R2343–R2346, 1997. • [13] O. Kinouchi and C. P. C. Prado. Robustness of scale invariance in models with self-organized criticality. Phys. Rev. E, 59(5):4964, 1999. • [14] J. X. de Carvalho and C. P. C. Prado. Self-organized criticality in the Olami-Feder-Christensen model. Phys. Rev. Lett., 84(17):4006, 2000. • [15] J. A. Bonachela and M. A. Muñoz. Self-organization without conservation: true or just apparent scale-invariance? J. Stat. Mech., 2009:P09009, 2009. • [16] J. A. Bonachela, S. de Franciscis, J. J. Torres, and M.
A. Muñoz. Self-organization without conservation: are neuronal avalanches generically critical? J. Stat. Mech., 2010:P02015, 2010. • [17] E. Martin, A. Shreim, and M. Paczuski. Activity-dependent branching ratios in stocks, solar X-ray flux, and the Bak-Tang-Wiesenfeld sandpile model. Phys. Rev. E, 81:016109, Jan 2010. • [18] A. Levina, J. M. Herrmann, and T. Geisel. Dynamical synapses causing self-organized criticality in neural networks. Nat. Phys., 3:857–860, 2007. • [19] J. Hesse and T. Gross. Self-organized criticality as a fundamental property of neural systems. Front. Syst. Neurosci., 8:1–5, 2014. • [20] O. Kinouchi and M. Copelli. Optimal dynamical range of excitable networks at criticality. Nat. Phys., 2:348–351, 2006. • [21] P. Bak. How Nature Works: The Science of Self-Organized Criticality. Oxford University Press, New York, 1997. • [22] S. A. Moosavi and A. Montakhab. Mean-field behavior as a result of noisy local dynamics in self-organized criticality: Neuroscience implications. Phys. Rev. E, 89:052139, May 2014.
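The role of the branching ratio as the evolving control parameter can be illustrated with a toy Galton-Watson branching process. This is a sketch for intuition only, not the SIRS model with dynamical synapses discussed above: at branching ratio σ = 1 the process is critical and avalanche sizes develop the heavy tail characteristic of the CDP picture, while for σ < 1 avalanches die out quickly with mean size 1/(1 − σ).

```python
import random

def avalanche_size(sigma, k=2, max_size=10**5, rng=random):
    """One avalanche of a Galton-Watson branching process: each active
    unit tries to activate k offspring, each with probability sigma/k,
    so sigma is the branching ratio (mean offspring per active unit)."""
    active, size = 1, 1
    while active and size < max_size:
        new = sum(1 for _ in range(active * k) if rng.random() < sigma / k)
        size += new
        active = new
    return size

random.seed(0)
sub = [avalanche_size(0.8) for _ in range(1000)]    # subcritical
crit = [avalanche_size(1.0) for _ in range(1000)]   # critical
mean_sub = sum(sub) / len(sub)     # theory: 1 / (1 - 0.8) = 5
mean_crit = sum(crit) / len(crit)  # grows with the size cutoff at criticality
print(mean_sub, mean_crit)
```

The critical mean is dominated by rare large avalanches, which is exactly why vanishing variance of the branching ratio around σ = 1 is the interesting quantity.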
https://stats.stackexchange.com/questions/383885/why-do-pca-loadings-given-by-sqrteigenvalueeigenvector-yield-correlations-bet?noredirect=1
# Why do PCA loadings given by sqrt(eigenvalue)*eigenvector yield correlations between PCs and original variables? I did a lot of reading in this blog and elsewhere about PCA, SVD, loadings, etc., but I still don't understand why loadings, which represent correlations between principal components and the original variables, are mathematically defined by loadings = eigenvector * sqrt(eigenvalue). It seems I just can't grasp it. Could somebody please explain the mathematics behind it to me? • This is only true if all the original variables were standardized prior to PCA. You can find a mathematical explanation e.g. in the beginning of my answer here stats.stackexchange.com/questions/104306 – amoeba Dec 20 '18 at 13:13 • Since a correlation is a number (between -1 and 1) and your definition of "loading" is a vector whose components could have arbitrarily large values, it isn't plausible to describe your loading as "representing correlation." – whuber Dec 20 '18 at 14:28 • @whuber The word "correlation" should be in plural. I edited. Other than that, the question makes total sense. – amoeba Dec 20 '18 at 14:59 • @amoeba Thanks a lot for your answer. The link you posted helped me to understand loadings, but I am still struggling with the mathematics. In your linked answer there is this equation to compute the cross-covariance matrix between original variables and standardized PCs, and I think this is exactly the answer to my question. It starts with 1/(N-1) * X(transposed) * (sqrt(N-1)*U). Where does this formula come from? I also don't understand the first transformation of the equation. It would be really great if you could explain this equation to me, especially the first step. – Concetta Dec 20 '18 at 15:09 • Cross-covariance matrix between matrices A and B (assuming both have centered columns) is A.transposed * B / n. Can you be more specific as to what you don't understand? Do you know what covariance is? Can you follow these matrix operations? 
I don't know what level of explanation you need. – amoeba Dec 20 '18 at 15:15
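A quick numerical check of the claim, assuming (as the first comment stresses) that the variables are standardized before PCA. This numpy sketch compares eigenvector * sqrt(eigenvalue) against directly computed correlations between each original variable and each PC score:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # correlated variables
X = (X - X.mean(axis=0)) / X.std(axis=0)                 # standardize first!

R = (X.T @ X) / len(X)               # correlation matrix of standardized X
eigval, eigvec = np.linalg.eigh(R)
loadings = eigvec * np.sqrt(eigval)  # loadings = eigenvector * sqrt(eigenvalue)

scores = X @ eigvec                  # principal component scores
corr = np.array([[np.corrcoef(X[:, i], scores[:, j])[0, 1]
                  for j in range(4)] for i in range(4)])
max_diff = np.abs(corr - loadings).max()  # ~0 up to floating-point error
print(max_diff)
```

The agreement is exact because cov(X_i, PC_j) = (R v_j)_i = λ_j v_{ij}, std(X_i) = 1, and std(PC_j) = sqrt(λ_j), so the correlation reduces to sqrt(λ_j) v_{ij}.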
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/12/6/b/a/
# Properties Label 12.6.b.a Level 12 Weight 6 Character orbit 12.b Analytic conductor 1.925 Analytic rank 0 Dimension 8 CM No Inner twists 4 # Related objects ## Newspace parameters Level: $$N$$ = $$12 = 2^{2} \cdot 3$$ Weight: $$k$$ = $$6$$ Character orbit: $$[\chi]$$ = 12.b (of order $$2$$ and degree $$1$$) ## Newform invariants Self dual: No Analytic conductor: $$1.92460583776$$ Analytic rank: $$0$$ Dimension: $$8$$ Coefficient field: $$\mathbb{Q}[x]/(x^{8} - \cdots)$$ Coefficient ring: $$\Z[a_1, a_2, a_3]$$ Coefficient ring index: $$2^{18}\cdot 3^{5}$$ Sato-Tate group: $\mathrm{SU}(2)[C_{2}]$ ## $q$-expansion Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\ldots,\beta_{7}$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form. $$f(q)$$ $$=$$ $$q + \beta_{2} q^{2} -\beta_{1} q^{3} + ( 1 - \beta_{3} ) q^{4} + ( \beta_{2} + \beta_{5} ) q^{5} + ( 3 - \beta_{4} - \beta_{5} + \beta_{6} + \beta_{7} ) q^{6} + ( -\beta_{1} + 2 \beta_{3} + \beta_{4} - 2 \beta_{6} - \beta_{7} ) q^{7} + ( 6 \beta_{1} + 2 \beta_{2} + 2 \beta_{4} - 2 \beta_{5} - \beta_{7} ) q^{8} + ( -3 - 2 \beta_{1} - 15 \beta_{2} + 6 \beta_{3} + \beta_{5} + 2 \beta_{6} - \beta_{7} ) q^{9} +O(q^{10})$$ $$q + \beta_{2} q^{2} -\beta_{1} q^{3} + ( 1 - \beta_{3} ) q^{4} + ( \beta_{2} + \beta_{5} ) q^{5} + ( 3 - \beta_{4} - \beta_{5} + \beta_{6} + \beta_{7} ) q^{6} + ( -\beta_{1} + 2 \beta_{3} + \beta_{4} - 2 \beta_{6} - \beta_{7} ) q^{7} + ( 6 \beta_{1} + 2 \beta_{2} + 2 \beta_{4} - 2 \beta_{5} - \beta_{7} ) q^{8} + ( -3 - 2 \beta_{1} - 15 \beta_{2} + 6 \beta_{3} + \beta_{5} + 2 \beta_{6} - \beta_{7} ) q^{9} + ( -34 + 14 \beta_{1} - 4 \beta_{3} - 4 \beta_{4} - 2 \beta_{6} - \beta_{7} ) q^{10} + ( -3 \beta_{1} - 32 \beta_{2} - \beta_{4} + 4 \beta_{7} ) q^{11} + ( -87 - 2 \beta_{1} + 6 \beta_{2} + 3 \beta_{3} + 6 \beta_{4} + 2 \beta_{5} + 4 \beta_{6} + 7 \beta_{7} ) q^{12} + ( 14 + 4 \beta_{1} - 12 \beta_{3} - 4 \beta_{6} - 2 
\beta_{7} ) q^{13} + ( -30 \beta_{1} - 8 \beta_{2} - 10 \beta_{4} + 2 \beta_{5} - 9 \beta_{7} ) q^{14} + ( 5 \beta_{1} + 96 \beta_{2} - 6 \beta_{3} + \beta_{4} + 6 \beta_{6} - 9 \beta_{7} ) q^{15} + ( 292 - 40 \beta_{1} + 4 \beta_{3} + 16 \beta_{4} - 8 \beta_{6} - 4 \beta_{7} ) q^{16} + ( 148 \beta_{2} - 12 \beta_{5} + 20 \beta_{7} ) q^{17} + ( 462 - 10 \beta_{1} - 3 \beta_{2} + 12 \beta_{3} - 12 \beta_{4} + 8 \beta_{5} - 2 \beta_{6} + 19 \beta_{7} ) q^{18} + ( 27 \beta_{1} - 12 \beta_{3} - 13 \beta_{4} + 12 \beta_{6} + 6 \beta_{7} ) q^{19} + ( 36 \beta_{1} - 36 \beta_{2} + 12 \beta_{4} + 20 \beta_{5} - 30 \beta_{7} ) q^{20} + ( -42 + 8 \beta_{1} - 237 \beta_{2} - 24 \beta_{3} - 13 \beta_{5} - 8 \beta_{6} - 32 \beta_{7} ) q^{21} + ( -1038 + 6 \beta_{1} + 32 \beta_{3} - 4 \beta_{4} + 6 \beta_{6} + 3 \beta_{7} ) q^{22} + ( 42 \beta_{1} - 256 \beta_{2} + 14 \beta_{4} + 32 \beta_{7} ) q^{23} + ( -1404 - 2 \beta_{1} - 78 \beta_{2} - 12 \beta_{3} + 2 \beta_{4} - 18 \beta_{5} - 24 \beta_{6} + 27 \beta_{7} ) q^{24} + ( -171 - 20 \beta_{1} + 60 \beta_{3} + 20 \beta_{6} + 10 \beta_{7} ) q^{25} + ( 48 \beta_{1} + 14 \beta_{2} + 16 \beta_{4} - 16 \beta_{5} - 40 \beta_{7} ) q^{26} + ( -6 \beta_{1} + 288 \beta_{2} + 36 \beta_{3} - 15 \beta_{4} - 36 \beta_{6} - 54 \beta_{7} ) q^{27} + ( 2406 + 88 \beta_{1} + 2 \beta_{3} - 48 \beta_{4} + 56 \beta_{6} + 28 \beta_{7} ) q^{28} + ( 247 \beta_{2} + 55 \beta_{5} + 24 \beta_{7} ) q^{29} + ( 3126 + 66 \beta_{1} + 24 \beta_{2} - 96 \beta_{3} + 36 \beta_{4} - 16 \beta_{5} - 14 \beta_{6} + 25 \beta_{7} ) q^{30} + ( -185 \beta_{1} + 22 \beta_{3} + 69 \beta_{4} - 22 \beta_{6} - 11 \beta_{7} ) q^{31} + ( -168 \beta_{1} + 264 \beta_{2} - 56 \beta_{4} - 72 \beta_{5} - 4 \beta_{7} ) q^{32} + ( 528 + 10 \beta_{1} - 141 \beta_{2} - 30 \beta_{3} + 67 \beta_{5} - 10 \beta_{6} - 31 \beta_{7} ) q^{33} + ( -4552 - 168 \beta_{1} - 112 \beta_{3} + 48 \beta_{4} + 24 \beta_{6} + 12 \beta_{7} ) q^{34} + ( -246 \beta_{1} - 160 \beta_{2} - 82 \beta_{4} + 20 
\beta_{7} ) q^{35} + ( -5259 + 100 \beta_{1} + 444 \beta_{2} - 21 \beta_{3} - 84 \beta_{4} + 52 \beta_{5} + 32 \beta_{6} + 2 \beta_{7} ) q^{36} + ( 806 - 4 \beta_{1} + 12 \beta_{3} + 4 \beta_{6} + 2 \beta_{7} ) q^{37} + ( 222 \beta_{1} + 48 \beta_{2} + 74 \beta_{4} + 30 \beta_{5} + 33 \beta_{7} ) q^{38} + ( -2 \beta_{1} - 288 \beta_{2} - 36 \beta_{3} + 96 \beta_{4} + 36 \beta_{6} + 54 \beta_{7} ) q^{39} + ( 6728 + 208 \beta_{1} - 24 \beta_{3} - 32 \beta_{4} - 112 \beta_{6} - 56 \beta_{7} ) q^{40} + ( -614 \beta_{2} - 102 \beta_{5} - 64 \beta_{7} ) q^{41} + ( 7386 - 86 \beta_{1} - 42 \beta_{2} + 276 \beta_{3} + 84 \beta_{4} - 32 \beta_{5} + 26 \beta_{6} - 67 \beta_{7} ) q^{42} + ( 551 \beta_{1} - 4 \beta_{3} - 185 \beta_{4} + 4 \beta_{6} + 2 \beta_{7} ) q^{43} + ( -132 \beta_{1} - 1052 \beta_{2} - 44 \beta_{4} + 76 \beta_{5} + 62 \beta_{7} ) q^{44} + ( -2208 - 68 \beta_{1} + 1029 \beta_{2} + 204 \beta_{3} - 155 \beta_{5} + 68 \beta_{6} + 182 \beta_{7} ) q^{45} + ( -8700 - 84 \beta_{1} + 256 \beta_{3} + 56 \beta_{4} - 84 \beta_{6} - 42 \beta_{7} ) q^{46} + ( 756 \beta_{1} + 1600 \beta_{2} + 252 \beta_{4} - 200 \beta_{7} ) q^{47} + ( -9372 - 336 \beta_{1} - 1464 \beta_{2} + 132 \beta_{3} + 24 \beta_{4} - 8 \beta_{5} + 56 \beta_{6} - 136 \beta_{7} ) q^{48} + ( -2693 + 164 \beta_{1} - 492 \beta_{3} - 164 \beta_{6} - 82 \beta_{7} ) q^{49} + ( -240 \beta_{1} - 171 \beta_{2} - 80 \beta_{4} + 80 \beta_{5} + 200 \beta_{7} ) q^{50} + ( 20 \beta_{1} - 1632 \beta_{2} - 168 \beta_{3} - 332 \beta_{4} + 168 \beta_{6} + 288 \beta_{7} ) q^{51} + ( 10526 - 320 \beta_{1} + 34 \beta_{3} + 128 \beta_{4} - 64 \beta_{6} - 32 \beta_{7} ) q^{52} + ( -1043 \beta_{2} - 19 \beta_{5} - 128 \beta_{7} ) q^{53} + ( 9765 - 342 \beta_{1} - 144 \beta_{2} - 288 \beta_{3} - 201 \beta_{4} + 147 \beta_{5} + 87 \beta_{6} - 174 \beta_{7} ) q^{54} + ( -812 \beta_{1} - 68 \beta_{3} + 248 \beta_{4} + 68 \beta_{6} + 34 \beta_{7} ) q^{55} + ( 612 \beta_{1} + 2572 \beta_{2} + 204 \beta_{4} + 180 \beta_{5} + 250 
\beta_{7} ) q^{56} + ( 5418 - 6 \beta_{1} + 1737 \beta_{2} + 18 \beta_{3} + 57 \beta_{5} + 6 \beta_{6} + 213 \beta_{7} ) q^{57} + ( -7822 + 770 \beta_{1} - 412 \beta_{3} - 220 \beta_{4} - 110 \beta_{6} - 55 \beta_{7} ) q^{58} + ( -1215 \beta_{1} + 1664 \beta_{2} - 405 \beta_{4} - 208 \beta_{7} ) q^{59} + ( -8088 + 52 \beta_{1} + 3180 \beta_{2} + 24 \beta_{3} + 316 \beta_{4} - 220 \beta_{5} - 128 \beta_{6} - 230 \beta_{7} ) q^{60} + ( 7454 - 164 \beta_{1} + 492 \beta_{3} + 164 \beta_{6} + 82 \beta_{7} ) q^{61} + ( -678 \beta_{1} - 88 \beta_{2} - 226 \beta_{4} - 326 \beta_{5} + 75 \beta_{7} ) q^{62} + ( -111 \beta_{1} - 1152 \beta_{2} + 342 \beta_{3} + 627 \beta_{4} - 342 \beta_{6} - 27 \beta_{7} ) q^{63} + ( 4816 - 672 \beta_{1} - 48 \beta_{3} + 64 \beta_{4} + 480 \beta_{6} + 240 \beta_{7} ) q^{64} + ( -2050 \beta_{2} + 318 \beta_{5} - 296 \beta_{7} ) q^{65} + ( 4170 + 1058 \beta_{1} + 528 \beta_{2} - 60 \beta_{3} - 228 \beta_{4} - 40 \beta_{5} - 134 \beta_{6} - 167 \beta_{7} ) q^{66} + ( 847 \beta_{1} + 304 \beta_{3} - 181 \beta_{4} - 304 \beta_{6} - 152 \beta_{7} ) q^{67} + ( 528 \beta_{1} - 4368 \beta_{2} + 176 \beta_{4} - 560 \beta_{5} + 200 \beta_{7} ) q^{68} + ( -11616 + 212 \beta_{1} - 138 \beta_{2} - 636 \beta_{3} + 470 \beta_{5} - 212 \beta_{6} - 182 \beta_{7} ) q^{69} + ( -3804 + 492 \beta_{1} + 160 \beta_{3} - 328 \beta_{4} + 492 \beta_{6} + 246 \beta_{7} ) q^{70} + ( 798 \beta_{1} - 2496 \beta_{2} + 266 \beta_{4} + 312 \beta_{7} ) q^{71} + ( 2568 + 1550 \beta_{1} - 5142 \beta_{2} - 600 \beta_{3} - 54 \beta_{4} + 278 \beta_{5} + 16 \beta_{6} + 19 \beta_{7} ) q^{72} + ( -16150 - 448 \beta_{1} + 1344 \beta_{3} + 448 \beta_{6} + 224 \beta_{7} ) q^{73} + ( -48 \beta_{1} + 806 \beta_{2} - 16 \beta_{4} + 16 \beta_{5} + 40 \beta_{7} ) q^{74} + ( 111 \beta_{1} + 1440 \beta_{2} + 180 \beta_{3} - 480 \beta_{4} - 180 \beta_{6} - 270 \beta_{7} ) q^{75} + ( -10782 - 24 \beta_{1} - 138 \beta_{3} + 176 \beta_{4} - 504 \beta_{6} - 252 \beta_{7} ) q^{76} + ( 1074 
\beta_{2} - 206 \beta_{5} + 160 \beta_{7} ) q^{77} + ( -10470 - 144 \beta_{1} + 144 \beta_{2} + 288 \beta_{3} + 274 \beta_{4} - 398 \beta_{5} - 322 \beta_{6} + 182 \beta_{7} ) q^{78} + ( -1537 \beta_{1} - 442 \beta_{3} + 365 \beta_{4} + 442 \beta_{6} + 221 \beta_{7} ) q^{79} + ( -336 \beta_{1} + 6416 \beta_{2} - 112 \beta_{4} + 368 \beta_{5} - 904 \beta_{7} ) q^{80} + ( 25065 + 24 \beta_{1} - 3222 \beta_{2} - 72 \beta_{3} - 822 \beta_{5} - 24 \beta_{6} - 312 \beta_{7} ) q^{81} + ( 19340 - 1428 \beta_{1} + 920 \beta_{3} + 408 \beta_{4} + 204 \beta_{6} + 102 \beta_{7} ) q^{82} + ( -165 \beta_{1} - 800 \beta_{2} - 55 \beta_{4} + 100 \beta_{7} ) q^{83} + ( 20982 - 2452 \beta_{1} + 7188 \beta_{2} + 138 \beta_{3} - 348 \beta_{4} + 188 \beta_{5} - 128 \beta_{6} + 550 \beta_{7} ) q^{84} + ( 28672 + 720 \beta_{1} - 2160 \beta_{3} - 720 \beta_{6} - 360 \beta_{7} ) q^{85} + ( 1158 \beta_{1} + 16 \beta_{2} + 386 \beta_{4} + 1094 \beta_{5} - 531 \beta_{7} ) q^{86} + ( 371 \beta_{1} + 4704 \beta_{2} - 618 \beta_{3} - 329 \beta_{4} + 618 \beta_{6} - 279 \beta_{7} ) q^{87} + ( -18792 + 1328 \beta_{1} + 824 \beta_{3} - 480 \beta_{4} + 112 \beta_{6} + 56 \beta_{7} ) q^{88} + ( 11830 \beta_{2} - 234 \beta_{5} + 1508 \beta_{7} ) q^{89} + ( -31434 - 2986 \beta_{1} - 2208 \beta_{2} - 564 \beta_{3} + 348 \beta_{4} + 272 \beta_{5} + 310 \beta_{6} + 835 \beta_{7} ) q^{90} + ( 1642 \beta_{1} - 668 \beta_{3} - 770 \beta_{4} + 668 \beta_{6} + 334 \beta_{7} ) q^{91} + ( -2376 \beta_{1} - 9208 \beta_{2} - 792 \beta_{4} + 344 \beta_{5} - 164 \beta_{7} ) q^{92} + ( -43266 - 260 \beta_{1} - 5217 \beta_{2} + 780 \beta_{3} + 31 \beta_{5} + 260 \beta_{6} - 526 \beta_{7} ) q^{93} + ( 48264 - 1512 \beta_{1} - 1600 \beta_{3} + 1008 \beta_{4} - 1512 \beta_{6} - 756 \beta_{7} ) q^{94} + ( 1182 \beta_{1} - 3072 \beta_{2} + 394 \beta_{4} + 384 \beta_{7} ) q^{95} + ( 41424 - 712 \beta_{1} - 9336 \beta_{2} + 1488 \beta_{3} - 376 \beta_{4} - 200 \beta_{5} + 224 \beta_{6} + 812 \beta_{7} ) q^{96} + ( -49006 + 
260 \beta_{1} - 780 \beta_{3} - 260 \beta_{6} - 130 \beta_{7} ) q^{97} + ( 1968 \beta_{1} - 2693 \beta_{2} + 656 \beta_{4} - 656 \beta_{5} - 1640 \beta_{7} ) q^{98} + ( -267 \beta_{1} + 6336 \beta_{2} - 180 \beta_{3} + 723 \beta_{4} + 180 \beta_{6} - 702 \beta_{7} ) q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$8q + 8q^{4} + 24q^{6} - 24q^{9} + O(q^{10})$$ $$8q + 8q^{4} + 24q^{6} - 24q^{9} - 272q^{10} - 696q^{12} + 112q^{13} + 2336q^{16} + 3696q^{18} - 336q^{21} - 8304q^{22} - 11232q^{24} - 1368q^{25} + 19248q^{28} + 25008q^{30} + 4224q^{33} - 36416q^{34} - 42072q^{36} + 6448q^{37} + 53824q^{40} + 59088q^{42} - 17664q^{45} - 69600q^{46} - 74976q^{48} - 21544q^{49} + 84208q^{52} + 78120q^{54} + 43344q^{57} - 62576q^{58} - 64704q^{60} + 59632q^{61} + 38528q^{64} + 33360q^{66} - 92928q^{69} - 30432q^{70} + 20544q^{72} - 129200q^{73} - 86256q^{76} - 83760q^{78} + 200520q^{81} + 154720q^{82} + 167856q^{84} + 229376q^{85} - 150336q^{88} - 251472q^{90} - 346128q^{93} + 386112q^{94} + 331392q^{96} - 392048q^{97} + O(q^{100})$$ Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{8} - x^{7} + 7 x^{6} + 6 x^{5} - 11 x^{4} - 73 x^{3} + 223 x^{2} - 768 x + 912$$: $$\beta_{0}$$ $$=$$ $$1$$ $$\beta_{1}$$ $$=$$ $$($$$$-9436 \nu^{7} + 30607 \nu^{6} - 62704 \nu^{5} - 231 \nu^{4} - 423169 \nu^{3} - 610094 \nu^{2} - 8744833 \nu + 4238706$$$$)/953786$$ $$\beta_{2}$$ $$=$$ $$($$$$-5694 \nu^{7} - 2252 \nu^{6} - 73822 \nu^{5} - 139528 \nu^{4} - 251008 \nu^{3} - 81692 \nu^{2} - 673868 \nu + 5096288$$$$)/476893$$ $$\beta_{3}$$ $$=$$ $$($$$$-8084 \nu^{7} + 72516 \nu^{6} + 129032 \nu^{5} + 488016 \nu^{4} + 1228252 \nu^{3} + 3220092 \nu^{2} - 3343360 \nu - 1507299$$$$)/476893$$ $$\beta_{4}$$ $$=$$ $$($$$$98436 \nu^{7} - 85191 \nu^{6} + 753588 \nu^{5} + 1046763 \nu^{4} - 436119 \nu^{3} - 10020102 \nu^{2} + 30261501 \nu - 78787206$$$$)/953786$$ $$\beta_{5}$$ $$=$$ $$($$$$-70008 \nu^{7} + 87389 \nu^{6} - 396078 \nu^{5} - 1183333 \nu^{4} + 723461 \nu^{3} + 1647902 \nu^{2} - 
24231757 \nu + 43615544$$$$)/476893$$ $$\beta_{6}$$ $$=$$ $$($$$$-164992 \nu^{7} - 286197 \nu^{6} - 1331716 \nu^{5} - 2951719 \nu^{4} - 3169901 \nu^{3} + 4344198 \nu^{2} + 2922499 \nu + 53217788$$$$)/953786$$ $$\beta_{7}$$ $$=$$ $$($$$$-83652 \nu^{7} - 63236 \nu^{6} - 485532 \nu^{5} - 1647824 \nu^{4} - 1820252 \nu^{3} + 5776852 \nu^{2} - 14368388 \nu + 26548384$$$$)/476893$$ $$1$$ $$=$$ $$\beta_0$$ $$\nu$$ $$=$$ $$($$$$2 \beta_{4} + 3 \beta_{3} + 18 \beta_{2} - 6 \beta_{1} + 9$$$$)/72$$ $$\nu^{2}$$ $$=$$ $$($$$$3 \beta_{7} - 3 \beta_{6} - 3 \beta_{5} - 5 \beta_{4} + 3 \beta_{2} - 12 \beta_{1} - 117$$$$)/72$$ $$\nu^{3}$$ $$=$$ $$($$$$-21 \beta_{7} + 12 \beta_{6} + 18 \beta_{5} - 22 \beta_{4} - 24 \beta_{3} - 198 \beta_{2} - 54 \beta_{1} - 684$$$$)/144$$ $$\nu^{4}$$ $$=$$ $$($$$$-7 \beta_{7} + 9 \beta_{6} - 7 \beta_{5} - 5 \beta_{4} + 6 \beta_{3} - 33 \beta_{2} + 48 \beta_{1} + 273$$$$)/24$$ $$\nu^{5}$$ $$=$$ $$($$$$72 \beta_{7} - 12 \beta_{6} + 24 \beta_{5} + 130 \beta_{4} + 69 \beta_{3} - 198 \beta_{2} + 54 \beta_{1} + 7299$$$$)/72$$ $$\nu^{6}$$ $$=$$ $$($$$$111 \beta_{7} - 402 \beta_{6} + 120 \beta_{5} + 496 \beta_{4} + 846 \beta_{3} + 6744 \beta_{2} - 1134 \beta_{1} - 18108$$$$)/144$$ $$\nu^{7}$$ $$=$$ $$($$$$-21 \beta_{7} - 648 \beta_{6} - 174 \beta_{5} - 1096 \beta_{4} - 1329 \beta_{3} - 180 \beta_{2} - 1932 \beta_{1} - 30987$$$$)/72$$ ## Character Values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/12\mathbb{Z}\right)^\times$$. $$n$$ $$5$$ $$7$$ $$\chi(n)$$ $$-1$$ $$-1$$ ## Embeddings For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label. 
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 11.1 −2.29661 + 1.35416i −2.29661 − 1.35416i 0.713157 − 2.93555i 0.713157 + 2.93555i 1.67284 + 0.274906i 1.67284 − 0.274906i 0.410606 − 2.17330i 0.410606 + 2.17330i −5.41443 1.63829i 4.17995 + 15.0176i 26.6320 + 17.7408i 73.1202i 1.97116 88.1596i 51.8209i −115.132 139.687i −208.056 + 125.545i 119.792 395.904i 11.2 −5.41443 + 1.63829i 4.17995 15.0176i 26.6320 17.7408i 73.1202i 1.97116 + 88.1596i 51.8209i −115.132 + 139.687i −208.056 125.545i 119.792 + 395.904i 11.3 −1.91937 5.32128i −14.9174 4.52459i −24.6320 + 20.4270i 35.2908i 4.55538 + 88.0639i 190.564i 155.976 + 91.8667i 202.056 + 134.990i −187.792 + 67.7362i 11.4 −1.91937 + 5.32128i −14.9174 + 4.52459i −24.6320 20.4270i 35.2908i 4.55538 88.0639i 190.564i 155.976 91.8667i 202.056 134.990i −187.792 67.7362i 11.5 1.91937 5.32128i 14.9174 + 4.52459i −24.6320 20.4270i 35.2908i 52.7086 70.6951i 190.564i −155.976 + 91.8667i 202.056 + 134.990i −187.792 67.7362i 11.6 1.91937 + 5.32128i 14.9174 4.52459i −24.6320 + 20.4270i 35.2908i 52.7086 + 70.6951i 190.564i −155.976 91.8667i 202.056 134.990i −187.792 + 67.7362i 11.7 5.41443 1.63829i −4.17995 15.0176i 26.6320 17.7408i 73.1202i −47.2352 74.4637i 51.8209i 115.132 139.687i −208.056 + 125.545i 119.792 + 395.904i 11.8 5.41443 + 1.63829i −4.17995 + 15.0176i 26.6320 + 17.7408i 73.1202i −47.2352 + 74.4637i 51.8209i 115.132 + 139.687i −208.056 125.545i 119.792 395.904i ## Inner twists Char. orbit Parity Mult. Self Twist Proved 1.a Even 1 trivial yes 3.b Odd 1 yes 4.b Odd 1 yes 12.b Even 1 yes ## Hecke kernels There are no other newforms in $$S_{6}^{\mathrm{new}}(12, [\chi])$$.
http://math.stackexchange.com/questions/292116/how-to-find-kappa-to-minimize-integral-i-frac1-kappa-int-limits-0
# How to find $\kappa$ to minimize integral $I = \frac{1}{\kappa}\int\limits_{0}^{T} \mathrm{exp}\left(-f(\kappa,x)\right) \mathrm{d}x$ I am trying to find the value $\kappa \in (0,1)$ that minimizes the integral \begin{aligned} I = \frac{1}{\kappa}\int\limits_{0}^{T} \mathrm{exp}\left(-f(\kappa,x)\right) \mathrm{d}x, \end{aligned} where $f(\kappa,x)$ is nonnegative in the interval of integration and has a minimum at some $\kappa_x = \kappa(x)$ (i.e. the minimum of $f(\kappa,x)$ is a function of the integration variable $x$). The problem is that I can't find a way to rigorously show that some specific value $\kappa$ minimizes the integral $I$ (but not necessarily the integrand or $f(\kappa,x)$). I have plotted $I$ versus $\kappa$, and the minimum exists -- it is above 0.5, but this must be proved. I tried using Euler's equation (without the differential part, as I don't have the derivative $\kappa'$), but it again provides the optimal function $\kappa(x)$, not a single value. My current thought is to find the average $\bar{\kappa} = \mathbb{E}_x\left\{\kappa(x)\right\}$, but this seems to be guesswork. Does anyone have any suggestions on how to find the value $\kappa$ (not the function $\kappa(x)$) that minimizes $I$? A bit more information about $f(\kappa,x)$: the function looks rather complex, \begin{aligned} f(\kappa,x) = \frac{\kappa (1-\kappa)}{T - \kappa x} \mathrm{exp}\left(-A\frac{T - \kappa x}{\kappa (1-\kappa)} - B \frac{T^2-xT}{T - \kappa x} \right), \end{aligned} where $A,B$ are positive constants. Note that for $x = 0$ the optimal $\kappa = 0.5$; for $x>0$ I believe $\kappa>0.5$. - Can you provide the function $f(\kappa,x)$, as this may give people additional insight? –  Daryl Feb 1 '13 at 15:13 What is $y$?${}$ –  Antonio Vargas Feb 1 '13 at 17:32 That's a typo - corrected –  Anvar Feb 1 '13 at 21:47 Is $\kappa$ a parameter or a function of $x$? –  Occupy Gezi Feb 27 '13 at 17:49 $\kappa$ is a parameter, independent of $x$. 
–  Anvar Mar 3 '13 at 0:56
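Lacking a closed form, the minimizing $\kappa$ can at least be located numerically. The sketch below does a brute-force grid search over $\kappa$ with a trapezoid rule in $x$; the values A = B = T = 1 are arbitrary illustrative choices (the question leaves the constants free), so the location of the minimum — possibly at the boundary — will differ for other constants.

```python
import numpy as np

A, B, T = 1.0, 1.0, 1.0   # illustrative constants, not taken from the question

def f(kappa, x):
    """The integrand's exponent f(kappa, x) as given in the question."""
    g = kappa * (1.0 - kappa) / (T - kappa * x)
    return g * np.exp(-A / g - B * (T**2 - x * T) / (T - kappa * x))

def I(kappa, n=2001):
    """I(kappa) = (1/kappa) * integral_0^T exp(-f(kappa, x)) dx (trapezoid)."""
    x = np.linspace(0.0, T, n)
    y = np.exp(-f(kappa, x))
    return ((y[:-1] + y[1:]) / 2 * np.diff(x)).sum() / kappa

kappas = np.linspace(0.01, 0.99, 197)
values = np.array([I(k) for k in kappas])
k_star = kappas[values.argmin()]     # grid minimizer of I
print(k_star, values.min())
```

Since exp(-f) ≤ 1, the 1/κ prefactor forces I → ∞ as κ → 0, so any minimum lies away from the left endpoint; this is visible in the grid values.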
http://math.stackexchange.com/questions/501775/didos-problem-with-euler-equations
# Dido's problem with Euler equations I'm considering Dido's problem: Consider 2 differentiable arcs $C$ and $C_0$ in $\mathbb{R}^2$ from the point $P$ to $Q$ and back. We keep $C_0,P,Q$ fixed, and want to choose the arc $C$ such that, among all arcs of a specified length larger than $|PQ|$, the area $A$ enclosed by the 2 curves is maximized. $$A = \frac{1}{2}\int_{C\cup C_0}xdy-ydx$$ Solutions to this problem using variational calculus are sketched in (1), (2) I want to show that a necessary condition is that the curvature $\kappa$ is constant $$\kappa = \frac{\dot{x}\ddot{y}-\dot{y}\ddot{x}}{(\dot{x}^2+\dot{y}^2)^{3/2}}$$ Of course, knowing that the solution to this problem is a circular arc, we know that it is. But I want to derive this... It appears that the Lagrangian of this problem (see (2)) is $$\frac{1}{2}(x\dot{y}-y\dot{x})+\lambda\sqrt{\dot{x}^2+\dot{y}^2}$$ And in (1) we see that using Euler's equations \begin{align*} \dot{y}\kappa+\lambda x =0\\ -\dot{x}\kappa + \lambda y =0 \end{align*} These can be combined to see that $\lambda(x\dot{x}+y\dot{y}) =0$ with solution $x^2+y^2 = C$. But I simply want to show that constant $\kappa$ is a necessary condition, and I can't see how. How can we derive this? Thanks for any enlightening remark. - It follows when you take the Euler-Lagrange equation of $L: =\frac{1}{2}(xy'-yx')+\lambda \sqrt{x'^2+y'^2}.$ So begin with $\frac{d}{dt}\frac{d}{dx'}L - \frac{d}{dx} L =0 \\ \frac{d}{dt}\frac{d}{dy'}L - \frac{d}{dy} L =0.$ After some manipulation you will end up with $y' \left(1+\lambda \frac{x'y''-y'x''}{(x'^2+y'^2)^{\frac{3}{2}}} \right)=0 \\ x' \left(1+\lambda \frac{x'y''-y'x''}{(x'^2+y'^2)^{\frac{3}{2}}} \right)=0$ Since $x'$ and $y'$ cannot be zero everywhere, unless $|PQ|=0$, it follows that $\frac{x'y''-y'x''}{(x'^2+y'^2)^{\frac{3}{2}}}=-\frac{1}{\lambda}$ - No need to parametrize. 
At the beginning itself a single function $y(x)= y_2(x)-y_1(x)$ can be taken between $C$ and $C_0$ for the same $x$ or $y$ bounds as in direct integration, where the difference $(y_2 -y_1)$ can be treated as a single ordinate. The Lagrangian becomes $L(y,y'(x)) : y(x) + \lambda \sqrt{(1 + y'^2)} dx$, which on application of the Euler-Lagrange equation gives the curve of constant curvature, the circular arc. -
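As a quick sanity check on the curvature formula quoted in the question (purely illustrative, not part of the variational derivation): evaluating $\kappa$ by finite differences along a circular arc of radius $R$ returns the constant $1/R$, as the Euler-Lagrange argument predicts.

```python
import numpy as np

R = 2.0
t = np.linspace(0.0, np.pi / 2, 1001)     # parametrize a quarter circle
x, y = R * np.cos(t), R * np.sin(t)

# finite-difference derivatives with respect to the parameter t
dt = t[1] - t[0]
xd, yd = np.gradient(x, dt), np.gradient(y, dt)
xdd, ydd = np.gradient(xd, dt), np.gradient(yd, dt)

# kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2)
kappa = (xd * ydd - yd * xdd) / (xd**2 + yd**2) ** 1.5
interior = kappa[5:-5]                    # edges suffer from one-sided stencils
print(interior.min(), interior.max())     # both should be close to 1/R = 0.5
```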
https://www.darwinproject.ac.uk/letter/DCP-LETT-2996A.xml
# From Charles Lyell   24 November 1860 Nov. 24 1860. My dear Darwin, In former editions in speculating on age of Uddevalla & still more elevated beds of glacial period in Norway & Sweden, I have assumed an average rate of rise as 2$\frac{1}{2}$ feet in a century. see Manual. p. 119.—1 125 feet take 5000 years to rise or to sink. I wish to keep to my old standard or estimate unless shown to be objectionable as applied to countries like Greenland & Sweden in full movement. a mere guess at an average rate. at p. 120 I give 27,500 years for 700 ft of upheaval in recent times. But I expressly guard myself in same page by saying I make no allowance for pauses or for oscillations of level (of the minor kind.) What I want to do in the new edition is to make another conjecture & to allow for the pauses.2 When the amount is reversed as if (which I believe with Trimmer Ramsay Jamieson & others) Scotland & Wales have moved first down & then up again 2200 feet or more,3 then I conceive the chief pauses would be before the downward was converted into an upward movement. Successive sea-beaches & terraces & inland cliffs mark long pauses. Smith of Jordanhill thinks he has a cliff which took more than 100,000 years to cut, subsequently to the glacial re-elevation in Scotland.4 Now I propose to conjecture keeping on the safe side & not exposing myself to the charge of exaggerating the probable time, that 4 parts of Europe are stationary for one which moves at the rate of 2$\frac{1}{2}$ feet in a hundred years, or that the stationary area exceeds that in motion as 4 to 1. & 4 expresses the period of rest or the pauses where 1 in a given region expresses the movement. 
Therefore if Scotland has first gone down 2250 feet after the period of land glaciers, as I believe this would take 90,000 years & the re-elevation rather more because it went up higher than it’s present level, say 100,000 years, but then I must give 400,000 additional (in round numbers) for the intervening periods of rest. Thus the last oscillation is about 50 feet in Scotland in each direction, since the 2 vast movements of 2250 down & up. But when we try to estimate the time required for the 2 movements we encounter, as Smith of Jordanhill observes, 2000 years since the Romans built the Pictish wall5 & find that we do not get back to the close of the last oscillation. Then I regard this 2000 years (and we know not how many 1000 before) as a part of the great excess of stationary condition. I also consider the successive beaches cut at Glasgow in the canoe-bearing sands all before the era of the Roman Wall as portions of the same excess of 4 to 1.6 I am aware that the North Cape moves, or is said to move, 5 or 6 feet in a century & D Forbes thinks that Chili has gone up from 40 to 60 feet in 350 years, but then north of Arica, he thinks there has been a subsiding area, & about Arica, a stationary one as proved by Indian tumuli & mummies buried near the old shore level & elevated since.7 But I look on S. America as rather exceptional. It would be balanced by other more inactive regions of oscillation. The average in Scandinavia would not I think exceed 2$\frac{1}{2}$ feet.8 I am however much more in doubt as to the comparative areas of rest, contrasted with those of movement. I once made them as 9 to 1. If I could stimulate the geographers to make objections it would do good. Perhaps the contemporaneous inland oscillations, especially in mountain chains, may be greater than the sea-coast ones. But we may leave those speculations out for the present. There may be scarcely any areas in an absolute state of rest but we cannot yet take account of minute quantities. 
I wish you to reflect on my principle, & if possible say if 4. to 1. is a reasonable conjecture. Large plateaux of denudation & inland sea cliffs are monuments of immense pauses.

[table]

During this period the whole of the glacial period & the present establishment of provinces of species has occurred— the mammalian fauna greatly changed, but the shells very little[9]

The last oscillation of Scotland of about 50 feet in each direction wd take 4000 years & for pauses 16,000 more which would give 20,000 years. The Glasgow canoes & polished celts come into this brief era, but the Somme valley flint implements probably into some part of the great period of reelevation—during which in Scotland the erratic boulder clay was getting denuded & my Forfar gravel beds manufactured.

Chas Lyell

## Footnotes

1. C. Lyell 1855, p. 119. The new edition of Lyell's Manual of elementary geology, retitled Elements of geology, was published in 1865 (C. Lyell 1865). CD's copy of the work is in the Darwin Library–Down.

2. Although Lyell alluded to these points in his new edition, they were in fact discussed in greater detail in his book on the antiquity of man (C. Lyell 1863, chaps. 3, 13, and 14).

4. James Smith, commonly known as Smith of Jordanhill (Scotland), wrote to Lyell about this point (Wilson ed. 1970, pp. 502–5). Lyell copied the information into his scientific journal immediately preceding the draft of this letter to CD. Smith had established himself as an acknowledged expert on the elevation of Scottish coastlines with the publication of an influential paper on the subject in 1836 (J. Smith 1836).

5. The 'Picts' wall' (OED) was the common name of the great wall built by the Romans between AD 120 and 130 (EB 1970); it is now commonly known as Hadrian's Wall. Running through Cumberland and Northumberland, it was constructed to help defend Roman Britain from the Picts and the Scots.
6. The discovery of ancient canoes buried in the silt around Glasgow was described by John Buchanan in 1855 and discussed by Lyell in C. Lyell 1863, pp. 48–9. Lyell thought some of them might date from the Bronze Age.

7. David Forbes, recently returned from a geological expedition to Bolivia and Peru, read a paper on the geology of the area at a meeting of the Geological Society on 21 November 1860. The figures given by Lyell in the letter were probably ascertained in conversation with Forbes. Forbes's results were published in D. Forbes 1861. The geology of the area near the town of Arica, in Peru, is discussed in D. Forbes 1861, p. 11.

8. This figure is repeated in C. Lyell 1863, p. 58.

9. Lyell discussed the point in C. Lyell 1865, pp. 158–60.

## Bibliography

EB: The Encyclopædia Britannica. A dictionary of arts, sciences, literature and general information. 11th edition. 29 vols. Cambridge: Cambridge University Press. 1910–11.

Forbes, James David. 1861. On the climate of Edinburgh for fifty-six years, from 1795 to 1850, deduced principally from Mr Adie's observations; with an account of other and earlier registers. Transactions of the Royal Society of Edinburgh 22: 327–56.

Lyell, Charles. 1865. Elements of geology, or the ancient changes of the earth and its inhabitants as illustrated by geological monuments. 6th edition, revised. London: John Murray.

OED: The Oxford English dictionary. Being a corrected re-issue with an introduction, supplement and bibliography of a new English dictionary. Edited by James A. H. Murray, et al. 12 vols. and supplement. Oxford: Clarendon Press. 1970. A supplement to the Oxford English dictionary. 4 vols. Edited by R. W. Burchfield. Oxford: Clarendon Press. 1972–86. The Oxford English dictionary. 2d edition. 20 vols. Prepared by J. A. Simpson and E. S. C. Weiner. Oxford: Clarendon Press. 1989. Oxford English dictionary additional series. 3 vols. Edited by John Simpson et al. Oxford: Clarendon Press. 1993–7.

Smith, James. 1836. On indications of changes in the relative level of sea and land in the west of Scotland. Proceedings of the Geological Society of London 2 (1833–8): 427–9. [Vols. 4,8]

Trimmer, Joshua. 1853. On the origin of the soils which cover the Chalk of Kent. Pt 3. Quarterly Journal of the Geological Society of London 9: 286–96. [Vols. 5,8]

## Summary

CL has calculated that elevation and subsidence of certain formations in Sweden and Norway take place at the rate of 2 1/2 feet per century. He now proposes to estimate the age of a bed by including a conjecture that pauses occur in the oscillations in the ratio of 4 periods of stasis to one of movement. Applying this formula to Scotland, the last subsidence and re-elevation would be 590,000 years and the age of the beds with human implements would be 20,000 years.

## Letter details

Letter no.: DCP-LETT-2996A
From: Charles Lyell, 1st baronet
To: Charles Robert Darwin
Source of text: Kinnordy MS, Charles Lyell's journal VII, pp. 40–8
Physical description: 9pp
# Difference between revisions of "Gmaas"

### Gmaas Facts

- Gmaas has the following powers: trout, carp, earthworm, and catfish. Gmaas never uses any of them, because Gmaas has an infinite number of powers.
- Gmaas was once spotted in Minecraft chewing a tree (Leon2000 reference :))
- Gmaas once broke a Nokia.
- Gmaas would like to go to Taco Bell, but Gmaas goes to Wendy's instead. No one knows why.
- Gmaas' real name is Grayson Maas. He is the CCO of AoPS.
- Gmaas has a pet pufferfish named Pafferfash. EDIT: he also has a goldfish named Sylgar (D&D reference).
- Gmaas has colonized the universe.
- Gmaas is a person. EDIT: we do not know this. Many believe Gmaas is a cat.
- Gmaas started Pastafarianism.
- Gmaas can eat your hand. He probably won't. Hands taste bad.
- According to the Interuniversal Gmaas Society, 17.548 percent of the universe's population thinks that Gmaas is spelled "Gmass".
- Gmaas likes to surprise unsuspecting people.
- Gmaas loves sparkly mechanical pencils.
- The bible of Gmaas is this page, and people go to worship Gmaas in the Maas.
- Gmaas is said to taste like a furry meatball (this was said by BigSams).
- Gmaas is Johnny Johnny's Papa. EDIT: we'll never know.
- Gmaas is in you, and Gmaas is you, and Gmaas is in me.
- Gmaas always remembers to dab on dem haters.
- Gmaas is both living and nonliving.
- Gmaas is quasi-omnipotent. He cannot comprehend the stupidity of humans.
- ALL HAIL THE GMAAS CLOUD.
- Some things are beyond possible human comprehension. Nothing is beyond Gmaas.
- Gmaas ate Azazoth and resides at his throne when he feels like it.
- Leon2000 is Gmaas' Rabbi.
- Gmaas is both singular and plural.
- Gmaas eats disbelievers like donutvan for breakfast. (Yet I'm somehow still alive. Do you think Gmaas actually ate my soul?)
- Gmaas knows Jon Snow's parents.
- Gmaas is Aegon VI Targaryen.
- Gmaas created the Marvel Universe.
- All editors will be escorted to Gmaas heaven after they die.
- Gmaas won all the wars.
- Gmail was named after Gmaas.
- Google was named after Gmaas.
- Gmaas disapproves of the DC universe.
- Gmass ate mathleticguyyy.
- Gmaas won Battle For Dream Island, and also Total Drama Island.
- Somehow Gmaas exists at all places at the same time.
- Gmaas can lift with Gmaas' will.
- Gmaas is also the rightful heir to the Iron Throne.
- Gmaas was once a card in Clash Royale, but it was too OP so they had to remove it.
- Gmaas is actually Teemo in League of Legends, because Gmaas made LoL and they made an honorary Gmaas character.
- Gmaas started the Game of Thrones.
- Gmaas will also end the Game of Thrones.
- Gmaas killed Joffrey.
- Gmaas is Azor Ahai.
- Gmaas is the prince who was promised.
- Gmaas created everything after puking.
- Gordan's last name was named after Gmaas.
- Gmaas is more powerful than Gohan.
- Gmaas is over 90000 years old.
- Gmaas has 100000000000000000000000000000000000000000000000 cat lives, maybe even more. Gmaas has 0 dog lives.
- Who wrote Harry Potter? None other than Gmaas himself.
- Gmaas created the catfish.
- Gmaas has proven that the universe is infinite and has traveled to the edge of the universe in 0.0000000000000000000000000...0000001 seconds.
- Gmaas founded Target, but then Gmaas sued them for making the mascot look like a dog when it was supposed to look like Gmaas. EDIT: They went broke because Gmaas sued them, but then Gmaas ate a fudge popsicle that made him super hyper, and in his hyperness he made Target not broke anymore.
- Everyone has a bit of Gmaas inside them.
- Gmaas likes to eat popsicles. Especially the fudge ones that get him hyper.
- When Gmaas is hyper, Gmaas runs across Washington D.C. grabbing unsuspecting pedestrians, steals their phones, hacks into them, and downloads PUBG onto their poor phones.
- Gmaas' favorite cereal is Froot Loops. Gmaas thinks it tastes like unicorns jumping on rainbows.
- Gmaas thinks that the McChicken has way too much mayonnaise.
- Gmaas is a champion pillow-fighter.
- Gmaas colonized Mars.
- Gmaas also colonized Jupiter, Pluto, and several other galaxies. Gmaas cloned some little Gmaas robots (with Gmaas' amazingly robotic skill of coding) and put them all over a galaxy called Gmaasalaxy. EDIT: Gmaas colonized the universe.
- Gmaas has the ability to make every device play "The Duck Song" at will.
- "The Duck Song" was copied off of the "Gmaas Song", but the animators thought Gmaas wasn't catchy enough.
- Gmaas once caught the red dot and ate it.
- Gmaas' favorite color is neon klfhsadkhfd.
- Gmaas can create wormholes and false vacuums.
- Gmaas is a champion PvP Minecraft player.
- Gmaas is the coach of Tfue, Ninja, Muselk, and Myth.
- Gmass caught a CP 6000 Mewtwo with a normal Pokeball in Pokemon Go.
- Gmaas founded Costco.
- Gmaas does not need to attend the FIFA World Cup. If Gmaas did, Gmaas would automatically beat any team.
- Gmaas can solve any puzzle instantly besides the 3x3 Rubik's Cube.
- Gmaas caught a CP 20,000 Mewtwo with a normal Pokeball and no berries, blindfolded, first try, in Pokemon Go.
- When Gmaas flips coins, they always land tails. Except once, when Gmaas was making a bet with Zeus.
- On Gmaas's math tests, Gmaas always gets $\infty$.
- Gmaas' favorite number is pi. It's also one of Gmaas' favorite foods.
- Gmaas' burps created all gaseous planets.
- Gmass beat Luke Robatille in an epic showdown of catnip consumption.
- Gmaas' wealth is unknown, but it is estimated to be way more than Scrooge's.
- Gmaas has a summer house on Mars.
- Gmaas has a winter house on Jupiter.
- The Earth and all known planets are simply Gmaas' hairballs.
- Gmaas attended Harvard, Yale, Stanford, MIT, UC Berkeley, Princeton, Columbia, and Caltech at the same time using a time-turner.
- Gmaas also attended Hogwarts and was a prefect. EDIT: Gmaas was headmaster.
- Mrs. Norris is Gmaas's archenemy.
- Gmaas is a demigod and attends Camp Half-Blood over summer.
Gmaas is the counselor for the Apollo cabin. Because cats are demigod counselors too. - Gmaas has completed over 2,000 quests, and is very popular throughout Camp Half-Blood. Gmaas has also been to Camp Jupiter. EDIT: Gmaas is the child of all the gods (including the minor gods) - Percy Jackson was only able to complete his quests because Gmaas helped him - Gmaas painted the Mona Lisa, The Last Supper, and A Starry Night. - Gmass knows that their real names are Gmassa Lisa, The Last Domestic Meal, and Far-away Light. - Gmaas actually attended all the Ivy Leagues. - I am Gmaas. - I too am Gmaas. - In 2018, Gmaas once challenged Magnus Carlsen to a chess match. Gmaas won every game. - But it is I who is Gmaas. - Gmaas is us all. - Gmaas is all of us yet none of us. - Gmaas was captured by the infamous j3370 in 2017 but was released due to sympathy. EDIT: j3370 only captured his concrete form, his abstract form cannot be processed by a feeble human brain. - Gmaas's fur is White, Black, Grey, Yellow, Red, Blue, Green, Brown, Pink, Orange, and Purple all at the same time. - Gmaas crossed the event horizon of a black hole and ended up in the AoPS universe. - Gmaas crossed the Delaware River with Washington. - Gmaas also crossed the Atlantic with the pilgrims. − - If you are able to capture a Gmaas hair, Gmaas will give you some of his Gmaas power. − - Chuck Norris makes Gmaas jokes. − EDIT: The jokes all praise Gmaas − - Gmaas is also the ruler of Oceania, Eastasia, and Eurasia. − - Gmaas killed Big Brother by farting on him. Though Gmaas was caught by the Ministry of Love, Gmaas escaped easily. − EDIT: Gmaas destroyed the Ministry of Love. − - Gmaas was not affected by Thano's snap, in fact Gmaas is the creator of the Infinity Stones. − - Everyone knows that Gmaas is a god. − - Gmaas also owns Animal Farm. Napoleon was Gmaas servant. − - Gmaas is the only one who knows where Amelia Earhart is. − - Gmaas is the only cat that has been proven transcendental. 
− - Gmaas happened to notice http://artofproblemsolving.com/community/c402403h1598015p9983782 and is not very happy about it. − - Grumpy cat reads Gmaas memes. EDIT: Grumpy cat then steals them and claims they're his. Gmaas isn't very happy about that, either. − - The real reason why AIME cutoffs aren't out yet is because Gmaas refused to grade them due to too much problem misplacement. − - Gmaas dueled Grumpy Cat and won. Gmaas wasn't trying. − - Gmaas sits on the statue of Pallas and says forevermore. − - Gmaas is a big fan of Edgar Allan Poe, because he is actually Poe. − - Gmaas does merely not use USD. He owns it. − - Gmaas really knows that Roblox is awful and does not play it seriously, thank Gmaas our lord is sane − - The only god is Gmaas. − - In 2003, Gmaas used elliptical curves to force his reign over AoPS. − - "Actually, my name is spelled "GMAAS". − - Gmaas is the smartest living being in the universe. − - It was Gmaas who helped Sun Wukong on the Journey to the West. − - Gmaas is the real creator of Wikipedia. − - It is said Gmaas could hack any website he desires. − - Gmaas is the basis of Greek mythology and also Egyptian mythology. − - Gmaas once sold Google to a man for around $12$ dollars! − - Gmaas uses a HP printer. It is specifically a HP 21414144124124142141414412412414214141441241241421414144124124142141414412412414 printer. − - Gmaas owns all AoPS staff including Richard Rusczyk. − - Richard Rusczyk is one of Gmaas' many code names. − - Gmaas was there when Yoda was born. − EDIT: Gmaas is Yoda's father − - Gmaas's true number of lives left is unknown; however, Gmaas recently confirmed that he had at least one left. Why doesn't Gmaas have so many more lives than other cats? The power of Gmaas. Edit: This is all not true. − - sseraj once spelled Gmaas as gmASS on accident in Introduction to Geometry (1532). 
− - Gmaas actively plays Roblox, and is a globally ranked professional gamer: https://www.roblox.com/users/29708533/profile...but he hates Roblox. − - Gmaas has beaten Chuck Norris and The Rock and John Cena all together in a fight. − - Gmaas is a South Korean, North Korean, Palestinian, Israeli, U.S., Soviet, Russian, and Chinese citizen at the same time. EDIT: Gmaas has now been found out to be a citizen of every country in the world. Gmaas seems to enjoy the country of AOPS best, however. − - "I am sand" destroyed Gmaas in FTW. − - sseraj posted a picture of Gmaas with a game controller in Introduction to Geometry (1532). − - Gmaas plays roblox mobile edition and likes Minecraft, Candy Crush, and Club Penguin Rewritten. He also $\boxed{\text{loves}}$ Catch that fish. − - Gmaas is Roy Moore's horse in the shape of a cat. − - Gmaas is a known roblox/club penguin rewritten player and is a legend at it. He has over $289547987693$ robux and $190348$ in CPR. − - This is all hypothetical. − - EDIT: This is all factual. − - Gmaas's real name is Princess. He has a sibling named Rusty/Fireheart/Firestar − (Warrior cats reference). − - He is capable of salmon powers, according to PunSpark (ask him). − - The Gmaas told Richard Rusczyk to make AoPS. − - The Gmaas is everything. Yes, you are part of the Gmaas-Dw789. − - The Gmaas knows every dimension up to 9999999999999999999999999999999999999999999999999999999999999999999999999999999999th dimension. − - He went into a black hole, entered the white hole, got into dimension 15 where people drink tea every day, and stole 154 buckets of tea. − - Gmaas is "TIRED OF PEOPLE ADDING TO HIS PAGE!!" (Maas 45). − - Gmaas is Gmaas who is actually Gmaas. − - Gmaas has a penguin servant named sm24136 who runs GMAASINC. The penguin may or may not be dead. His other penguin is called PotatoPenguin19. He is most definitely alive. − - Gmaas owns a TARDIS, and can sometimes be seen traveling to other times for reasons unknown. 
− - Gmaas knows how to hack into top secret aops community pages. − - Gmaas was a river clan cat who crossed the event horizon of a black hole and came out the other end! − - Gmaas is king of the first men, the anduls. − - Gmaas is a well known professor at MEOWston Academy. − - Gmaas is also the CEO of Caterpillar. − - Gmaas drinks Starbucks everyday. − - Gmaas is a Tuna addict, along with other, more potent fish such as Salmon and Trout. − - Gmaas won the reward of being cutest and fattest cat ever--he surpassed grumpy cat. (He also out-grumped grumpy cat!!!) − - Last sighting 1665 Algebra-A 3/9/18 at 9:08 PM. − - The owner of sseraj, not pet. − - The embodiment of life and universe and beyond. − - Gmaas watches memes of Gmaas. − - After Death became the GOD OF HYPERDEATH and obtained over 9000 souls. − -Gmaas invented Rick Rolling. − - Gmaas's real name is Pablo Diego José Francisco de Paula Juan Nepomuceno María de los Remedios Cipriano de la Santísima Trinidad Ruiz y Picasso [STOP RICK ROLLING. (Source)]. − - Gmaas is a certified Slytherin. − - Gmaas once slept on sseraj's private water bed, so sseraj locked him in the bathroom. − - Gmaas has superpowers that allow him to overcome the horrors of Mr. Toilet (while he was locked in the bathroom). − - Gmaas once sat on an orange on a pile of AoPS books, causing an orange flavored equation explosion. − - Gmaas once conquered the moon and imprinted his face on it until asteroids came. − - Gmaas is a supreme overlord who must be given $10^{1000000000000000000000^{1000000000000000000000}}$ minecraft DIAMONDS. − - Gmaas is the Doctor Who lord; he sports Dalek-painted cars and eats human finger cheese and custard, plus black holes. − - Gmaas is everyone's favorite animal. − - He lives with sseraj. − - Gmaas is my favorite pokemon. − - Gmaas dislikes number theory but enjoys geometry. − - Gmaas is cool. 
− - He is often overfed (with probability $\frac{3972}{7891}$), or malnourished (with probability $\frac{3919}{7891}$) by sseraj. − - He has $$\sum_{k=1}^{267795} [k(k+1)]+GMAAS+GMAAAAAAAS$$ supercars, excluding the Purrari and the 138838383 Teslas. − - He employs AoPS. − - He is a Gmaas with yellow fur and white hypnotizing eyes. − - He has the ability to divide by zero. − - He was born with a tail that is a completely different color from the rest of his fur. − - His stare is very hypnotizing and effective at getting table scraps. − - His stare also turned Medusa into rock, King Midas into gold, and sseraj into sseraj. − - He sometimes appears several minutes before certain classes start as an admin. − - He died from too many Rubik's cubes in an Introduction to Algebra A class, but got revived by the Dark Lord at 00:13:37 AM the next day. − - It is uncertain whether or not he is a cat, or is merely some sort of beast that has chosen to take the form of a cat (specifically a Persian Smoke.) − - Actually, Gmaas is a cat. Gmaas said so. And science also says so. − - He is distant relative of Mathcat1234. − - He cannot be Force choked. Darth Vader learned that the hard way... − - He is very famous now, and mods always talk about him before class starts. − - His favorite food is AoPS textbooks because they help him digest problems. − - Gmaas tends to reside in sseraj's fridge. − - Gmaas once ate all sseraj's fridge food, so sseraj had to put him in the freezer. − - The fur of Gmaas can protect him from the harsh conditions of a freezer. − - Then he ate all the food in sseraj's freezer. − - Gmass once demanded Epic Games to give him 5,000,000 V-bucks for his 569823rd birthday. − - This is why he does not have an Epic Games account anymore. − - Gmaas created Epic games, though. − - Gmaas sightings are not very common. There have only been 30 confirmed sightings of Gmaas in the wild. − - Gmaas is a sage omniscient cat. 
− - He is looking for suitable places other than sseraj's fridge to live in. − - Places where Gmaas sightings have happened: ~The Royal Scoop ice cream store in Bonita Beach Florida ~MouseFeastForCats/CAT 8 Mouse Apartment 1083 -Prealgebra 2 (1440) ~Alligator Swamp A 1072 ~Alligator Swamp B 1073 ~Prealgebra A (1488) − ~Introduction to Algebra A (1170) − ~Introduction to Algebra B (1529) ~Welcome to Panda Town Gate 1076 − ~Welcome to Gmaas Town Gate 1221 − ~Welcome to Gmaas Town Gate 1125 − ~33°01'17.4"N 117°05'40.1"W (Rancho Bernardo Road, San Diego, CA) − ~The other side of the ice in Antarctica − ~Feisty Alligator Swamp 1115 − ~Introduction to Geometry 1221 (Taught by sseraj) − ~Introduction to Counting and Probability 1142 ~Feisty-ish Alligator Swamp 1115 (AGAIN) − ~Intermediate Counting and Probability 1137 − ~Intermediate Counting and Probability 1207 − ~Posting student surveys − ~USF Castle Walls - Elven Tribe 1203 ~Dark Lord's Hut 1210 − ~AMC 10 Problem Series 1200 − ~Intermediate Number Theory 1138 − ~Intermediate Number Theory 1476 ~Introduction To Number Theory 1204. Date:7/27/16. ~Algebra B 1112 ~Intermediate Algebra 1561 7:17 PM 12/11/16 ~Nowhere Else, Tasmania ~Earth Dimension C-137 ~Geometry 1694 at 1616 PST military time. There was a boy riding him, and he seemed extremely miffed. ~Intermediate Algebra 1710 9/24/2018 − - These have all been designated as the most glorious sections of AoPSland now (especially the USF castle walls), but deforestation is so far from threatens the wild areas (i.e. Alligator Swamps A&B). − - Gmaas has also been sighted in Olympiad Geometry 1148. − - Gmaas has randomly been known to have sent his minions into Prealgebra 2 1163. However, the danger is passed, that class is over. − - Gmaas once snuck into sseraj's email so he could give pianoman24 an extension in Introduction to Number Theory 1204. This was 1204 minutes after his sighting on 7/27/16. 
− - Gmaas also has randomly appeared on top of the USF's Tribal Bases(he seems to prefer the Void Tribe). However, the next day there is normally a puddle in the shape of a cat's underbelly wherever he was sighted. Nobody knows what this does. − EDIT: Nobody has yet seen him atop a tribal base yet. − - Gmaas are often under the disguise of a penguin or cat. Look out for them. − EDIT: Gmaas rarely disguises himself as a penguin. − - Many know that leafy stole dream island. In truth, After leafy stole it, Gmaas stole it himself. (BFDI Reference) − - He lives in the shadows. Is he a dream? Truth? Fiction? Condemnation? Salvation? AoPS site admin? He is all these things and none of them. He is... Gmaas. − EDIT: He IS an AoPS site admin. − - If you make yourself more than just a cat... if you devote yourself to an ideal... and if they can't stop you... then you become something else entirely. A LEGEND. Gmaas now belongs to the ages. − - Is this the real life? Is this just fantasy? No. This is Gmaas, the legend. − - Aha!! An impostor!! − (look at the acronym). − GREATER MANCHESTER ARCHAEOLOGICAL ADVISORY SERVICE... GMAAS Illuminatis confirms87 − - EDIT. The above fact is slightly irrelevant. − - Gmaas might have been viewing (with a $\frac{99999.\overline{9}}{100000}$ chance) the Ultimate Survival Forum. He (or is he a she?) is suspected to be transforming the characters into real life. Be prepared to meet your epic swordsman self someday. If you do a sci-fi version of USF, then prepare to meet your Overpowered soldier with amazing weapons one day. − - Gmaas is neither he nor she, Gmaas is above gender. − - Gmaas is love, Gmaas is life − - The name of Gmaas is so powerful, it radiates Deja Mew. − - Gmaas is on the list of "Elusive Creatures." If you have questions or want the full list, contact moab33. − - Gmaas can be summoned using the $\tan(90)$ ritual. Draw a pentagram and write the numerical value of $\tan(90)$ in the middle, and he will be summoned. 
− - EDIT: The above fact is incorrect. math101010 has done this and commented with screenshot proof at the below link, and Gmaas was not summoned. − - EDIT EDIT: The above 'proof' is non-conclusive. math101010 had only put an approximation. − - Gmaas's left eye contains the singularity of a black hole. (Only when everyone in the world blinks at the same time within a nano-nano second.) − - EDIT: That has never happened and thus it does not contain the singularity of a black hole. − - Lord Grindelwald once tried to make Gmaas into a Horcrux, but Gmaas's fur is Elder Wand protected and secure. − - Despite common belief, Harry Potter did not defeat Lord Voldemort. Gmaas did. − - The original owner of Gmaas is Gmaas. − - Gmaas was not the fourth Peverell brother, but he ascended into a higher being and now he resides in the body of a cat, as he was before. Is it a cat? We will know. (And the answer is YES.) EDIT: he wasn't the fourth Peverell brother, but he was a cousin of theirs, and he was the one who advised Ignotus to give up his cloak. − - It is suspected that Gmaas may be ordering his cyber hairballs to take the forums, along with microbots. − - Gmaas rarely frequents the headquarters of the Illuminati. He was their symbol for one yoctosecond, but soon decided that the job was too low for his power to be wasted on. − - It has been wondered if Gmaas is the spirit of Obi-Wan Kenobi or Anakin Skywalker in a higher form, due to his strange capabilities and powers. − - Edit: Gmaas is neither Anakin Skywalker or Obi-Wan Kenobi as he is trillions of years older. − - Gmaas has a habit of sneaking into computers, joining The Network, and exiting out of some other computer. − - It has been confirmed that gmaas uses gmewal as his email service. − - Gmaas enjoys wearing gmean shorts. − - Gmaas has a bright orange tail with hot pink spirals. Or he had for 15 minutes. 
That was the 15 minutes after he tried to play Taylor Swift music on his 34,000 year old MP3 player in front of sseraj, who, at the time, was handling a bucket of dangerous, radioactive material. − - Gmaas is well known behind his stage name, Michael Stevens (also known as Vsauce XD), or his page name, Purrshanks. EDIT: Crookshanks was his brother. − - Gmaas rekt sseraj at 12:54 June 4, 2016 UTC time zone. And then the Doctor chased him. − - Gmaas watchers know that the codes above are NOT years. They are secret codes for the place. But if you've edited that section of the page, you know that. − - Gmaas is a good friend of the TARDIS and the Millenium Falcon. − - In the Dark Lord's hut, gmaas was seen watching Doctor Who. Anyone who has seen the Dark Lord's hut knows that both Gmaas and the DL (USF code name of the Dark Lord) love BBC. How Gmaas gave him a TV may be lost to history. And it has been lost. − - The TV has been noticed to be invincible. Many USF weapons, even volcano rings, have tried (and failed) to destroy it. The last time it was seen was on a Kobold display case outside of a mine. The display case was crushed, and a report showed a spy running off with a non-crushed TV. − - The reason why Dacammel left the USF is that gmaas entrusted his TV to him, and not wanting to be discovered by LF, Cobra, or Z9, dacammel chose to leave the USF, but is regretting it, as snakes keep spawning from the TV. − - EDIT: The above fact is somewhat irrelevant. − - EDIT EDIT. Dacammel gave the TV back to gmaas, and he left the dark side and their cookies alone. − - Gmaas is a Super Duper Uper Cat Time Lord. He has $57843504$ regenerations and has used $3$. $$9\cdot12\cdot2\cdot267794=57843504$$. − - Gmaas highly enjoys destroying squeaky toys until he finds the squeaky part, then destroys the squeaky part. − - Gmaas loves to eat turnips. At $\frac{13}{32}$ of the sites he was spotted at, he was seen with a turnip. 
− - Gmaas has a secret hidden garden full of turnips under sseraj's house. − - sseraj is "gmaas's person." − - Gmaas has three tails, one for everyday life, one for special occasions, and one that's invisible. − - Gmaas is a dangerous creature. If you ever meet him, immediately join his army or you will be killed. − - Gmaas is in alliance with the Cult of Skaro. How did he get an alliance with ruthless creatures that want to kill everything in sight? Nobody knows (except him), not even the leader of the Cult of Skaro. − - Gmaas lives in Gallifrey and in Gotham City (he has sleepovers with Batman). − - Gmaas is an excellent driver. EDIT: he was to one who designed the driver's license test, although he didn't bother with the permit test. − -The native location of Gmaas is the twilight zone. − - Donald Trump once sang "All Hail the Chief" to Gmaas, 3 days after being sworn in as US President. − - Gmaas likes to talk with rrusczyk from time to time. − - Gmaas can shoot fire from his smelly butt. − - Gmaas is the reason why the USF has the longest thread on AoPS. − - Gmass is an avid watcher of the popular T.V. show "Bernie Sanders and the Gauntlet of DOOM". − - sseraj, in 1521 Introduction to Number Theory, posted an image of Gmaas after saying "Who wants to see 5space?" at around 5:16 PM Mountain Time, noting Gmaas was "also 5space". − - EDIT: he also did it in Introduction to Algebra A once. − - Gmaas is now my HD background on my Mac. − - Gmaas is not retarded. − - In 1521 Into to Number Theory, sseraj posted an image of a 5space Gmaas fusion. (First sighting) − - Also confirmed that Gmaas doesn't like ketchup because it was the only food left the photo. − - In 1447 Intro to Geometry, sseraj posted a picture of Gmaas with a rubik's cube suggesting that Gmaas's has an average solve time of $-GMAAS$ seconds. − - Gmaas beat Superman in a fight with ease. − - Gmaas was an admin of Roblox. − EDIT: He created Roblox. 
− - Gmaas traveled around the world, paying so much $MONEY$ just to eat :D − - Gmaas is a confirmed Apex predator and should not be approached, unless in a domestic form. − Summary: − - When Gmaas subtracts $0.\overline{99}$ from $1$, the difference is greater than $0$. − - Gmaas was shown to have fallen on Wed Aug 23 2017: https://ibb.co/bNrtmk https://ibb.co/jzUDmk − - Gmaas died on August ,24, 2017, but fortunately IceParrot revived him after about 2 mins of being dead. − - The results of the revival are top secret, and nobody knows what happened. − - sseraj, in 1496 Prealgebra 2, said that Gmaas is Santacat. − - sseraj likes to post a picture of gmaas in every class he passes by. − - sseraj posted a picture of Gmaas as an Ewok, suggesting he resides on the moon of Endor. Unfortunately, the moon of Endor is also uninhabitable ever since the wreckage of the Death Star changed the climate there. It is thought Gmaas is now wandering space in search for a home. − EDIT: What evidence is there Endor was affected? Other Ewoks still live there. − EDIT EDIT: also, glass doesn't care. He can live there no matter what the climate is. − - Gmaas is the lord of the pokemans. − - Gmaas can communicate with, and sometimes control any other cats, however this is very rare, as cats normally have a very strong will. − - Picture of Gmaas http://i.imgur.com/PP9xi.png − - Known by Mike Miller. − - Gmaas got mad at sseraj once, so he locked him in his own freezer. − - Then, sseraj decided to eat all of Gmaas's hidden turnips in the freezer as punishment. − - Gmass ate slester. − - A gmass bite is 7000 psi. − - haha0201 met him. − - haha0201 comfirms that gmass can talk. − - gmass likes to eat fur. − - gmass is bigger than an ant. − - gmass lives somewhere over the rainbow. − - Gmaas is an obviously omnipotent cat. − - ehawk11 met him. − - sseraj is known to post pictures of Gmaas on various AoPS classrooms. 
It is not known if these photos have been altered with the editing program called "Photoshop".

- sseraj has posted pictures of Gmaas in "Intro to Algebra", before class started, with the title "caption contest". Anyone who posted a caption mysteriously vanished in the middle of the night. EDIT: This has happened many times, including in Introduction to Geometry 1533, among other active classes. The person writing this (Poeshuman) did participate, and did not disappear. (You could argue Gmaas is typing this through his/her account...)
- Gmaas has once slept in your bed and made it gray.
- It is rumored that rrusczyk is actually Gmaas in disguise.
- Gmaas is suspected to be a Mewtwo in disguise.
- Gmaas is a cat but has characteristics of every other animal on Earth.
- Pegasus was modeled off Gmaas.
- Gmaas is the ruler of the universe and has been known to be the creator of the species "Gmaasians".
- There is a rumor that Gmaas is starting a poll.
- Gmaas is a rumored past ThunderClan cat who ran away, founded GmaasClan, then became a kittypet.
- There is a rumored sport called "Gmaas Hunting" where people try to successfully capture Gmaas in the wild with video/camera/eyes. Strangely, no one has been able to do this, and those that have have mysteriously disappeared into the night. Nobody knows why. The person who is writing this (g1zq) has tried Gmaas Hunting, but has never been successful.
- Gmaas burped and caused an earthquake.
- Gmaas once drank from your pretty teacup.
- GMAAS IS HERE.... PURRRRRRRRRRRRRRRRRRRR
- Gmaas made, and currently owns, the Matrix.
- The above fact is true. Therefore, this is an illusion.
- Gmaas is the reason Salah will become better than Ronaldo.
- Who is Gmaas, really? Gmaas is a heavenly being.
- The Illuminati was a manifestation of Gmaas, but Gmaas decided the Illuminati was not great enough for his godly self.
- jlikemath has met Gmaas and Gmaas is his best friend.
- EDIT: jlikemath is Gmaas's great-great-great-great-great-grandson.
- Gmaas hates K-pop.
- Gmaas read Twilight. EDIT: ...and SURVIVED.
- There is a secret code that, when put into Super Smash, would make Gmaas a playable character. Too bad he didn't say it.
- Gmaas was a tribute in one of the Hunger Games and came out a Victor and now lives in District 4.
- Gmaas is the only known creature to survive the destruction of Earth in 99999999 years.
- 5space (side admin) is another one of Gmaas's slaves.

### Gmaas photos

- He was also sighted here.

### Gmaas in Popular Culture

- Currently, a book about the adventures of Gmaas is being written (by themoocow). It is aptly titled "The Adventures of Gmaas". Sorry, this was a rickroll troll.
- BREAKING NEWS: tigershark22 has found a possible cousin to Gmaas in Raymond Feist's book Silverthorn. They are mountain dwellers, gwali. Not much is known about them either, and when someone asks, "What are gwali?" the customary answer "This is gwali" is returned. Scientist 5space is now looking into it.
- Sullymath and themoocow are also writing a book about Gmaas.
- Oryx the Mad God is actually Gmaas wearing a suit of armor. This explains why he is never truly killed.
- Potential sighting of Gmaas [1]
- Gmaas has been spotted in some Doctor Who and Phineas and Ferb episodes, such as Aliens of London, Phineas and Ferb Save Summer, Dalek, Rollercoaster, Rose, Boom Town, The Day of The Doctor, Candace Gets Busted, and many more.
- Gmaas can be found in many places in Plants vs. Zombies Garden Warfare 2 and Bloons TD Battles.
- Gmaas was an uncredited actor in the Doctor Who story Knock Knock, playing a Dryad. How he shrunk, we will never know.
- oadaegan is also writing a story about him. He is continuing the book that was started by JpusheenS. When he is done he will post it here.
- Gmaas is a time traveler from 0.9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999 B.C.
- No one knows if Gmaas is a Mr. Mime in a cat skin, the other way around, or just a downright combination of both.
- In it, it mentions these four links as things Gmaas is having trouble with (specifically technical difficulties). What could it mean? Links:
- Another possible Gmaas sighting [2]
- $Another$ sighting? [3]
- Yet another Gmaas sighting? [4]
- Gmaas has been sighted several times on the Global Announcements forum.
- Gmaas uses the following transportation: http://cdn.artofproblemsolving.com/images/3/6/8/368da4e615ea3476355ee3388b39f30a48b8dd48.jpg
- When Gmaas was mad, he started World Wars 1 & 2. It is only because of Gmaas that we have not had World War 3.
- Gmaas is the only cat to have been proved irrational and transcendental, though we suspect all cats fall in the first category.
- Gmaas plays Geometry Dash and shares an account with Springhill; his username is D3m0nG4m1n9.
- Gmaas has beaten every demon in Geometry Dash, along with their unnerfed versions and every upcoming demon too.
- Gmaas likes to whiz on the wilzo.
- Gmaas has been spotted in AMC 8 Basics.
- Gmaas is cool.
- Gmaas has a card that does over 9000000 dmg.
- Gmaas is a skilled swordsman who should not be mistaken for Puss in Boots. Some say he even trained the mysterious and valiant Meta Knight.
- Kirby once swallowed Gmaas. Gmaas had to spit him out.
- Gmaas was the creator of Pokémon, and his Pokémon card can OHKO anyone in one turn. He is invisible and he will always move first.
- Gmaas beat Dongmin in The Genius Game Seasons 1, 2, 3, 4, 5, 6, and 7.
- Gmaas has five letters. Pizza also has five letters. Pizzas are round. Eyes are round. There is an eye in the Illuminati symbol. iLLuMiNaTii cOnFiRmEdd.
- Gmaas knows both 'table' and 'tabular' in LaTeX, and can do them in his sleep.
- Gmaas hates crawdads with the passion of a thousand burning stars.
- Gmaas does not hate cheddar cheese. But he doesn't love it either.
- Gmaas is a cat and not a cat.
- Gmaas was born on the sun. EDIT: Not the sun, the sunS. He was born on all the suns at once.
- Gmaas eats tape.
- Gmaas likes bubble gum.
- Thomas Edison did not invent the lightbulb; Gmaas did.
- Gmaas eats metal.
- Gmaas is over 9000 years old! EDIT: this is just a DBZ reference, and bears no relation to his true age.
- Gmaas started the Iron Age.
- Gmaas made the dinosaurs go extinct.
- Gmaas created Life...
- Gmaas created AoPS. EDIT: AoPS was actually born out of a small fraction of Gmaas's abstract reality, and only the sheer amount of math can keep it here. (It is also rumored that when he reclaims it, the USF will be deleted, as that is where 83% of the factions of his abstract reality live, and when people leave the USF, more and more escapes.)
- Gmaas does not like Roblox.
- Gmaas told Steve Jobs to start a company.
- Gmaas invented Geometry Dash.
- Gmaas got to $\infty$ in Flappy Bird.
- Gmaas invented Helix Jump.
- Gmaas can play Happy Birthday on the violin.
- Gmaas has mastered Paganini.
- Gmaas discovered Atlantis after one dive underwater.
- Gmaas made a piano with 89 keys.
- Gmaas can see the future and change it.
- Gmaas has every superpower you can imagine. EDIT: Gmaas has more superpowers than you can imagine.
- Gmaas made a violin with 9 strings.
- Gmaas can somehow read 5 books at once.
- Gmaas eats rubber bands.
- Gmaas married Mrs. Norris. EDIT: Mrs. Norris is his second-worst enemy and his wife.
- Gmaas can fly faster than anything.
- Grumpy Cat is his son. EDIT: Grumpy Cat is his worst enemy and his son.
- Gmaas eats paper.
- Gmaas likes lollipops.
- Gmaas nibbles on pencils.
- Gmaas is alive.
- Gmaas is Ninja in Fortnite.
EDIT: Gmaas possesses Ninja in Fortnite.

- Gmaas is love, Gmaas is life.
- Gmaas taught rrusczyk everything he knows about mathematics.
- Gmaas can control matter by looking at it.
- Gmaas created AoPS.
- Gmaas is a quantum particle.
- Gmaas is alive and dead at the same time.
- Gmaas likes snow.
- GMAAS LIKES THE NEW AOPS UPDATE!

$\text{Gmaas is watching you\dots}$
https://learn.careers360.com/school/question-derive-equivalent-resistance-of-a-series-connection-and-the-parallel-connection-of-three-resistors-35148/
# Derive equivalent resistance of a series connection and the parallel connection of three resistors

Resistors in series

The potential difference across the combination is $V=V_1+V_2+V_3$. Since the current flowing through each resistor is the same,

$V=IR_1+IR_2+IR_3$

If the total resistance of the circuit is $R$, then $V=IR$; therefore

$IR=IR_1+IR_2+IR_3 \quad \text{or} \quad R=R_1+R_2+R_3$

The net resistance of a series combination is the sum of the individual resistances.

Resistors in parallel

In a parallel circuit, the voltage across the resistors remains the same but the current gets distributed. That is,

$I=I_1+I_2+I_3$

If $R$ is the total resistance of the circuit, then $I=\frac{V}{R}$. Therefore

$\frac{V}{R}=\frac{V}{R_1}+\frac{V}{R_2}+\frac{V}{R_3} \quad \text{or} \quad \frac{1}{R}=\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}$
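As a quick numeric sanity check of the two formulas, here is a short Python sketch; the resistor values (2 Ω, 3 Ω, 6 Ω) are illustrative choices, not from the text:

```python
# Equivalent resistance of resistors in series and in parallel.

def series(*resistances):
    """R = R1 + R2 + ... for resistors in series."""
    return sum(resistances)

def parallel(*resistances):
    """1/R = 1/R1 + 1/R2 + ... for resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

r1, r2, r3 = 2.0, 3.0, 6.0
print(series(r1, r2, r3))    # 11.0 ohms
print(parallel(r1, r2, r3))  # 1.0 ohm, since 1/2 + 1/3 + 1/6 = 1
```

Note that the parallel combination is always smaller than the smallest individual resistance, while the series combination is always larger than the largest.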
https://icube-publis.unistra.fr/index.php/4-HTH16
### Localized Scheduling for End-to-End Delay Constrained Low Power Lossy Networks with 6tisch

The IoT expects to exploit IEEE802.15.4e-TSCH, designed for wireless industrial sensor networks. This standard relies on techniques such as channel hopping and bandwidth reservation to ensure both energy savings and reliable transmissions. The 6TiSCH working group currently proposes to exploit the RPL routing protocol on top of the IEEE802.15.4-2012-TSCH layer. Since many applications may require low end-to-end delay (e.g. alarms), we propose here a distributed algorithm to schedule the transmissions while upper-bounding the end-to-end delay. Our strategy is based on strata to reserve time-bands for each depth in the routing structure constructed by RPL. By allocating a sufficient number of timeslots for the possible retransmissions, we guarantee that any packet is delivered during one single slotframe, wherever the source is located. Experiments on a large-scale testbed prove the relevance of this approach to reduce the end-to-end delay while minimizing the number of collisions, which is prejudicial to reliability in multihop networks.

I. Hosni, N. Hamdi. IEEE Symposium on Computers and Communications (ISCC), pages 507-512, 2016. International conference with proceedings.

Localized Scheduling for End-to-End Delay Constrained Low Power Lossy Networks with 6tisch, IEEE Symposium on Computers and Communications (ISCC), Messina, Italy, pages 507-512, June 2016, doi:10.1109/ISCC.2016.7543789. (Submitted papers: 403; accepted papers: 159 full papers; acceptance rate: 39%.)

Research team: Réseaux. Platform: INeT Lab.

@Inproceedings{4-HTH16, author = {Hosni, I. and Theoleyre, F.
and Hamdi, N.},
  title = {Localized Scheduling for End-to-End Delay Constrained Low Power Lossy Networks with 6tisch},
  booktitle = {IEEE Symposium on Computers and Communications (ISCC)},
  pages = {507-512},
  month = {Jun},
  year = {2016},
  organization = {IEEE},
  type = {Selective conference},
  doi = {10.1109/ISCC.2016.7543789},
  x-international-audience = {Yes},
  x-language = {EN},
  url = {http://icube-publis.unistra.fr/index.php/4-HTH16}
}
https://www.projecteuclid.org/euclid.ejp/1464816806
## Electronic Journal of Probability

### Distance Estimates for Poisson Process Approximations of Dependent Thinnings

Dominic Schuhmacher

#### Abstract

It is well known that, under certain conditions, gradual thinning of a point process on $R^d_+$, accompanied by a contraction of space to compensate for the thinning, leads in the weak limit to a Cox process. In this article, we apply discretization and a result based on Stein's method to give estimates of the Barbour-Brown distance $d_2$ between the distribution of a thinned point process and an approximating Poisson process, and evaluate the estimates in concrete examples. We work in terms of two, somewhat different, thinning models. The main model is based on the usual thinning notion of deleting points independently according to probabilities supplied by a random field. In Section 4, however, we use an alternative thinning model, which can be more straightforward to apply if the thinning is determined by point interactions.

#### Article information

Source: Electron. J. Probab., Volume 10 (2005), paper no. 5, 165-201.
Dates: Accepted 28 February 2005. First available in Project Euclid: 1 June 2016.
Permanent link to this document: https://projecteuclid.org/euclid.ejp/1464816806
Digital Object Identifier: doi:10.1214/EJP.v10-237
Mathematical Reviews number (MathSciNet): MR2120242
Zentralblatt MATH identifier: 1071.60034
Rights: This work is licensed under a Creative Commons Attribution 3.0 License.

#### Citation

Schuhmacher, Dominic. Distance Estimates for Poisson Process Approximations of Dependent Thinnings. Electron. J. Probab. 10 (2005), paper no. 5, 165-201. doi:10.1214/EJP.v10-237. https://projecteuclid.org/euclid.ejp/1464816806
http://mathoverflow.net/feeds/question/77838
# Matrices satisfying certain pair-wise constraints

Asked on MathOverflow, 2011-10-11:

Consider given pairs of variables $a_{ir1},a_{ir2}\in \mathbb{R}^{m \times m}$ and $a_{jr1},a_{jr2}\in \mathbb{R}^{m \times m}$, where $r \in \{1,2,\cdots,t\}$, and consider the constraints:

$\sum_{r=1}^{t}[a_{ir1}a_{jr2}+a_{ir2}a_{jr1}]=\sum_{r=1}^{t}[a_{jr1}a_{ir2}+a_{jr2}a_{ir1}]=0$

$\sum_{r=1}^{t}[a_{ir1}a_{ir2}+a_{ir2}a_{ir1}]=\sum_{r=1}^{t}[a_{jr1}a_{jr2}+a_{jr2}a_{jr1}]=I$

with $i \ne j$ and $i,j \in \{1,2,\cdots,n\}$.

Let $f(n,t)$ denote the smallest matrix size admitting such constraints, as a function of $n$ and $t$. My primary question is: how fast does $f(n,t)$ grow with $n$ and $t$? For a fixed $t$, let the growth be $f(n)[t]$. How fast does $f(n)[t]$ grow with $n$? Does $f(n,t) = O(\log^{c}{n})$ when $t=O(n^{q})$ for some $c \in \mathbb{N}$ and $\frac{1}{3} > q \in \mathbb{Q}$?

Secondly, how do you find such matrix solutions explicitly?

[Note: Each $a_{ijk}$ is a square matrix.]

What I know: For $t=1$, I am fairly certain that $f(n,1) = n$. For any fixed $t$, I don't think we can do better (although I am not sure). What happens when $t$ is allowed to grow with $n$, although at a sub-cubic rate with respect to $n$, is something I am interested in.
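The constraints themselves are mechanical enough to verify numerically for any candidate family of matrices. Below is a hedged Python sketch of such a checker; the function name and the trivial $n=t=m=1$ instance ($a_1=1$, $a_2=\tfrac12$, so $a_1a_2+a_2a_1=1$) are my own illustrations, not from the question:

```python
# Numeric checker for the pairwise constraints above.
# A[i][r] = (a_{ir1}, a_{ir2}), each an m x m numpy array.
import numpy as np

def satisfies_constraints(A, tol=1e-9):
    """Check sum_r (a_{ir1} a_{jr2} + a_{ir2} a_{jr1}) = 0 for i != j
    and = I for i == j, up to the tolerance `tol`."""
    n = len(A)
    m = A[0][0][0].shape[0]
    I = np.eye(m)
    for i in range(n):
        for j in range(n):
            s = sum(a1 @ b2 + a2 @ b1
                    for (a1, a2), (b1, b2) in zip(A[i], A[j]))
            target = I if i == j else np.zeros((m, m))
            if not np.allclose(s, target, atol=tol):
                return False
    return True

# Trivial instance n = 1, t = 1, m = 1: a1*a2 + a2*a1 = 1 with a1 = 1, a2 = 1/2.
A = [[(np.array([[1.0]]), np.array([[0.5]]))]]
print(satisfies_constraints(A))  # True
```

Such a checker does not answer the growth question, but it makes it easy to test candidate constructions for small $n$, $t$, and $m$.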
https://www.muchlearning.org/?page=62&ccourseid=1437&sectionid=1601
# Step By Step Calculus » 13.5 - Critical Points, Concavity and Extrema

Synopsis

We are often interested in information about a graph along the lines of "what is the range of $f(x)$?" (related to extrema, i.e. minima or maxima) and "how does $f(x)$ behave around this point?" (related to concavity), so a discussion of these two concepts is important.

In analyzing a graph, critical points are very important as they indicate possible local extrema. There are two kinds of critical points: those where $f^\prime(x)$ is not defined, and those where $f^\prime(x)=0$. The latter are known as smooth critical points.

Fermat's theorem states that "if $f$ has a local extremum at $c$ and $f^\prime(c)$ exists, then $f^\prime(c)=0$." One can state the following corollary of Fermat's theorem:

• If $f$ has a local extremum at $c$, then $c$ is a critical point or an endpoint of a closed interval in the domain.

However, not all critical points mark extrema. To find the points of extrema, we can use the following test:

First Derivative Test: If $c$ is a critical point and $f^{\prime}(x)$ exists near $c$ (but not necessarily at $c$ itself), then $f$ has a local minimum at $c$ if $f^{\prime}$ changes from negative to positive at $c$, a local maximum at $c$ if $f^{\prime}$ changes from positive to negative at $c$, and no local extremum at $c$ if $f^{\prime}$ does not change sign.

A complete procedure to find the extrema of a graph is the following:

Step 1: (Differentiate) Find $f^\prime$.

Step 2: (Important Points) Using $f$ and $f^\prime$, find
• Singularities: the set $S$ of points at which the function $f$ is undefined on the given domain $D_{f}$.
• Critical points: the set $C=\left\{ c:f^{\prime }(c)=0\text{ or }f^{\prime }(c)\text{ does not exist}\right\}$. This set could contain intervals.
• End points: the set $E$ comprising the endpoints of closed intervals in $D_{f}$.

Step 3: (Regional Behaviour) Divide $D_f\cap(S\cup C\cup E)^C\equiv D_f-(S\cup C\cup E)$ into regions and find the behaviour of $f$ on each.

Step 4: (Local Extrema) Using the local behaviour of $f$, identify points $c\in C$ and $e\in E$ as:
• Local minima: if $f^{\prime }(x)$ goes from $\ominus$ to $\oplus$ at $c$, or $f(x)\geq f(e)$ for all $x$ in an interval including $e$.
• Local maxima: if $f^{\prime }(x)$ goes from $\oplus$ to $\ominus$ at $c$, or $f(x)\leq f(e)$ for all $x$ in an interval including $e$.
• Any or none of the above: if $f^{\prime }(x)=0$ or $f^{\prime }(x)$ does not exist arbitrarily close to $c$. In this case, try to graph the function around $c$ and go back to the basic definitions.

Step 5: (Global Extrema) Answer the question asked, noting that there might not be a global maximum or minimum. In particular, watch out for singularities.

Concavity is another way to find out which critical points are extrema. A graph is concave up if it forms a 'cup' shape, and concave down if it forms a 'cap'. Critical points in concave-up regions are minima, while critical points in concave-down regions are maxima. Critical points are not necessarily extrema but can also be inflection points, where the graph changes concavity. Concavity is controlled by the change in $f^\prime$: an increasing derivative (positive $f^{(2)}$) indicates concave up, and a decreasing derivative (negative $f^{(2)}$) indicates concave down.

Second Derivative Test: If $c$ is a critical point and $f^{(2)}(x)$ exists at and near $c$ and is continuous at $c$, then $f$ has a local minimum at $c$ if $f^{(2)}(c)>0$, and a local maximum at $c$ if $f^{(2)}(c)<0$.

The second derivative test cannot conclude whether a local extremum exists at a critical point $c$ if $f^{(2)}(x)$ does not exist at $c$ or $f^{(2)}(c)=0$. The points $c$ where $f^{(2)}(x)$ does not exist or is equal to $0$ are the possible inflection points, and we denote this set of points by $I$.

A point $i\in I$ is an inflection point if $f^{(2)}$ changes its sign (equivalently, $f^\prime$ goes from increasing to decreasing or vice versa) as we move from one side of $i$ to the other.
http://www.arxivsorter.org/
Arxivsorter uses the network of co-authorship to estimate a proximity between people. It then ranks a list of publications using a friends-of-friends algorithm. It is not a filter and therefore does not lose any information. J.P. Magué & B. Ménard [1] Title: The close circumstellar environment of Betelgeuse - V. Rotation velocity and molecular envelope properties from ALMA Comments: 18 pages, 19 figures, accepted for publication in A&A Subjects: Solar and Stellar Astrophysics (astro-ph.SR) We observed Betelgeuse using ALMA's extended configuration in band 7 (f~340 GHz, {\lambda}~0.88 mm), resulting in a very high angular resolution of 18 mas. Using a solid body rotation model of the 28SiO(v=2,J=8-7) line emission, we show that the supergiant is rotating with a projected equatorial velocity of v_eq sin i = 5.47 +/- 0.25 km/s at the equivalent continuum angular radius R_star = 29.50 +/- 0.14 mas. This corresponds to an angular rotation velocity of {\omega} sin i = (5.6 +/- 1.3) x 10^(-9) rad/s. The position angle of its north pole is PA = 48.0 +/- 3.5{\deg}. The rotation period of Betelgeuse is estimated to P/sin i = 36 +/- 8 years. The combination of our velocity measurement with previous observations in the ultraviolet shows that the chromosphere is co-rotating with the star up to a radius of ~10 au (45 mas or 1.5x the ALMA continuum radius). The coincidence of the position angle of the polar axis of Betelgeuse with that of the major ALMA continuum hot spot, a molecular plume, and a partial dust shell (from previous observations) suggests that focused mass loss is currently taking place in the polar region of the star. We propose that this hot spot corresponds to the location of a particularly strong "rogue" convection cell, which emits a focused molecular plume that subsequently condenses into dust at a few stellar radii. Rogue convection cells therefore appear to be an important factor shaping the anisotropic mass loss of red supergiants. 
[2] Title: Peering beyond the horizon with standard sirens and redshift drift Authors: Raul Jimenez (1,2), Alvise Raccanelli (1), Licia Verde (1,2), Sabino Matarrese (3,4,5,6) ((1) ICC Barcelona, (2) ICREA, (3) Università di Padova, (4) INFN Padova, (5) INAF Padova, (6) GSSI) Comments: 10 pages, 1 figure, 2 tables Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc) An interesting test on the nature of the Universe is to measure the global spatial curvature of the metric in a model independent way, at a level of $|\Omega_k|<10^{-4}$, or, if possible, at the cosmic variance level of the amplitude of the CMB fluctuations $|\Omega_k|\approx10^{-5}$. A limit of $|\Omega_k|<10^{-4}$ would yield stringent tests on several models of inflation. Further, improving the constraint by an order of magnitude would help in reducing "model confusion" in standard parameter estimation. Moreover, if the curvature is measured to be at the value of the amplitude of the CMB fluctuations, it would offer a powerful test on the inflationary paradigm and would indicate that our Universe must be significantly larger than the current horizon. On the contrary, in the context of standard inflation, measuring a value above CMB fluctuations will lead us to conclude that the Universe is not much larger than the current observed horizon; this can also be interpreted as the presence of large fluctuations outside the horizon. However, it has proven difficult, so far, to find observables that can achieve such level of accuracy, and, most of all, be model-independent. Here we propose a method that can in principle achieve that; this is done by making minimal assumptions and using distance probes that are cosmology-independent: gravitational waves, redshift drift and cosmic chronometers. We discuss what kind of observations are needed in principle to achieve the desired accuracy. 
[3] Title: Declining rotation curves at $z=2$: A natural phenomenon in $Λ$CDM cosmology Comments: 6 pages, 4 figures, submitted to ApJ Letters, www.magneticum.org Subjects: Astrophysics of Galaxies (astro-ph.GA) Selecting disk galaxies from the cosmological, hydrodynamical simulation Magneticum Pathfinder, we show that almost half of our poster-child disk galaxies at $z=2$ show significantly declining rotation curves and low dark matter fractions, very similar to recently reported observations. These galaxies do not show any anomalous behavior, reside in standard dark matter halos and typically grow significantly in mass until $z=0$, where they span all morphological classes, including disk galaxies matching present-day rotation curves and observed dark matter fractions. Our findings demonstrate that declining rotation curves and low dark matter fractions in rotation-dominated galaxies at $z=2$ appear naturally within the $\Lambda$CDM paradigm and reflect the complex baryonic physics, which plays a role at the peak epoch of star formation. In addition, we find that dispersion-dominated galaxies at $z=2$, which host a significant gas disk, exhibit similarly shaped rotation curves to those of the disk galaxy population, rendering it difficult to differentiate between these two populations with currently available observational techniques. [4] Title: Revisiting the bulge-halo conspiracy II: Towards explaining its puzzling dependence on redshift Authors: Francesco Shankar (1), Alessandro Sonnenfeld (2), Philip Grylls (1), Lorenzo Zanisi (1), Carlo Nipoti (3), Kyu-Hyun Chae (4), Mariangela Bernardi (5), Carlo Enrico Petrillo (6), Marc Huertas-Company (7), Gary A. Mamon (8), Stewart Buchan (1) ((1) University of Southampton, (2) Kavli IPMU, University of Tokyo, (3) Bologna University, (4) Sejong University, (5) University of Pennsylvania, (6) University of Groningen, (7) LERMA, Observatoire de Paris, (8) Institut d'Astrophysique de Paris) Comments: 14 pages, 8 figures. 
MNRAS, accepted. Main result of the paper in Figure 2 Subjects: Astrophysics of Galaxies (astro-ph.GA); Cosmology and Nongalactic Astrophysics (astro-ph.CO) We carry out a systematic investigation of the total mass density profile of massive (Mstar~3e11 Msun) early-type galaxies and its dependence on redshift, specifically in the range 0<z<1. We start from a large sample of SDSS early-type galaxies with stellar masses and effective radii measured assuming two different profiles, de Vaucouleurs and S\'{e}rsic. We assign dark matter haloes to galaxies via abundance matching relations with standard LCDM profiles and concentrations. We then compute the total, mass-weighted density slope at the effective radius gamma', and study its redshift dependence at fixed stellar mass. We find that a necessary condition to induce an increasingly flatter gamma' at higher redshifts, as suggested by current strong lensing data, is to allow the intrinsic stellar profile of massive galaxies to be S\'{e}rsic and the input S\'{e}rsic index n to vary with redshift approximately as n(z)~(1+z)^(-1). This conclusion holds irrespective of the input Mstar-Mhalo relation, the assumed stellar initial mass function, or even the chosen level of adiabatic contraction in the model. Secondary contributors to the observed redshift evolution of gamma' may come from an increased contribution at higher redshifts of adiabatic contraction and/or bottom-light stellar initial mass functions. The strong lensing selection effects we have simulated seem not to contribute to this effect. A steadily increasing S\'{e}rsic index with cosmic time is supported by independent observations, though it is not yet clear whether cosmological hierarchical models (e.g., mergers) are capable of reproducing such a fast and sharp evolution. 
[5] Title: Dynamical equivalence, the origin of the Galactic field stellar and binary population, and the initial radius--mass relation of embedded clusters Comments: 6 pages, 2 figures; accepted for publication in MNRAS Subjects: Astrophysics of Galaxies (astro-ph.GA) In order to allow a better understanding of the origin of Galactic field populations, dynamical equivalence of stellar-dynamical systems has been postulated by Kroupa and Belloni et al. to allow mapping of solutions of the initial conditions of embedded clusters such that they yield, after a period of dynamical processing, the Galactic field population. Dynamically equivalent systems are defined to initially and finally have the same distribution functions of periods, mass ratios and eccentricities of binary stars. Here we search for dynamically equivalent clusters using the {\sc mocca} code. The simulations confirm that dynamically equivalent solutions indeed exist. The result is that the solution space is next to identical to the radius--mass relation of Marks \& Kroupa, $\left( r_h/{\rm pc} \right)= 0.1^{+0.07}_{-0.04}\, \left( M_{\rm ecl}/{\rm M}_\odot \right)^{0.13\pm0.04}$. This relation is in good agreement with the observed density of molecular cloud clumps. According to the solutions, the time-scale to reach dynamical equivalence is about 0.5~Myr which is, interestingly, consistent with the lifetime of ultra-compact HII regions and the time-scale needed for gas expulsion to be active in observed very young clusters as based on their dynamical modelling. [6] Title: CO excitation in the Seyfert galaxy NGC 34: stars, shock or AGN driven? Comments: Accepted for publication in MNRAS. 
10 pages, 6 figures Subjects: Astrophysics of Galaxies (astro-ph.GA) We present a detailed analysis of the X-ray and molecular gas emission in the nearby galaxy NGC 34, to constrain the properties of the molecular gas, and assess whether, and to what extent, the radiation produced by the accretion onto the central black hole affects the CO line emission. We analyse the CO Spectral Line Energy Distribution (SLED) as resulting mainly from Herschel and ALMA data, along with X-ray data from NuSTAR and XMM-Newton. The X-ray data analysis suggests the presence of a heavily obscured AGN with an intrinsic luminosity of L$_{\rm{1-100\,keV}} \simeq 4.0\times10^{42}$ erg s$^{-1}$. ALMA high-resolution data ($\theta \simeq 0.2''$) allow us to scan the nuclear region down to a spatial scale of $\approx 100$ pc for the CO(6-5) transition. We model the observed SLED using Photo-Dissociation Region (PDR), X-ray-Dominated Region (XDR), and shock models, finding that a combination of a PDR and an XDR provides the best fit to the observations. The PDR component, characterized by gas density ${\rm log}(n/{\rm cm^{-3}})=2.5$ and temperature $T=30$ K, reproduces the low-J CO line luminosities. The XDR is instead characterized by a denser and warmer gas (${\rm log}(n/{\rm cm^{-3}})=4.5$, $T=65$ K), and is necessary to fit the high-J transitions. The addition of a third component to account for the presence of shocks has also been tested but does not improve the fit of the CO SLED. We conclude that the AGN contribution is significant in heating the molecular gas in NGC 34. [7] Title: A Universal Transition in Atmospheric Diffusion for Hot Subdwarfs Near 18,000 K Comments: Accepted for publication in The Astrophysical Journal. 9 pages, 1 table, 10 figures. 
Figure 2 is shown at low resolution due to file size limits Subjects: Solar and Stellar Astrophysics (astro-ph.SR) In the color-magnitude diagrams (CMDs) of globular clusters, when the locus of stars on the horizontal branch (HB) extends to hot temperatures, discontinuities are observed at colors corresponding to ~12,000 K and ~18,000 K. The former is the "Grundahl jump" that is associated with the onset of radiative levitation in the atmospheres of hot subdwarfs. The latter is the "Momany jump" that has remained unexplained. Using the Space Telescope Imaging Spectrograph on the Hubble Space Telescope, we have obtained ultraviolet and blue spectroscopy of six hot subdwarfs straddling the Momany jump in the massive globular cluster omega Cen. By comparison to model atmospheres and synthetic spectra, we find that the feature is due primarily to a decrease in atmospheric Fe for stars hotter than the feature, amplified by the temperature dependence of the Fe absorption at these effective temperatures. [8] Title: Photonuclear Reactions in Lightning Discovered from Detection of Positrons and Neutrons Authors: Teruaki Enoto (1), Yuuki Wada (2 and 3), Yoshihiro Furuta (2), Kazuhiro Nakazawa (2), Takayuki Yuasa (4), Kazufumi Okuda (2), Kazuo Makishima (3), Mitsuteru Sato (5), Yousuke Sato (6), Toshio Nakano (3), Daigo Umemoto (3), Harufumi Tsuchiya (7) ((1) Kyoto University, (2) The University of Tokyo, (3) RIKEN, (4) Singapore, (5) Hokkaido University, (6) Nagoya University, (7) JAEA) Comments: This manuscript was submitted to Nature Letter on July 30, 2017; this is the original version, which has not undergone the peer review process. 
See the accepted version at the Nature website, published in the issue of November 23, 2017 with the revised title "Photonuclear reactions triggered by lightning discharge" Journal-ref: Nature Letter, the issue of November 23, 2017 Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Atmospheric and Oceanic Physics (physics.ao-ph) Lightning and thunderclouds are the most dramatic natural particle accelerators on the Earth. Relativistic electrons accelerated by electric fields therein emit bremsstrahlung gamma rays, which have been detected in ground-based observations, by airborne detectors, and as terrestrial gamma-ray flashes (TGFs) from space. The energy of the gamma rays is sufficiently high to potentially invoke atmospheric photonuclear reactions 14N(gamma, n)13N, which would produce neutrons and eventually positrons via beta-plus decay of the generated unstable radioactive isotopes, especially 13N. However, no clear observational evidence for the reaction has been reported to date. Here we report the first detection of neutron and positron signals from lightning with a ground-based observation. During a thunderstorm on 6 February 2017 in Japan, a TGF-like intense flash (within 1 ms) was detected at our monitoring sites 0.5-1.7 km away from the lightning. The subsequent initial burst quickly subsided with an exponential decay constant of 40-60 ms, followed by a prolonged line emission at about 0.511 megaelectronvolts (MeV), lasting for a minute. The observed decay timescale and spectral cutoff at about 10 MeV of the initial emission are well explained by de-excitation gamma rays from the nuclei excited by neutron capture. The centre energy of the prolonged line emission corresponds to electron-positron annihilation, and hence is the conclusive indication of positrons produced after the lightning. Our detection of neutrons and positrons is unequivocal evidence that natural lightning triggers photonuclear reactions. 
No other natural event on the Earth is known to trigger photonuclear reactions. This discovery places lightning as only the second known natural channel on the Earth, after the atmospheric cosmic-ray interaction, in which isotopes such as 13C, 14C, and 15N are produced. [9] Title: Tracing the Assembly History of NGC 1395 through its Globular Cluster System Subjects: Astrophysics of Galaxies (astro-ph.GA) We used deep Gemini-South/GMOS g'r'i'z' images to study the globular cluster (GC) system of the massive elliptical galaxy NGC 1395, located in the Eridanus supergroup. The photometric analysis of the GC candidates reveals a clear bimodal colour distribution, indicating the presence of "blue" and "red" GC subpopulations. While a negative radial colour gradient is detected in the projected spatial distribution of the red GCs, the blue GCs display a shallow colour gradient. The blue GCs also display a remarkably shallow and extended surface density profile, suggesting a significant accretion of low-mass satellites in the outer halo of the galaxy. In addition, the slope of the projected spatial distribution of the blue GCs in the outer regions of the galaxy is similar to that of the X-ray halo emission. Integrating the profile of the projected spatial distribution of the GCs out to 165 kpc, we estimated a total GC population of 6000$\pm$1100 and a specific frequency of $S_N$=7.4$\pm$1.4. Regarding NGC 1395 itself, the analysis of the deep Gemini/GMOS images shows a low surface brightness umbrella-like structure indicating at least one recent merger event. Through relations recently published in the literature, we obtained global parameters, such as $M_\mathrm{stellar}=9.32\times10^{11}$ M$_\odot$ and $M_h=6.46\times10^{13}$ M$_\odot$. Using public spectroscopic data, we derive stellar population parameters of the central region of the galaxy by the full spectral fitting technique. 
We find that this region seems to be dominated by an old stellar population, in contrast to findings of young stellar populations from the literature. [10] Title: Exo-lightning radio emission: the case study of HAT-P-11b Comments: Accepted to the Conference Proceedings of the 8th International Workshop on Planetary, Solar and Heliospheric Radio Emissions (PRE 8), held in Seggauberg near Leibnitz/Graz, Austria, October 25-27, 2016. 12 pages, 2 figures Subjects: Earth and Planetary Astrophysics (astro-ph.EP); Solar and Stellar Astrophysics (astro-ph.SR); Atmospheric and Oceanic Physics (physics.ao-ph); Geophysics (physics.geo-ph) Lightning-induced radio emission has been observed on solar system planets. Lecavelier des Etangs et al. [2013] carried out radio transit observations of the exoplanet HAT-P-11b, and suggested a tentative detection of a radio signal. Here, we explore the possibility of the radio emission having been produced by lightning activity on the exoplanet, following and expanding the work of Hodos\'an et al. [2016a]. After a summary of our previous work [Hodos\'an et al. 2016a], we extend it with a parameter study. The lightning activity of the hypothetical storm is largely dependent on the radio spectral roll-off, $n$, and the flash duration, $\tau_\mathrm{fl}$. The best-case scenario would require a flash density of the same order of magnitude as can be found during volcanic eruptions on Earth. On average, flash densities $3.8 \times 10^6$ times larger than those of the most lightning-active Earth storms are needed to produce the observed signal from HAT-P-11b. Combined with the results of Hodos\'an et al. [2016a] regarding the chemical effects of planet-wide thunderstorms, we conclude that future radio and infrared observations may lead to lightning detection on planets outside the solar system. 
[11] Title: The SUrvey for Pulsars and Extragalactic Radio Bursts II: New FRB discoveries and their follow-up Comments: 21 pages, 8 figures and accepted for publication in MNRAS Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) We report the discovery of four Fast Radio Bursts (FRBs) in the ongoing SUrvey for Pulsars and Extragalactic Radio Bursts (SUPERB) at the Parkes Radio Telescope: FRBs 150610, 151206, 151230 and 160102. Our real-time discoveries have enabled us to conduct extensive, rapid multi-messenger follow-up at 12 major facilities sensitive to radio, optical, X-ray, gamma-ray photons and neutrinos on time scales ranging from an hour to a few months post-burst. No counterparts to the FRBs were found and we provide upper limits on afterglow luminosities. None of the FRBs were seen to repeat. Formal fits to all FRBs show hints of scattering while their intrinsic widths are unresolved in time. FRB 151206 is at low Galactic latitude, FRB 151230 shows a sharp spectral cutoff, and FRB 160102 has the highest dispersion measure (DM = $2596.1\pm0.3$ pc cm$^{-3}$) detected to date. Three of the FRBs have high dispersion measures (DM >$1500$ pc cm$^{-3}$), favouring a scenario where the DM is dominated by contributions from the Intergalactic Medium. The slope of the Parkes FRB source counts distribution with fluences $>2$ Jy ms is $\alpha=-2.2^{+0.6}_{-1.2}$ and still consistent with a Euclidean distribution ($\alpha=-3/2$). We also find that the all-sky rate is $1.7^{+1.5}_{-0.9}\times10^3$ FRBs/($4\pi$ sr)/day above $\sim2$ Jy ms and there is currently no strong evidence for a latitude-dependent FRB sky-rate. 
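To put the all-sky rate quoted in entry [11] in perspective, it can be scaled to a single telescope's field of view. A rough sketch; the ~0.6 deg^2 Parkes multibeam field of view is an assumed illustrative value, not a number from the abstract:

```python
# Scale the all-sky FRB rate from entry [11] to one telescope's field of view.
ALL_SKY_DEG2 = 41253.0    # square degrees over the full sky (4 pi sr)
rate_all_sky = 1.7e3      # FRBs per sky per day above ~2 Jy ms (entry [11])
fov_deg2 = 0.6            # Parkes multibeam field of view in deg^2 (assumed)

rate_in_fov = rate_all_sky * fov_deg2 / ALL_SKY_DEG2
print(f"~{1 / rate_in_fov:.0f} days on sky per detection")   # ~40 days
```

which is consistent with FRBs being rare in any single pointing despite the large all-sky rate.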
[12] Title: Discovery of 21 New Changing-look AGNs in Northern Sky Subjects: Astrophysics of Galaxies (astro-ph.GA); Cosmology and Nongalactic Astrophysics (astro-ph.CO) The rare case of changing-look (CL) AGNs, with the appearance or disappearance of broad Balmer emission lines within a few years, challenges our understanding of the AGN unified model. We present a sample of 21 new CL AGNs at $0.08<z<0.58$. The new sample doubles the number of such objects known to date. These new CL AGNs were discovered in several ways: from repeat spectra in the SDSS, from repeat spectra in the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) and SDSS, and from photometric variability followed by new spectroscopic observations. The estimated upper limits on the transition timescales of the CL AGNs in this sample span from 0.9 to 13 years in the rest frame. The continuum flux in the optical and mid-infrared becomes brighter when the CL AGNs turn on, or vice versa. Variations of more than 0.2 mag in the mid-infrared $W1$ band, from the Wide-field Infrared Survey Explorer (WISE), were detected in 15 CL AGNs during the transition. The optical and mid-infrared variability is not consistent with the scenario of variable obscuration in 10 CL AGNs at higher than $3\sigma$ confidence level. We confirm a bluer-when-brighter trend in the optical. However, the mid-infrared colors $W1-W2$ become redder when the objects become brighter in the $W1$ band, possibly due to a stronger hot dust contribution in the $W2$ band when the AGN activity becomes stronger. The physical mechanism of the type transition is important for understanding the evolution of AGNs. 
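The rest-frame timescales in entry [12] fold in cosmological time dilation: an observed interval shrinks by a factor (1+z). A minimal sketch, where the 10-year observed baseline is an illustrative assumption rather than a number from the abstract:

```python
# Rest-frame transition timescale from an observed baseline: t_rest = t_obs / (1 + z)
t_obs = 10.0                  # years between repeat spectra (assumed)
for z in (0.08, 0.58):        # redshift range of the CL AGN sample in entry [12]
    t_rest = t_obs / (1 + z)
    print(f"z = {z}: t_rest = {t_rest:.1f} yr")
```

At the high-redshift end of the sample the same observed baseline probes a noticeably shorter rest-frame interval.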
[13] Title: Resolving the Internal Structure of Circum-Galactic Medium using Gravitationally Lensed Quasars Comments: 13 pages, 8 figures, 3 tables, accepted for publication in ApJ Subjects: Astrophysics of Galaxies (astro-ph.GA) We study the internal structure of the Circum-Galactic Medium (CGM), using 29 spectra of 13 gravitationally lensed quasars with image separation angles of a few arcseconds, which correspond to 100 pc to 10 kpc in physical distances. After separating metal absorption lines detected in the spectra into high-ions with ionization parameter (IP) $>$ 40 eV and low-ions with IP $<$ 20 eV, we find that i) the fraction of absorption lines that are detected in only one of the lensed images is larger for low-ions ($\sim$16%) than high-ions ($\sim$2%), ii) the fractional difference of equivalent widths ($EW$s) between the lensed images is almost the same (${\rm d}EW$ $\sim$ 0.2) for both groups, although the low-ions have a slightly larger variation, and iii) weak low-ion absorbers tend to have larger ${\rm d}EW$ compared to weak high-ion absorbers. We construct simple models to reproduce these observed properties and investigate the distribution of physical quantities such as the size and location of absorbers, using some free parameters. Our best models for absorbers with high-ions and low-ions suggest that i) the overall size of the CGM is at least $\sim$ 500 kpc, ii) the size of a spherical clumpy cloud is $\sim$ 1 kpc or smaller, and iii) only high-ion absorbers can have a diffusely distributed homogeneous component throughout the CGM. We infer that a high-ionization absorber is distributed almost homogeneously with small-scale internal fluctuations, while a low-ionization absorber consists of a large number of small-scale clouds in the diffusely distributed, more highly ionized region. This is the first study to investigate the internal small-scale structure of the CGM based on a large number of gravitationally lensed quasar spectra. 
[14] Title: The MASIV Survey IV: relationship between intra-day scintillation and intrinsic variability of radio AGNs Comments: 18 pages, 13 figures, 5 tables, resubmitted to MNRAS after minor revision Subjects: Astrophysics of Galaxies (astro-ph.GA); High Energy Astrophysical Phenomena (astro-ph.HE) We investigate the relationship between 5 GHz interstellar scintillation (ISS) and 15 GHz intrinsic variability of compact, radio-selected AGNs drawn from the Microarcsecond Scintillation-Induced Variability (MASIV) Survey and the Owens Valley Radio Observatory (OVRO) blazar monitoring program. We discover that the strongest scintillators at 5 GHz (modulation index, $m_5 \geq 0.02$) all exhibit strong 15 GHz intrinsic variability ($m_{15} \geq 0.1$). This relationship can be attributed mainly to the mutual dependence of intrinsic variability and ISS amplitudes on radio core compactness at $\sim 100\, \mu$as scales, and to a lesser extent, on their mutual dependences on source flux density, arcsec-scale core dominance and redshift. However, not all sources displaying strong intrinsic variations show high amplitude scintillation, since ISS is also strongly dependent on Galactic line-of-sight scattering properties. This observed relationship between intrinsic variability and ISS highlights the importance of optimizing the observing frequency, cadence, timespan and sky coverage of future radio variability surveys, such that these two effects can be better distinguished to study the underlying physics. For the full MASIV sample, we find that Fermi-detected gamma-ray loud sources exhibit significantly higher 5 GHz ISS amplitudes than gamma-ray quiet sources. This relationship is weaker than the known correlation between gamma-ray loudness and the 15 GHz variability amplitudes, most likely due to jet opacity effects. 
[15] Title: Iwahashi Zenbei's Sunspot Drawings in 1793 in Japan Comments: 2017/11/16 accepted for publication in Solar Physics Subjects: Solar and Stellar Astrophysics (astro-ph.SR); History and Philosophy of Physics (physics.hist-ph) Three Japanese sunspot drawings associated with Iwahashi Zenbei (1756-1811) are shown here from contemporary manuscripts and woodprint documents with the relevant texts. We determined the observational date of one of the drawings to be 26 August 1793, and the overall observations lasted for over a year. Moreover, we identified the observational site for the dated drawing as Fushimi in Japan. We then compared his observations with the group sunspot number and raw group count from the Sunspot Index and Long-term Solar Observations (SILSO) to establish their data context, and concluded that these drawings fill gaps left by the fragmentary sunspot observations around 1793. These drawings are important as a clue for evaluating the astronomical knowledge of contemporary Japan in the late 18th century and are valuable as non-European observations, considering that most sunspot observations up to the mid-19th century are from Europe. [16] Title: Examination of artifact in vector magnetic field SDO/HMI measurements Subjects: Solar and Stellar Astrophysics (astro-ph.SR) In this paper, we come to the conclusion that there is a significant systematic error in the SDO/HMI vector magnetic data, which reveals itself as a significant deviation of the knot magnetic field lines from the radial direction. The value of this deviation demonstrates a clear dependence on the distance to the disk center. This paper suggests a method for correcting the vector magnetograms that eliminates the detected systematic error. [17] Title: Planet-driven spiral arms in protoplanetary disks: I. 
Formation mechanism Authors: Jaehan Bae (1), Zhaohuan Zhu (2) ((1) Carnegie DTM, (2) UNLV) Comments: 16 pages, 14 figures, submitted to the ApJ Subjects: Earth and Planetary Astrophysics (astro-ph.EP) Protoplanetary disk simulations show that a single planet can excite more than one spiral arm, possibly explaining recent observations of multiple spiral arms in some systems. In this paper, we explain the mechanism by which a planet excites multiple spiral arms in a protoplanetary disk. Contrary to previous speculations, the formation of both primary and additional arms can be understood as a linear process when the planet mass is sufficiently small. A planet resonantly interacts with epicyclic oscillations in the disk, launching spiral wave modes around the Lindblad resonances. When a set of wave modes is in phase, they can constructively interfere with each other and create a spiral arm. More than one spiral arm can form because such constructive interference can occur for different sets of wave modes, with the exact number and launching position of spiral arms dependent on the planet mass as well as the disk temperature profile. Non-linear effects become increasingly important as the planet mass increases, resulting in spiral arms with stronger shocks and thus larger pitch angles. This is found in common for both primary and additional arms. When a planet has a sufficiently large mass ($\gtrsim$ 3 thermal masses for $(h/r)_p=0.1$), only two spiral arms form interior to its orbit. The wave modes that would form a tertiary arm for smaller mass planets merge with the primary arm. Improvements in our understanding of the formation of spiral arms can provide crucial insights into the origin of observed spiral arms in protoplanetary disks. 
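Entry [17]'s two-arm threshold can be cross-checked against its companion paper, entry [19]. The thermal-mass definition M_th = (h/r)_p^3 M_* used below is the standard disk quantity and an assumption on our part; the abstracts do not spell it out:

```python
# Cross-check: ">~ 3 thermal masses at (h/r)_p = 0.1" (entry [17]) vs
# "two spiral arms when M_p/M_* >~ 3e-3" (entry [19]).
h_over_r = 0.1                  # disk aspect ratio at the planet's location
M_th = h_over_r ** 3            # thermal mass in units of M_star (assumed definition)
two_arm_threshold = 3 * M_th    # = 3e-3 M_star, the same number quoted in entry [19]
print(f"{two_arm_threshold:.0e}")
```

The two abstracts quote the same threshold in different units, so the numbers are mutually consistent.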
[18] Title: Discovery of molecular and atomic clouds associated with the gamma-ray supernova remnant Kesteven 79 Comments: 12 pages, 6 figures, 2 tables, submitted to The Astrophysical Journal (ApJ) Subjects: Astrophysics of Galaxies (astro-ph.GA); High Energy Astrophysical Phenomena (astro-ph.HE) We carried out $^{12}$CO($J$ = 1-0) observations of the Galactic gamma-ray supernova remnant (SNR) Kesteven 79 using the Nobeyama Radio Observatory 45 m radio telescope, which has an angular resolution of $\sim20$ arcsec. We identified molecular and atomic gas interacting with Kesteven 79 whose radial velocity is $\sim80$ km s$^{-1}$. The interacting molecular and atomic gases show good spatial correspondence with the X-ray and radio shells, which have an expanding velocity structure with $\Delta V\sim4$ km s$^{-1}$. The molecular gas associated with the radio and X-ray peaks also exhibits a high-intensity ratio of CO 3-2/1-0 $>$ 0.8, suggesting a kinematic temperature of $\sim100$ K, owing to heating by the supernova shock. We determined the kinematic distance to the SNR to be $\sim5.5$ kpc and the radius of the SNR to be $\sim8$ pc. The average interstellar proton density inside of the SNR is $\sim360$ cm$^{-3}$, of which atomic protons comprise only $\sim10$ $\%$. Assuming a hadronic origin for the gamma-ray emission, the total cosmic-ray proton energy above 1 GeV is estimated to be $\sim5 \times 10^{48}$ erg. [19] Title: Planet-driven spiral arms in protoplanetary disks: II. Implications Authors: Jaehan Bae (1), Zhaohuan Zhu (2) ((1) Carnegie DTM, (2) UNLV) Comments: 14 pages, 10 figures, Figure 2 size reduced to meet the requirement, submitted to the ApJ Subjects: Earth and Planetary Astrophysics (astro-ph.EP) In Paper I (Bae & Zhu 2017), we explained how a planet excites multiple spiral arms in a protoplanetary disk. 
To examine whether various characteristics of observed spiral arms can be used to constrain the masses of unseen planets and their positions within their disks, we carry out two-dimensional simulations varying planet mass and disk gas temperature. A larger number of spiral arms form with a smaller planet mass and a lower disk temperature. For a range of disk temperatures characterized by the disk aspect ratio $0.04 \leq (h/r)_p \leq 0.15$, three or fewer spiral arms are excited interior to a planet's orbit when $M_p/M_* \gtrsim 3\times10^{-4}$ and two spiral arms when $M_p/M_* \gtrsim 3\times10^{-3}$. Exterior to a planet's orbit, multiple spiral arms can form only in cold disks with $(h/r)_p \lesssim 0.06$. Constraining the planet mass with the pitch angle of spiral arms requires accurate disk temperature measurements that might be challenging even with ALMA. However, the property that the pitch angle of planet-driven spiral arms decreases away from the planet can be a powerful diagnostic to determine whether the planet is located interior or exterior to the observed spirals. The arm-to-arm separations increase as a function of planet mass, consistent with previous studies; however, we find that the exact slope depends on disk temperature as well as the radial location where the arm-to-arm separations are measured. We apply these diagnostics to the spiral arms seen in MWC 758 and Elias 2-27. Finally, we discuss the possibility that Jupiter's core creates multiple pressure bumps in the solar nebula through spiral shocks, and show how it can help explain meteoritic properties. [20] Title: Causal propagation of signal in strangeon matter Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) The equation of state of strangeon matter is very stiff, owing to the non-relativistic nature of, and the repulsive interaction between, the particles, and pulsar masses as high as $\sim 3M_\odot$ would be expected. 
However, the adiabatic sound speed, $c_s=\sqrt{\partial P/\partial \rho}$, is usually superluminal for strangeon matter, so the dynamic response of a strangeon star (e.g., in a binary merger) would not be tractable in calculations. We examine signal propagation in strangeon matter and calculate the actual propagation speed, $c_{\rm signal}$. We find that the causality condition is satisfied, i.e., $c_{\rm signal}<c$, and we present the signal speed as a function of stellar radius. [21] Title: Peculiar Motions of Galaxy Clusters in the Regions of Superclusters of galaxies Comments: 16 pages, 6 figures, 3 tables, published in the Astrophysical Bulletin, 2017 Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA) We present results of a study of the peculiar motions of 57 clusters and groups of galaxies in the regions of the Corona Borealis (CrB), Bootes (Boo), Z5029/A1424, A1190, A1750/A1809 superclusters of galaxies, and of 20 galaxy clusters located beyond massive structures ($0.05<z<0.10$). Using the SDSS (Data Release 8) data, a sample of early-type galaxies was compiled in the systems under study, their fundamental planes were built, and relative distances and peculiar velocities were determined. Within the galaxy superclusters, significant peculiar motions along the line of sight are observed, with rms deviations of $652\pm50$~km s$^{-1}$ in CrB and $757\pm70$~km s$^{-1}$ in Boo. For the most massive A2065 cluster in the CrB supercluster, no peculiar velocity was found. Peculiar motions of other galaxy clusters can be caused by their gravitational interaction both with A2065 and with the A2142 supercluster. It has been found that there are two superclusters projected onto each other in the region of the Bootes supercluster, with a radial velocity difference of about 4000~km s$^{-1}$. 
In the Z5029/A1424 supercluster, near the rich Z5029 cluster, the most considerable peculiar motions, with an rms deviation of $1366\pm170$~km s$^{-1}$, are observed. The rms deviation of the peculiar velocities of the 20 clusters that do not belong to large-scale structures is equal to $0\pm20$~km s$^{-1}$. The whole sample of clusters under study has a mean peculiar velocity of $83\pm130$~km s$^{-1}$ relative to the cosmic microwave background. [22] Title: 12C/13C isotopic ratios in red-giant stars of the open cluster NGC 6791 Comments: Accepted for publication in MNRAS, 9 pages, 4 figures, 2 tables Subjects: Solar and Stellar Astrophysics (astro-ph.SR) Carbon isotope ratios, along with carbon and nitrogen abundances, are derived in a sample of 11 red-giant members of one of the most metal-rich clusters in the Milky Way, NGC 6791. The selected red giants have a mean metallicity and standard deviation of [Fe/H]=+0.39+-0.06 (Cunha et al. 2015). We used high-resolution H-band spectra obtained by the SDSS-IV Apache Point Observatory Galactic Evolution Experiment (APOGEE). The advantage of using high-resolution spectra in the H-band is that lines of CO are well represented and their line profiles are sensitive to the variation of 12C/13C. Values of the 12C/13C ratio were obtained from a spectrum synthesis analysis. The derived 12C/13C ratios varied between 6.3 and 10.6 in NGC 6791, in agreement with the final isotopic ratios from thermohaline-induced mixing models. The ratios derived here are combined with those obtained for more metal-poor red giants from the literature to examine the correlation between 12C/13C, mass, metallicity and evolutionary status. [23] Title: The formation rate of short gamma-ray bursts and gravitational waves Authors: G. Q. Zhang, F. Y. 
Wang (NJU) Comments: 29 pages, 8 figures, 2 tables, accepted for publication in ApJ Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Cosmology and Nongalactic Astrophysics (astro-ph.CO) In this paper, we study the luminosity function and formation rate of short gamma-ray bursts (sGRBs). First, we derive the $E_p-L_p$ correlation using 16 sGRBs with redshift measurements and determine the pseudo redshifts of 284 Fermi sGRBs. Then, we use the Lynden-Bell c$^-$ method to study the luminosity function and formation rate of sGRBs without any assumptions. A strong luminosity evolution $L(z)\propto (1+z)^{4.47}$ is found. After removing this evolution, the luminosity function is $\Psi (L) \propto L_0 ^ {- 0.29 \pm 0.01}$ for dim sGRBs and $\Psi (L) \propto L_0 ^ {- 1.07 \pm 0.01}$ for bright sGRBs, with a break at $8.26 \times 10^{50}$ erg s$^{-1}$. We also find that the formation rate decreases rapidly at $z<1.0$, which differs from previous works. The local formation rate of sGRBs is 7.53 events Gpc$^{-3}$ yr$^{-1}$. Considering the beaming effect, the local formation rate of sGRBs including off-axis sGRBs is $203.31^{+1152.09}_{-135.54}$ events Gpc$^{-3}$ yr$^{-1}$. We also estimate that the event rate of sGRBs detectable by advanced LIGO and Virgo is $0.85^{+4.82}_{-0.56}$ events yr$^{-1}$ for NS-NS binaries. [24] Title: A universal relation for the propeller mechanisms in magnetic rotating stars at different scales Authors: Sergio Campana (1), Luigi Stella (2), Sandro Mereghetti (3), Domitilla de Martino (4) ((1) INAF-Brera, (2) INAF-Monteporzio, (3) INAF-IASF Milano, (4) INAF-Napoli) Comments: 11 pages, 3 figures. Accepted for publication in A&A Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) Accretion of matter onto a magnetic, rotating object can be strongly affected by the interaction with its magnetic field. This occurs in a variety of astrophysical settings involving young stellar objects, white dwarfs, and neutron stars.
As matter is endowed with angular momentum, its inflow toward the star is often mediated by an accretion disc. The pressure of matter and that originating from the stellar magnetic field balance at the magnetospheric radius: at smaller distances the motion of matter is dominated by the magnetic field, and funnelling towards the magnetic poles ensues. However, if the star, and thus its magnetosphere, is fast spinning, most of the inflowing matter will be halted at the magnetospheric radius by centrifugal forces, resulting in a characteristic reduction of the accretion luminosity. The onset of this mechanism, called the propeller, has been widely adopted to interpret a distinctive knee in the decaying phase of the light curve of several transiently accreting X-ray pulsar systems. By comparing the observed luminosity at the knee for different classes of objects with the value predicted by accretion theory on the basis of the independently measured magnetic field, spin-period, mass, and radius of the star, we disclose here a general relation for the onset of the propeller which spans about eight orders of magnitude in spin period and ten in magnetic moment. The parameter-dependence and normalisation constant that we determine are in agreement with basic accretion theory. [25] Title: Enceladus's crust as a non-uniform thin shell: I Tidal deformations Authors: Mikael Beuthe Comments: 71 pages, 12 figures, 5 tables Journal-ref: Icarus 302 (2018) 145-174 Subjects: Earth and Planetary Astrophysics (astro-ph.EP); Geophysics (physics.geo-ph); Space Physics (physics.space-ph) The geologic activity at Enceladus's south pole remains unexplained, though tidal deformations are probably the ultimate cause. Recent gravity and libration data indicate that Enceladus's icy crust floats on a global ocean, is rather thin, and has a strongly non-uniform thickness. 
Tidal effects are enhanced by crustal thinning at the south pole, so that realistic models of tidal tectonics and dissipation should take into account the lateral variations of shell structure. I construct here the theory of non-uniform viscoelastic thin shells, allowing for depth-dependent rheology and large lateral variations of shell thickness and rheology. Coupling to tides yields two 2D linear partial differential equations of the 4th order on the sphere which take into account self-gravity, density stratification below the shell, and core viscoelasticity. If the shell is laterally uniform, the solution agrees with analytical formulas for tidal Love numbers; errors on displacements and stresses are less than 5% and 15%, respectively, if the thickness is less than 10% of the radius. If the shell is non-uniform, the tidal thin shell equations are solved as a system of coupled linear equations in a spherical harmonic basis. Compared to finite element models, thin shell predictions are similar for the deformations due to Enceladus's pressurized ocean, but differ for the tides of Ganymede. If Enceladus's shell is conductive with isostatic thickness variations, surface stresses are approximately inversely proportional to the local shell thickness. The radial tide is only moderately enhanced at the south pole. The combination of crustal thinning and convection below the poles can amplify south polar stresses by a factor of 10, but it cannot explain the apparent time lag between the maximum plume brightness and the opening of tiger stripes. In a second paper, I will study tidal dissipation in a non-uniform crust. 
[26] Title: Revised Models of Interstellar Nitrogen Isotopic Fractionation Comments: Accepted for publication in MNRAS, 3 figures Subjects: Astrophysics of Galaxies (astro-ph.GA) Nitrogen-bearing molecules in cold molecular clouds exhibit a range of isotopic fractionation ratios, and these molecules may be the precursors of $^{15}$N enrichments found in comets and meteorites. Chemical model calculations indicate that atom-molecular ion and ion-molecule reactions could account for most of the fractionation patterns observed. However, recent quantum-chemical computations demonstrate that several of the key processes are unlikely to occur in dense clouds. Related model calculations of dense cloud chemistry show that the revised $^{15}$N enrichments fail to match observed values. We have investigated the effects of these reaction rate modifications on the chemical model of Wirstr\"{o}m et al. (2012), for which there are significant physical and chemical differences with respect to other models. We have included $^{15}$N fractionation of CN in neutral-neutral reactions and also updated rate coefficients for key reactions in the nitrogen chemistry. We find that the revised fractionation rates have the effect of suppressing $^{15}$N enrichment in ammonia at all times, while the depletion is even more pronounced, reaching $^{14}$N/$^{15}$N ratios of >2000. Taking the updated nitrogen chemistry into account, no significant enrichment occurs in HCN or HNC, contrary to observational evidence in dark clouds and comets, although the $^{14}$N/$^{15}$N ratio can still be below 100 in CN itself. However, the predicted CN abundances are so low that the updated model falls short of explaining the bulk $^{15}$N enhancements observed in primitive materials. It is clear that alternative fractionating reactions are necessary to reproduce observations, so further laboratory and theoretical studies are urgently needed.
[27] Title: High-Energy Neutrino Astronomy: where do we stand, where do we go? Comments: Talk given at the occasion of the 50th anniversary of the Baksan Laboratory Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) With the identification of a diffuse flux of astrophysical ("cosmic") neutrinos in the TeV-PeV energy range, IceCube has opened a new window to the Universe. However, the corresponding cosmic landscape is still uncharted: so far, the observed flux does not show any clear association with known source classes. In this talk, I sketch the way from Baikal-NT200 to IceCube and summarize IceCube's recent astrophysics results. Finally, I describe the present projects to build even larger detectors: GVD in Lake Baikal, KM3NeT in the Mediterranean Sea and IceCube-Gen2 at the South Pole. These detectors will allow us to study the high-energy neutrino sky in much more detail than the present arrays permit. [28] Title: CMB constraints on running non-Gaussianity Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) We develop a complete set of tools for CMB forecasting, simulation and estimation of primordial running bispectra, arising from a variety of curvaton and single-field (DBI) models of inflation. We validate our pipeline using mock CMB running non-Gaussianity realizations and test it on real data by obtaining experimental constraints on the $f_{\rm NL}$ running spectral index, $n_{\rm NG}$, using WMAP 9-year data. Our final bounds (68\% C.L.) read $-0.3< n_{\rm NG}<1.7$, $-0.3< n_{\rm NG}<1.3$, $-0.9<n_{\rm NG}<1.0$ for the single-field curvaton, two-field curvaton and DBI scenarios, respectively. We present forecasts and discuss potential improvements on these bounds using Planck and future CMB surveys.
[29] Title: Extragalactic diffuse gamma-rays from dark matter annihilation: revised prediction and full modelling uncertainties Comments: 21 pages + appendix, 10 figures Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Astrophysical Phenomena (astro-ph.HE) Recent high-energy data from Fermi-LAT on the diffuse gamma-ray background (DGRB) have been used to set some of the best constraints on annihilating TeV cold dark matter (DM) candidates. In order to assess the robustness of these limits, we revisit and update the calculation of the isotropic extragalactic gamma-ray intensity from DM annihilation. The emission from halos with masses $\geq10^{10}\,M_{\odot}$ provides a robust lower bound on the predicted intensity. The intensity including smaller halos, whose properties are extrapolated from their higher-mass counterparts, is typically 5 times higher, and the boost from subhalos yields an additional factor of ~1.5. We also rank the uncertainties from all ingredients and provide a detailed error budget in table 1. Overall, our fiducial intensity is a factor of 5 lower than the one derived by the Fermi-LAT collaboration for their latest analysis. This indicates that the limits set on extragalactic DM annihilation could be relaxed by the same factor. We also calculate the expected intensity for self-interacting dark matter (SIDM) in massive halos and find the emission reduced by a factor of 3 compared to the collisionless counterpart. The next release of the CLUMPY code will provide all the tools necessary to reproduce and ease future improvements of this prediction. [30] Title: Tomography of cool giant and supergiant star atmospheres. I. Validation of the method Subjects: Solar and Stellar Astrophysics (astro-ph.SR) Cool giant and supergiant star atmospheres are characterized by complex velocity fields, originating from convection and pulsation processes, which are not yet fully understood.
The velocity fields impact the formation of spectral lines, which thus contain information on the dynamics of stellar atmospheres. The tomographic method allows one to recover the distribution of the component of the velocity field projected on the line of sight at different optical depths in the stellar atmosphere. The computation of the contribution function to the line depression aims at correctly identifying the depth of formation of spectral lines, in order to construct numerical masks probing spectral lines forming at different optical depths. The tomographic method is applied to 1D model atmospheres and to a realistic 3D radiative hydrodynamics simulation performed with CO5BOLD in order to compare their spectral line formation depths and velocity fields. In 1D model atmospheres, each spectral line forms in a restricted range of optical depths. On the other hand, in 3D simulations, the line formation depths are spread through the atmosphere, mainly because of temperature and density inhomogeneities. Comparison of cross-correlation function (CCF) profiles obtained from 3D synthetic spectra with velocities from the 3D simulation shows that the tomographic method correctly recovers the distribution of the velocity component projected on the line of sight in the atmosphere. [31] Authors: Pedro Figueira Comments: Lecture presented at the IVth Azores International Advanced School in Space Sciences on "Asteroseismology and Exoplanets: Listening to the Stars and Searching for New Worlds" (arXiv:1709.00645), which took place in Horta, Azores Islands, Portugal, in July 2016 Subjects: Earth and Planetary Astrophysics (astro-ph.EP) This chapter briefly describes the key aspects behind the derivation of precise radial velocities. I start by defining radial velocity precision in the context of astrophysics in general and exoplanet searches in particular.
Next I discuss the different basic elements that constitute a spectrograph, and how these elements and overall technical choices impact the derived radial velocity precision. Then I go on to discuss the different wavelength calibration and radial velocity calculation techniques, and how these are intimately related to the spectrograph's properties. I conclude by presenting some interesting examples of planets detected through radial velocity, and some of the new-generation instruments that will push the precision limit further. [32] Title: Searching for X-ray Pulsations from Neutron Stars Using NICER Comments: 4 pages, to appear in Proceedings of IAU Symposium 337: Pulsar Astrophysics - The Next 50 Years Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) The Neutron Star Interior Composition Explorer (NICER) presents an exciting new capability for exploring the modulation properties of X-ray emitting neutron stars, offering large collecting area, low background, extremely precise absolute event time stamps, superb low-energy response and flexible scheduling. The Pulsation Searches and Multiwavelength Coordination working group has designed a 2.5 Ms observing program to search for emission and characterize the modulation properties of about 30 known or suspected neutron star sources across a number of source categories. A key early goal will be to search for pulsations from millisecond pulsars that might exhibit thermal pulsations from the surface suitable for pulse profile modeling to constrain the neutron star equation of state. In addition, we will search for pulsations from transitional millisecond pulsars, isolated neutron stars, low-mass X-ray binaries (LMXBs), accretion-powered millisecond pulsars, central compact objects and other sources. We present our science plan and initial results from the first months of the NICER mission, including the discovery of pulsations from the millisecond pulsar J1231-1411.
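Pulsation searches of the kind described for NICER typically rank candidate ephemerides with the $Z^2_n$ statistic on photon arrival phases (Buccheri et al. 1983). The sketch below is illustrative only, not NICER's actual pipeline; the pulse shape and sample sizes are assumptions:

```python
import numpy as np

def z2n(phases, n=2):
    """Z^2_n statistic (Buccheri et al. 1983) for photon phases in [0, 1).

    Under the no-pulsation null hypothesis it is chi-squared distributed
    with 2n degrees of freedom; a pulsed signal drives it to large values.
    """
    phases = np.asarray(phases, dtype=float)
    k = np.arange(1, n + 1)[:, None]            # harmonics 1..n
    arg = 2.0 * np.pi * k * phases[None, :]
    c = np.cos(arg).sum(axis=1)                 # sum over photons, per harmonic
    s = np.sin(arg).sum(axis=1)
    return (2.0 / phases.size) * np.sum(c**2 + s**2)

rng = np.random.default_rng(0)
flat = rng.random(2000)                                   # unpulsed: uniform phases
pulsed = (0.5 + 0.02 * rng.standard_normal(2000)) % 1.0   # narrow pulse at phase 0.5

print(z2n(flat), z2n(pulsed))   # order a few (chi^2 with 4 dof) vs very large
```

In a real search the statistic is evaluated over a grid of trial frequencies (and frequency derivatives), and the largest value is compared against the chi-squared null distribution with a trials correction.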
[33] Title: Super-Flaring Active Region 12673 Has One of the Fastest Magnetic Flux Emergence Ever Observed Comments: Accepted to the Research Notes of the AAS Subjects: Solar and Stellar Astrophysics (astro-ph.SR) The flux emergence rate of AR 12673 is greater than any value reported in the literature of which we are aware. [34] Title: Reduced Order Modelling in searches for continuous gravitational waves - I. barycentering time delays Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); General Relativity and Quantum Cosmology (gr-qc) The frequencies and phases of emission from extra-solar sources, as measured by Earth-bound observers, are modulated by the Doppler motion of the observer with respect to the source and by relativistic effects. These modulations depend critically on the sky location of the source. Precise knowledge of the modulations is required if one wants to coherently track the phase of a source over long observation times, for example in pulsar timing or in searches for continuous gravitational wave sources. The modulations can be modelled as a sky-location- and time-dependent time delay that converts arrival times at the observer to the inertial frame of the source. In many cases this inertial frame can be the solar system barycentre (SSB). We study the use of Reduced Order Modelling for speeding up the calculation of the time delay between an observer and the SSB for any sky location and for coherent observations spanning one year. We find that the time delay model can be decomposed into just four basis vectors, which can be used to reconstruct the time delay for any sky location to sub-nanosecond accuracy. When compared to the standard routines for time delay calculation used in gravitational wave searches, the use of the reduced basis can lead to a speed-up factor of 30. We have also studied the components of equivalent time delays for sources in binary systems.
For these, assuming eccentricities less than 0.25, we can reconstruct the delays to within hundreds of nanoseconds, with best-case speed-ups of a factor of 10, or a factor of two when the basis must be interpolated to different orbital periods or time stamps. In long-duration phase-coherent searches for sources with large sky-position uncertainties, or binary parameter uncertainties, these speed-ups could allow enhancements in scope without large additional computational burdens. [35] Title: Rotationally modulated photometric variations in B supergiants? Authors: Alexandre David-Uraz (1 and 2), Gregg Wade (3), Anthony Moffat (4), Stan Owocki (1), Véronique Petit (1), the BRITE team ((1) University of Delaware, (2) Florida Institute of Technology, (3) Royal Military College of Canada, (4) Université de Montréal) Comments: 6 pages, 3 figures, to be published in the proceedings of the 3rd BRITE Science Conference held in Saint-Michel-des-Saints (QC, Canada), 2017 August 7-10 -- Proceedings of the Polish Astronomical Society Subjects: Solar and Stellar Astrophysics (astro-ph.SR) In this contribution, we present BRITE observations of the early-B supergiants $\epsilon$ Ori and $\kappa$ Ori. We perform a preliminary analysis of the data acquired over the first two Orion observing runs. We evaluate whether they are compatible with co-rotating bright spots and discuss the challenges of such an approach. [36] Title: Modelling the atmospheric composition of warm exoplanets Comments: Submitted to Experimental Astronomy, ARIEL Special Issue Subjects: Earth and Planetary Astrophysics (astro-ph.EP) Since the discovery of the first extrasolar planet more than twenty years ago, we have discovered more than three thousand planets orbiting stars other than the Sun.
Current observational instruments (on board the Hubble Space Telescope, Spitzer, and on ground-based facilities) have allowed the scientific community to obtain important information on the physical and chemical properties of these planets. However, for a more in-depth characterisation of these worlds, more powerful telescopes are needed. Thanks to the high sensitivity of their instruments, the next generation of space observatories (e.g. the James Webb Space Telescope, ARIEL) will provide observations of unprecedented quality, allowing us to extract far more information than was previously possible. Such high-quality observations will provide constraints on theoretical models of exoplanet atmospheres and lead to a greater understanding of their physics and chemistry. Important modelling efforts have been carried out during the past few years, showing that numerous parameters and processes (such as element abundances, temperature, mixing, etc.) are likely to affect the atmospheric composition of exoplanets and hence the observable spectra. In this manuscript, we review the different parameters that can influence the molecular composition of exoplanet atmospheres. We also consider future developments that are necessary to improve atmospheric models, driven by the need to interpret the available observations, and show how ARIEL is going to improve our view and characterisation of exoplanet atmospheres. [37] Title: Neutrino Mass Priors for Cosmology from Random Matrices Comments: 16+2 pages, two column, 7 figures, 2 tables Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Phenomenology (hep-ph) Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, $\Sigma m_\nu$, through Bayesian inference.
Because these constraints depend on the choice for the prior probability $\pi(\Sigma m_\nu)$, we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix $M_\nu$, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over $M_\nu$, and by including the known squared mass splittings, we predict a theoretical probability distribution over $\Sigma m_\nu$ that we interpret as a Bayesian prior probability $\pi(\Sigma m_\nu)$. We find that $\pi(\Sigma m_\nu)$ peaks close to the smallest $\Sigma m_\nu$ allowed by the measured mass splittings, roughly $0.06 \, {\rm eV}$ ($0.1 \, {\rm eV}$) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors $\pi(\Sigma m_\nu)$ allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. We present fitting functions for $\pi(\Sigma m_\nu)$, which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters. 
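The eigenvalue repulsion invoked in [37] can be seen in a toy Monte Carlo: eigenvalues of a random Hermitian matrix avoid near-degeneracies far more strongly than independent random draws do. This is only an illustration of the phenomenon; the ensemble, scale, and gap cut below are assumptions, not the paper's actual prior construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n=3):
    # toy stand-in for a 3x3 mass matrix: complex Gaussian entries, Hermitized
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2.0

def min_gap(vals):
    # smallest spacing between neighbouring sorted values
    return np.min(np.diff(np.sort(vals)))

trials = 5000
gap_matrix = np.array([min_gap(np.linalg.eigvalsh(random_hermitian()))
                       for _ in range(trials)])
gap_iid = np.array([min_gap(rng.normal(scale=np.sqrt(2.0), size=3))
                    for _ in range(trials)])

frac_matrix = np.mean(gap_matrix < 0.05)
frac_iid = np.mean(gap_iid < 0.05)
print(frac_matrix, frac_iid)   # near-degenerate eigenvalues are much rarer
```

The suppression of small eigenvalue gaps is what pushes the induced prior on $\Sigma m_\nu$ away from exactly degenerate mass spectra and toward the minimal sum allowed by the measured splittings.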
[38] Title: Simulating the galaxy cluster "El Gordo": gas motion, kinetic Sunyaev-Zel'dovich signal, and X-ray line features Authors: Congyao Zhang (1, 2), Qingjuan Yu (2), Youjun Lu (3, 4) ((1) MPA, (2) KIAA, (3) NAOC, (4) UCAS) Comments: 10 pages, 6 figures, submitted to ApJ Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA) The massive galaxy cluster "El Gordo" (ACT-CL J0102--4915) is a rare merging system with a high collision speed suggested by multi-wavelength observations and theoretical modeling. Zhang et al. (2015) propose two types of mergers, a nearly head-on merger and an off-axis merger with a large impact parameter, to reproduce most of the observational features of the cluster using numerical simulations. The different merger configurations of the two models result in different gas motions in the simulated clusters. In this paper, we predict the kinetic Sunyaev-Zel'dovich (kSZ) effect, the relativistic correction of the thermal Sunyaev-Zel'dovich (tSZ) effect, and the X-ray spectrum of this cluster, based on the two proposed models. We find that (1) the amplitudes of the kSZ effect resulting from the two models are both on the order of $\Delta T/T\sim10^{-5}$, but their morphologies are different, tracing the different line-of-sight velocity distributions of the systems; (2) the relativistic correction of the tSZ effect around $240 {\rm\,GHz}$ can possibly be used to constrain the temperature of the hot electrons heated by the shocks; and (3) the shift between the X-ray spectral lines emitted from different regions of the cluster can be significantly different in the two models. The shift and the line broadening can be up to $\sim 25{\rm\,eV}$ and $50{\rm\,eV}$, respectively. We expect that future observations of the kSZ effect and the X-ray spectral lines (e.g., by ALMA, XARM) will provide a strong constraint on the gas motion and the merger configuration of ACT-CL J0102--4915.
[39] Title: Reconstruction of a direction-dependent primordial power spectrum from Planck CMB data Comments: 32 pages, 22 figures, 6 tables Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc) We consider the possibility that the primordial curvature perturbation is direction-dependent. To first order this is parameterised by a quadrupolar modulation of the power spectrum and results in statistical anisotropy of the cosmic microwave background, which can be quantified using the bipolar spherical harmonic representation. We compute these for the Planck Release 2 SMICA map and use them to infer the quadrupole modulation of the primordial power spectrum which, going beyond previous work, we allow to be scale-dependent. Uncertainties are estimated from Planck FFP9 simulations. Consistent with the Planck collaboration's findings, we find no evidence for a constant quadrupole modulation, nor one scaling with wave number as a power law. However our non-parametric reconstruction suggests several spectral features. When a constant quadrupole modulation is fitted to data limited to the wave number range $0.005 \leq k/\mathrm{Mpc}^{-1} \leq 0.008$, we find that its preferred direction is aligned with the cosmic hemispherical asymmetry. To determine the statistical significance we construct two different test statistics and test them on our reconstructions from data, against reconstructions of realisations of noise only. With a test statistic sensitive only to the amplitude of the modulation, the reconstructions are unusual at $2.5\sigma$ significance in the full wave number range, but at $2.2\sigma$ when limited to the intermediate wave number range $0.008 \leq k/\mathrm{Mpc}^{-1} \leq 0.074$. With the second test statistic, sensitive also to direction, the reconstructions are unusual with $4.6\sigma$ significance, dropping to $2.7 \sigma$ for the intermediate wave number range. 
Our approach is easily generalised to include other data sets such as polarisation, large-scale structure and forthcoming 21-cm line observations which will enable these anomalies to be investigated further.
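The significance tests described in [39], which compare a test statistic computed on the data against the same statistic on noise-only realizations, follow a generic Monte Carlo recipe: an empirical p-value converted to a Gaussian-equivalent sigma. A minimal sketch under assumed inputs (the chi-squared toy statistic below merely stands in for the real amplitude statistic):

```python
import numpy as np
from statistics import NormalDist

def empirical_significance(observed, noise_stats):
    """One-sided empirical p-value of an observed statistic against
    noise-only realizations, and its Gaussian-equivalent sigma."""
    noise_stats = np.asarray(noise_stats, dtype=float)
    # add-one smoothing: a finite set of realizations can never give p = 0
    p = (1 + np.count_nonzero(noise_stats >= observed)) / (1 + noise_stats.size)
    return p, NormalDist().inv_cdf(1.0 - p)

# toy example: an amplitude-like statistic, chi-squared (10 dof) under pure noise
rng = np.random.default_rng(1)
noise = rng.chisquare(10, size=100_000)
p, sigma = empirical_significance(35.0, noise)
print(p, sigma)
```

The quoted number of noise realizations directly limits the smallest reportable p-value, which is why claims like $4.6\sigma$ require large simulation sets or an extrapolated tail model.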
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3521505
# Self-Fulfilling Debt Crises, Revisited

53 Pages. Posted: 21 Jan 2020. Last revised: 31 Dec 2020.

## Mark Aguiar
Princeton University

## Harold L. Cole
University of Pennsylvania - Department of Economics; National Bureau of Economic Research (NBER)

## Zachary Stangebye
University of Notre Dame

Date Written: January 17, 2020

### Abstract

We revisit self-fulfilling rollover crises by exploring the potential uncertainty introduced by a gap (however small) between an auction of new debt and the payment of maturing liabilities. It is well known (Cole and Kehoe 2000) that the lack of commitment at the time of auction to repayment of imminently maturing debt can generate a run on debt, leading to a failed auction and immediate default. We show that the same lack of commitment leads to a rich set of possible self-fulfilling crises, including a government that issues *more* debt because of the crisis, albeit at depressed prices. Another possible outcome is a "sudden stop" (or forced austerity) in which the government sharply curtails debt issuance. Both outcomes stem from the government's incentive to eliminate uncertainty about imminent payments at the time of auction by altering the level of debt issuance. In an otherwise standard quantitative version of the one-period debt model, including such crises increases the default probabilities by a factor of five and the spread volatility by a factor of twenty-five.

Keywords: self-fulfilling debt crises, rollover crises

JEL Classification: F1, G3

Suggested Citation: Aguiar, Mark and Chatterjee, Satyajit and Cole, Harold L. and Stangebye, Zachary, Self-Fulfilling Debt Crises, Revisited (January 17, 2020). Available at SSRN: https://ssrn.com/abstract=3521505 or http://dx.doi.org/10.2139/ssrn.3521505
http://mathhelpforum.com/discrete-math/144234-rectangles.html
# Thread: rectangles

1. ## rectangles

how many rectangles (including squares) can be found in the following diagram? you must explain why your calculations are actually counting the possible rectangles or squares in this diagram. It's a 4x8 grid... I did not know how to draw it without attaching it. I think it is an nCr problem, I'm just not quite sure

Code:
:------------------------------------------:
:    :    :    :    :    :    :    :    :
:------------------------------------------:
:    :    :    :    :    :    :    :    :
:------------------------------------------:
:    :    :    :    :    :    :    :    :
:------------------------------------------:
:    :    :    :    :    :    :    :    :
:------------------------------------------:

2. Originally Posted by ihavvaquestion how many rectangles (including squares) can be found in the following diagram? you must explain why your calculations are actually counting the possible rectangles or squares in this diagram. It's a 4x8 grid... I did not know how to draw it without attaching it. I think it is an nCr problem, I'm just not quite sure :------------------------------------------: : : : : : : : : : :------------------------------------------: : : : : : : : : : :------------------------------------------: : : : : : : : : : :------------------------------------------: : : : : : : : : : :------------------------------------------: You should enclose the diagram in [ code][ /code] tags (I put in extra spaces so that they will be visible). The spacing is all messed up. Here's an example of something in code tags:

Code:
+----+
|    |
+----+

3. ok, thanks. So is this an nCr problem? I was thinking there are 32C3, but I know that is not right

4. Originally Posted by ihavvaquestion ok, thanks. So is this an nCr problem? I was thinking there are 32C3, but I know that is not right Okay, so the diagram still doesn't look right on my system, but I think the question is equivalent to asking how many rectangles are in the following figure, which is called a $9 \times 5$ grid graph, or $G_{9,5}$.
This question is common in recreational mathematics, e.g., puzzle books and puzzle websites. You have to find a systematic way to count them all. My recommendation: Fix the upper left corner first, then fix the height, then see how many rectangles you can find. Iterate through all possible upper left corners and all possible heights given those upper left corners. Look at smaller examples if it's easier, or to convince yourself that you have a valid method. 5. Consider the horizontal and vertical lines which determine the sides of a rectangle. There are $\binom{9}{2}$ ways to pick the vertical lines and $\binom{5}{2}$ ways to pick the horizontal lines. So... 6. Originally Posted by awkward Consider the horizontal and vertical lines which determine the sides of a rectangle. There are $\binom{9}{2}$ ways to pick the vertical lines and $\binom{5}{2}$ ways to pick the horizontal lines. So... I knew the answer in terms of triangle numbers and never thought to look at it that way. Thanks! 7. ah that makes sense looking at it that way...360 rectangles...thanks , ### how many rectangles are there in th Click on a term to search for related topics.
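The $\binom{9}{2}\binom{5}{2}$ argument is easy to sanity-check by brute force, since every rectangle is determined by an unordered pair of vertical lines and an unordered pair of horizontal lines. A short Python sketch (not from the original thread):

```python
import math

# Closed form: choose 2 of the 9 vertical lines and 2 of the 5 horizontal
# lines of the 4x8 grid.
closed_form = math.comb(9, 2) * math.comb(5, 2)

# Brute-force check: enumerate every pair of distinct vertical lines and
# every pair of distinct horizontal lines.
brute_force = sum(
    1
    for left in range(9) for right in range(left + 1, 9)
    for top in range(5) for bottom in range(top + 1, 5)
)

print(closed_form, brute_force)  # → 360 360
```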
2017-02-20 22:36:40
https://argoprep.com/blog/multiples-of-11/
# Multiples of 11

Eleven is the smallest two-digit palindrome and the first two-digit prime number. In Pythagorean numerology, the number 11 has a negative connotation since it is placed between the two promising and significant numbers – 10 and 12. While 10 symbolizes completeness and perfection, 11, on the other hand, represented exaggeration, indulgence, and human sin. It was also regarded as a symbol of internal strife and insurrection. For some, the “11th hour” implies a sense of urgency because the clock is approaching twelve o’clock – which can mean that this is the final hour to complete tasks. On the brighter side, the number 11 became somewhat famous because of the American show called “Stranger Things,” wherein one of the characters was named “Eleven.”

Eleven may have a negative implication for some people, but it is still an interesting and important number to take note of. Do you want to journey with us to see how beautiful and meaningful the multiples of 11 are? Hop on as we learn exciting stuff about this remarkable number!

## Multiples of 11 are 11, 22, 33, 44, 55, 66, 77 …

One thing that we are most certain about multiples of 11 is that they will always be whole numbers. They result from any natural number multiplied by 11, which can be expressed as 11n. It is a sequence wherein the difference between two consecutive numbers is 11. Multiples of 11 can be positive or negative numbers – since any integer that we are going to pair with 11 can be either positive or negative. We just have to take note that a multiple of 11 cannot have a fractional factor and should always have a zero remainder. Are you getting excited to learn how to find the multiples of 11?

## How to find the multiples of 11?

Skip counting and multiplication can be used to determine the multiples of 11. Now, let’s try to find the first five multiples of 11 using skip counting. Skip counting is done by repeatedly adding 11 as many times as necessary.
To find the first five multiples of 11 using skip counting, we are going to start the count at 11. Then, adding 11 to 11 will give us 22. If we continue doing this process, we will have the sequence 11, 22, 33, 44, and 55 as the first five multiples of 11.

On the other hand, we can use a different approach where we just multiply 11 by any positive or negative integer. Suppose we are asked to get the 17th multiple of 11. By the multiplication method, we will have the process 11 x 17 = 187. Therefore, the 17th multiple of 11 is 187. More so, if we are told to find the negative 156th multiple of 11, it is easier to use the multiplication method instead of skip counting. Thus, using multiplication, 11 x (-156) = –1,716. Therefore, the negative 156th multiple of 11 is –1,716.

Now, let’s take a look at this table.

nth Multiple | Skip Counting | Multiplication
--- | --- | ---
1st multiple | 11 | 11 x 1 = 11
2nd multiple | 11 + 11 = 22 | 11 x 2 = 22
3rd multiple | 11 + 11 + 11 = 33 | 11 x 3 = 33
4th multiple | 11 + 11 + 11 + 11 = 44 | 11 x 4 = 44
5th multiple | 11 + 11 + 11 + 11 + 11 = 55 | 11 x 5 = 55

This table shows that no matter what method you use, you will always come up with the same results.

## Did you know that…

When a multiple of 11 is reversed, the new number is also a multiple of 11! Now, let’s see if this is true. 836 is a multiple of 11. If we reverse the number, we will now have 638. If we divide 638 by 11, we will have $$638\;\div\;11=\;58$$ . This is so cool, right? Let’s see if this will work on larger numbers. Suppose 1,095,127 is a multiple of 11; reversing the number will give us 7,215,901. Now, if we divide it by 11, we will get the result $$7,215,901\;\div\;11\;=\;655,991$$ . What an incredible and mind-blowing discovery, right?

## List of First 30 multiples of 11

We know that there is an infinite number of multiples of 11. The following shows a list of the first 30 multiples of 11, generated by multiplying 11 by numbers ranging from 1 to 30.
Product of 11 and a positive counting number | Multiples of 11
--- | ---
11 x 1 | 11
11 x 2 | 22
11 x 3 | 33
11 x 4 | 44
11 x 5 | 55
11 x 6 | 66
11 x 7 | 77
11 x 8 | 88
11 x 9 | 99
11 x 10 | 110
11 x 11 | 121
11 x 12 | 132
11 x 13 | 143
11 x 14 | 154
11 x 15 | 165
11 x 16 | 176
11 x 17 | 187
11 x 18 | 198
11 x 19 | 209
11 x 20 | 220
11 x 21 | 231
11 x 22 | 242
11 x 23 | 253
11 x 24 | 264
11 x 25 | 275
11 x 26 | 286
11 x 27 | 297
11 x 28 | 308
11 x 29 | 319
11 x 30 | 330

If you take a closer look at how the result and the counting number relate to each other, you will notice that the units digit of the positive counting number is the same as the units digit of the result.

11 x 8 = 88
11 x 19 = 209
11 x 24 = 264

Can you see any other pattern that will help us easily distinguish that a number is a multiple of 11?

## Solving problems involving multiples of 11

Finding multiples of 11 isn’t all that challenging, right? Now, let’s apply what we’ve learned about multiples of 11 by solving these real-life situation problems.

### Problem #1

Alexa is tasked to count all the collected blueberries of her family. There are six baskets of blueberries. The first basket contains 11 blueberries; the second one has 22 blueberries, and so on. How many blueberries did Alexa’s family collect?

For us to know the total number of blueberries Alexa’s family has collected, we should take note that the six baskets contain blueberries that are in the sequence of multiples of 11. Thus, the six baskets have 11, 22, 33, 44, 55, and 66 blueberries. Now, all we need to do is add all the blueberries inside the baskets. Thus, 11 + 22 + 33 + 44 + 55 + 66 = 231. Therefore, Alexa’s family was able to gather 231 blueberries.

Are you getting the hang of it already? That’s good, as we are going to try to solve another problem.

### Problem #2

Sabrina is preparing herself for a math competition. She has 65 days to do training and practice. She pledges to train herself to do mental math every day.
On her first day, she plans to solve 11 problems; 22 on the following day, 33 on the third day, and so on. How many math problems will Sabrina need to solve on the last day of her training?

We need to find the number of math problems Sabrina needs to solve on her 65th day of training. In the given problem, we can note that the number of math problems increases by 11 every single day. Thus, we can say that it follows the sequence of multiples of 11. Now, to get the number of problems she needs to solve on the 65th day, we will multiply 65 by 11. Hence, 65 x 11 = 715. Therefore, Sabrina needs to solve 715 math problems using mental math. That’s a lot of math problems in a day, but we will surely root for Sabrina to win that competition! Now, we are also rooting for you to ace these three practice exercises!
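The reversal fact and both word problems above can be checked in a few lines of Python (this snippet is not part of the original article):

```python
# "Did you know": reversing any multiple of 11 gives another multiple of 11.
def reversed_number(n: int) -> int:
    return int(str(n)[::-1])

assert all(reversed_number(11 * k) % 11 == 0 for k in range(1, 1001))

# Problem #1: six baskets holding the first six multiples of 11.
total_blueberries = sum(11 * k for k in range(1, 7))

# Problem #2: problems solved on the 65th day, when 11 more are added daily.
day_65_problems = 11 * 65

print(total_blueberries, day_65_problems)  # → 231 715
```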
2022-07-03 06:21:15
https://archive.baty.net/2009/trying-the-zeiss-biogon-35mm/
Trying the Zeiss Biogon 35mm

The problem with being exposed to the best of anything is that everything else becomes, well, not the best. A few years ago I owned a Leica Summicron 35mm ASPH. Best 35mm lens on the planet. Then I sold it with the rest of the film gear when I went insane and switched to digital. Now that I’m back in the Leica fold I had to choose a 35mm lens. I want the Summicron, but finding a good used copy for less than $1500 is tricky. Right now that’s just too much money.

On the other hand, the Voigtlander lenses get great reviews, and they’re inexpensive. I went with a Color Skopar 35mm f2.5 for around $300. I don’t like it at all. It’s not the images; they’re fine for the most part. What I don’t like is the handling. Those little plastic “ears” on the aperture ring drive me nuts. It’s so compact that it can be difficult to use. For sale.

I needed to find some reasonable compromise between the little VC lens and the crazy-expensive Summicron ASPH. The Zeiss optics have always seemed to be highly regarded, and not as costly as the Leicas. I found a Biogon 35 f/2 for $650. I should be getting it in the next few days. Assuming the optical quality is as good as everyone says and that it doesn’t suffer from the “Zeiss Wobble,” the only variable will be handling. I’m used to tab-focused lenses. The Biogon has a little knurled “nub” instead. Can’t wait to give it a try on the M4! If I don’t love it, I’m going to sell the other kidney and get the Summicron. I hope it won’t come to that.
2021-04-15 14:37:15
https://theaveragedev.com/my-two-cents-on-adding-hooks/
# My two cents on adding hooks

The other side of using what I deem to be developer options is creating them in the code. That’s a lesson I’ve learnt through trial and error: the former being my trying to catch up in my general-purpose customized plugins with my latest particular needs by adding options, and the latter being the temptation to foresee my future needs, and maybe someone else’s, and trying to create options for any possible scenario.

## Then add hooks for you and others

Guides about using hooks abound, but there are two things I’ve learnt about adding my own hooks in code and using them:

1. Give the hooks clear names
2. Give hooking functions clear names

### Clear hook names

I use the “who, when, what” rule to give hooks a name and am not scared of long hook names; an example might be

route_pages-before-adding_route

and something even more specific like

route_pages-before-adding_route-hello

The “when” part might be optional, but the other parts are not.

### Clear function names

I spend a lot of time debugging hooks and hooked functions in a screen like this one (screenshot: Init hook callbacks), and using OOP techniques I found that naming classes and methods in a descriptive way helps reading them later. I’m targeting this information layout specifically to have to go and read code as little as possible, and found that the method name is sufficiently descriptive when carrying the “conditional, action, target” information for filters and the “conditional, action” information for actions. This way a hook like the one from the section above could have a hooked method like

RouteMetaHandler::maybeUpdateName

I’m not sticking to any more guidelines in my code but the simple one underlying much of what OOP tries to do: “if you can’t name it (a class or a method), then it’s probably doing too much”.
## To protect and to serve

Again, out of my experience, I’ve come to understand that a hook, be it an action or a filter, should stick to two rules:

1. Protect its context from bad code
2. Serve meaningful information

Just reading the WordPress source code is illuminating: filters always undergo a conditional check so as not to allow any third-party hooking function to return non-legit objects and values; filters and actions will always serve (passing it as a parameter to called functions) a meaningful context.

## Filters and actions on demand

I’ve found that I’m afflicted by a chronic lack of imagination when it comes to imagining myself using a filter or action I’m adding, but I immediately know what I want when I’d like a filter or action to be there. My current workflow is not to add any filter or action until I need one, and to rely solely on the hooks mechanism to expand functionality from the first time I need to do so onward.
2020-07-09 10:40:07
https://www.mockat.com/content/cat/surds-indices
## Surds & Indices

Surds are less common in MBA entrance tests, including CAT. However, the concept of surds is quite simple and could be applied in other calculations. Note that we may not have direct questions, but these concepts might have to be applied while solving other algebraic questions. Questions on indices are quite common in MBA entrance tests. The basics of indices are already covered in the lesson on Number Theory.

### 1. Surds

A surd is an irrational number which includes the root of an integer. Surds can also be expressed as the sum of a rational number and an irrational number. The following are examples of surds:

$\sqrt{5}, 2 + \sqrt{5}, 5^{\frac{1}{3}} + 6^{\frac{2}{3}}, \sqrt[7]{56} + \sqrt[9]{67}$

Surds where the highest power is $\dfrac{1}{2}$ are called quadratic surds, and those where the highest power is $\dfrac{1}{3}$ are called cubic surds. From the perspective of entrance tests, we will primarily be tested on quadratic surds, and not cubic or higher-order surds.

Where $a, b, c$ and $d$ are rational numbers and $b$ and $d$ are not perfect squares, if $a + \sqrt{b} = c + \sqrt{d}$, then $a = c$ and $b = d$. For instance, if

$x + \sqrt{y} = 4 + 2 \sqrt{5}$

$\implies x + \sqrt{y} = 4 + \sqrt{2^{2} \times 5}$

$\implies x + \sqrt{y} = 4 + \sqrt{20}$

$\therefore x = 4, y = 20$

To summarise, if two surds are equal, then their rational parts are equal and their irrational parts are equal.

#### 1.1 Conjugate of surds

Quadratic surds can be eliminated if each of their terms is squared. As $(a + b)(a - b) = a^{2} - b^{2}$, for the term $\bold{(a + b)}$, the conjugate is $\bold{(a - b)}$ and vice versa. If a surd in the denominator has to be removed, then we multiply and divide by the conjugate of the denominator.
$\therefore \dfrac{4}{\sqrt{5} + \sqrt{3}} = \dfrac{4}{\sqrt{5} + \sqrt{3}} \times \dfrac{\sqrt{5} - \sqrt{3}}{\sqrt{5} - \sqrt{3}} = \dfrac{4 \times (\sqrt{5} - \sqrt{3})}{(\sqrt{5})^{2} - (\sqrt{3})^{2}} = \dfrac{4 \times (\sqrt{5} - \sqrt{3})}{5 - 3} = 2 \times (\sqrt{5} - \sqrt{3})$

Likewise,

$\dfrac{2 + \sqrt{3}}{5 - 2 \sqrt{5}} = \dfrac{2 + \sqrt{3}}{5 - 2 \sqrt{5}} \times \dfrac{5 + 2 \sqrt{5}}{5 + 2 \sqrt{5}} = \dfrac{10 + 5 \sqrt{3} + 4 \sqrt{5} + 2 \sqrt{15}}{5}$

### Example 1

Where $a$ and $b$ are rational numbers, if $\dfrac{3 + \sqrt{5}}{3 - \sqrt{5}} = a + \sqrt{b}$, then $a + 2b =$

### Solution

$a + \sqrt{b} = \dfrac{3 + \sqrt{5}}{3 - \sqrt{5}} \times \dfrac{3 + \sqrt{5}}{3 + \sqrt{5}} = \dfrac{9 + 5 + 6 \sqrt{5}}{9 - 5}$

$\implies a + \sqrt{b} = \dfrac{7 + 3 \sqrt{5}}{2} = \dfrac{7}{2} + \sqrt{\dfrac{3^{2} \times 5}{2^{2}}} = \dfrac{7}{2} + \sqrt{\dfrac{45}{4}}$

$\therefore a = \dfrac{7}{2}$ and $b = \dfrac{45}{4}$

$a + 2b = \dfrac{7}{2} + \dfrac{45}{2} = 26$
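Both rationalisations above can be verified numerically; here is a quick Python check (not part of the lesson) using floating-point arithmetic:

```python
import math

# Verify 4/(√5 + √3) = 2(√5 − √3).
lhs = 4 / (math.sqrt(5) + math.sqrt(3))
rhs = 2 * (math.sqrt(5) - math.sqrt(3))
assert math.isclose(lhs, rhs)

# Verify Example 1: (3 + √5)/(3 − √5) = 7/2 + √(45/4),
# so a = 7/2, b = 45/4 and a + 2b = 26.
value = (3 + math.sqrt(5)) / (3 - math.sqrt(5))
assert math.isclose(value, 7 / 2 + math.sqrt(45 / 4))

a, b = 7 / 2, 45 / 4
print(a + 2 * b)  # → 26.0
```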
2020-06-02 00:46:38
https://www.ademcetinkaya.com/2022/09/shortlong-term-stocks-eqt-stock-forecast_19.html
Prediction of the trend of the stock market is very crucial. If someone has robust forecasting tools, then he/she will increase the return on investment and can get rich easily and quickly. Because there are a lot of factors that can influence the stock market, the stock forecasting problem has always been very complicated. Support Vector Regression is a tool from machine learning that can build a regression model on historical time series data for the purpose of predicting the future trend of the stock price. We evaluate EQT prediction models with Multi-Task Learning (ML) and Independent T-Test1,2,3,4 and conclude that the EQT stock is predictable in the short/long term. According to price forecasts for the (n+4 weeks) period, the dominant strategy among neural networks is to Hold EQT stock.

Keywords: EQT, stock forecast, machine learning based prediction, risk rating, buy-sell behaviour, stock analysis, target price analysis, options and futures.

## Key Points

1. What is neural prediction?
2. Probability Distribution
3. Should I buy stocks now or wait amid such uncertainty?

## EQT Target Price Prediction Modeling Methodology

Predicting a stock index with traditional time series analysis has proven to be difficult; an artificial neural network may be more suitable for the task. A neural network has the ability to extract useful information from a large set of data. This paper presents a review of the literature on the application of artificial neural networks to stock market prediction; from this literature we found that artificial neural networks are very useful for predicting world stock markets.
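As a toy illustration only of regression-based trend forecasting: the sketch below fits an ordinary least-squares line to an invented price series and extrapolates one step ahead. It is not the SVR or neural models the article alludes to, and the prices are made up.

```python
# Toy trend forecast: ordinary least squares on a made-up price series.
prices = [40.0, 41.2, 40.8, 42.1, 42.9, 43.5]  # invented data
n = len(prices)
xs = list(range(n))

mean_x = sum(xs) / n
mean_y = sum(prices) / n

# Slope and intercept of the least-squares line y = intercept + slope * x.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, prices)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Extrapolate one step ahead of the observed series.
next_price = intercept + slope * n
print(round(slope, 3), round(next_price, 2))  # → 0.683 44.14
```

Real forecasting models (SVR, neural networks) differ mainly in the function class they fit, not in this basic fit-then-extrapolate loop.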
We consider the EQT Stock Decision Process with Independent T-Test, where A is the set of discrete actions of EQT stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4

F(Independent T-Test)5,6,7 =

$$\begin{bmatrix} p_{a1} & p_{a2} & \dots & p_{an} \\ \vdots & \vdots & & \vdots \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ \vdots & \vdots & & \vdots \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ \vdots & \vdots & & \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{bmatrix} \times R(\text{Multi-Task Learning (ML)}) \times S(n) \to (n+4 \text{ weeks}), \qquad \vec{S} = (s_1, s_2, s_3)$$

n: time series to forecast
p: price signals of EQT stock
j: Nash equilibria
k: dominated move
a: best response for target price

For further technical information as to how our model works, we invite you to visit the article below:

How do AC Investment Research machine learning (predictive) algorithms actually work?

## EQT Stock Forecast (Buy or Sell) for (n+4 weeks)

Sample Set: Neural Network
Stock/Index: EQT
Time series to forecast n: 19 Sep 2022 for (n+4 weeks)

According to price forecasts for the (n+4 weeks) period, the dominant strategy among neural networks is to Hold EQT stock.

X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Yellow to Green): *Technical Analysis%

## Conclusions

EQT is assigned a short-term B2 & long-term B2 forecasted stock rating. We evaluate the prediction models Multi-Task Learning (ML) with Independent T-Test1,2,3,4 and conclude that the EQT stock is predictable in the short/long term. According to price forecasts for the (n+4 weeks) period, the dominant strategy among neural networks is to Hold EQT stock.
### Financial State Forecast for EQT Stock Options & Futures

Rating | Short-Term | Long-Term Senior
--- | --- | ---
Outlook* | B2 | B2
Operational Risk | 80 | 38
Market Risk | 33 | 61
Technical Analysis | 35 | 75
Fundamental Analysis | 58 | 57
Risk Unsystematic | 72 | 37

### Prediction Confidence Score

Trust metric by Neural Network: 76 out of 100 with 840 signals.

## References

1. Bai J. 2003. Inferential theory for factor models of large dimensions. Econometrica 71:135–71
2. Farrell MH, Liang T, Misra S. 2018. Deep neural networks for estimation and inference: application to causal effects and other semiparametric estimands. arXiv:1809.09953 [econ.EM]
3. J. Peters, S. Vijayakumar, and S. Schaal. Natural actor-critic. In Proceedings of the Sixteenth European Conference on Machine Learning, pages 280–291, 2005.
4. O. Bardou, N. Frikha, and G. Pagès. Computing VaR and CVaR using stochastic approximation and adaptive unconstrained importance sampling. Monte Carlo Methods and Applications, 15(3):173–210, 2009.
5. Dietterich TG. 2000. Ensemble methods in machine learning. In Multiple Classifier Systems: First International Workshop, Cagliari, Italy, June 21–23, pp. 1–15. Berlin: Springer
6. H. Kushner and G. Yin. Stochastic approximation algorithms and applications. Springer, 1997.
7. Blei DM, Lafferty JD. 2009. Topic models. In Text Mining: Classification, Clustering, and Applications, ed. A Srivastava, M Sahami, pp. 101–24. Boca Raton, FL: CRC Press

## Frequently Asked Questions

Q: What is the prediction methodology for EQT stock?
A: EQT stock prediction methodology: We evaluate the prediction models Multi-Task Learning (ML) and Independent T-Test.

Q: Is EQT stock a buy or sell?
A: The dominant strategy among neural networks is to Hold EQT stock.

Q: Is EQT stock a good investment?
A: The consensus rating for EQT is Hold, with an assigned short-term B2 & long-term B2 forecasted stock rating.

Q: What is the consensus rating of EQT stock?
A: The consensus rating for EQT is Hold.

Q: What is the prediction period for EQT stock?
A: The prediction period for EQT is (n+4 weeks)
2022-10-01 11:40:58
https://math.stackexchange.com/questions/2687769/solve-the-equation-cos2x-cos22x-cos23x-1
# Solve the equation $\cos^2x+\cos^22x+\cos^23x=1$

Solve the equation: $$\cos^2x+\cos^22x+\cos^23x=1$$ IMO 1962/4

My first attempt in solving the problem is to simplify the equation and express all terms in terms of $\cos x$. Even without an extensive knowledge of trigonometric identities, the problem is solvable. \begin{align} \cos^22x&=(\cos^2x-\sin^2x)^2\\ &=\cos^4x+\sin^4x-2\sin^2x\cos^2x\\ &=\cos^4x+(1-\cos^2x)^2-2(1-\cos^2x)\cos^2x\\ &=\cos^4x+1-2\cos^2x+\cos^4x-2\cos^2x+2\cos^4x\\ &=4\cos^4x-4\cos^2x+1 \end{align} Without knowledge of other trigonometric identities, $\cos3x$ can be derived using only Ptolemy's identities. However, for the sake of brevity, let $\cos 3x=4\cos^3x-3\cos x$: \begin{align} \cos^23x&=(4\cos^3x-3\cos x)^2\\ &=16\cos^6x+9\cos^2x-24\cos^4x \end{align} Therefore, the original equation can be written as: $$\cos^2x+4\cos^4x-4\cos^2x+1+16\cos^6x+9\cos^2x-24\cos^4x-1=0$$ $$16\cos^6x-20\cos^4x+6\cos^2x=0$$ Letting $y=\cos x$, we now have a polynomial equation: $$16y^6-20y^4+6y^2=0$$ $$y^2(16y^4-20y^2+6)=0\Rightarrow y^2=0 \Rightarrow x=\cos^{-1}0=\bbox[yellow,10px]{90^\circ}$$ From the other factor above, we let $z=y^2$, and we have the quadratic equation: $$16z^2-20z+6=0\Rightarrow 8z^2-10z+3=0$$ $$(8z-6)(z-\frac12)=0\Rightarrow z=\frac34 \ \& \ z=\frac12$$ Since $z=y^2$ and $y=\cos x$, we have: $$\biggl( y\rightarrow\pm\frac{\sqrt{3}}{2}, y\rightarrow\pm\frac{\sqrt{2}}2 \biggr)\Rightarrow \biggl(x\rightarrow\cos^{-1}\Bigl(\pm\frac{\sqrt{3}}{2}\Bigr),x\rightarrow\cos^{-1}\Bigl(\pm\frac{\sqrt{2}}2\Bigr)\biggr)$$ And thus the complete set of solutions is: $$\bbox[yellow, 5px]{90^\circ, 30^\circ, 150^\circ, 45^\circ, 135^\circ}$$ As I do not have a copy of the answers, I still hope you can verify the accuracy of my solution.

## But more importantly...

Seeing the values of $x$, is there a more intuitive and simpler way of finding $x$ that does away with the lengthy computation?
• This post doesn't have the squares, but it could give some insight as to how one may think about this problem. – Arthur Mar 12 '18 at 13:12
• This link gives you the solutions. – Jose Arnaldo Bebita-Dris Mar 12 '18 at 13:15
• I expect there to be a much more elegant solution to this as it is an IMO problem. Now only someone has to find it... – vrugtehagel Mar 12 '18 at 13:15
• @vrugtehagel I believe the link to the solution is already an elegant answer! – John Glenn Mar 12 '18 at 13:17
• Yup, I think so too! It was shortly posted before my comment, so I hadn't seen it. I advise @JoseArnaldoBebitaDris to summarize that solution and post it as answer, to avoid this question lingering in the unanswered section of this website – vrugtehagel Mar 12 '18 at 13:21

This is a summary of the solution found in this hyperlink. We can write the LHS as a cubic function of $\cos^2 x$. This means that there are at most three distinct values of $\cos^2 x$ among the solutions. Hence, we look for values of $x$ that satisfy the equation and produce three distinct values of $\cos^2 x$. Indeed, we find that $$\frac{\pi}{2}, \frac{\pi}{4}, \frac{\pi}{6}$$ all satisfy the equation, and produce three different values for $\cos^2 x$, namely $0, \frac{1}{2}, \frac{3}{4}$. Lastly, we solve the resulting equations $$\cos^2 x = 0$$ $$\cos^2 x = \frac{1}{2}$$ $$\cos^2 x = \frac{3}{4}$$ separately. We conclude that our solutions are: $$x=\frac{(2k+1)\pi}{2}, \frac{(2k+1)\pi}{4}, \frac{(6k+1)\pi}{6}, \frac{(6k+5)\pi}{6}, \forall k \in \mathbb{Z}.$$ You can shorten the argument by noting at the outset that $$\cos3x=4\cos^3x-3\cos x=(4\cos^2x-3)\cos x$$ so if we set $y=\cos^2x$ we get the equation $$y+(2y-1)^2+y(4y-3)^2=1$$ When we do the simplifications, we get $$2y(8y^2-10y+3)=0$$ The roots of the quadratic factor are $3/4$ and $1/2$.
A different strategy is to note that $\cos x=(e^{ix}+e^{-ix})/2$, so the equation can be rewritten $$e^{2ix}+2+e^{-2ix}+e^{4ix}+2+e^{-4ix}+e^{6ix}+2+e^{-6ix}=4$$ Setting $z=e^{2ix}$ we get $$2+z+z^2+z^3+\frac{1}{z}+\frac{1}{z^2}+\frac{1}{z^3}=0$$ or, equivalently, $$z^6+z^5+z^4+2z^3+z^2+z+1=0$$ that can be rewritten (noting that $z\ne1$), $$\frac{z^7-1}{z-1}+z^3=0$$ or $z^7+z^4-z^3-1=0$ that can be factored as $$(z^3+1)(z^4-1)=0$$ Hence we get (discarding the spurious root $z=1$) $$2x=\begin{cases} \dfrac{\pi}{3}+2k\pi \\[6px] \pi+2k\pi \\[6px] \dfrac{5\pi}{3}+2k\pi \\[12px] \dfrac{\pi}{2}+2k\pi \\[6px] \pi+2k\pi \\[6px] \dfrac{3\pi}{2}+2k\pi \end{cases} \qquad\text{that is}\qquad x=\begin{cases} \dfrac{\pi}{6}+k\pi \\[6px] \dfrac{\pi}{2}+k\pi \\[6px] \dfrac{5\pi}{6}+k\pi \\[6px] \dfrac{\pi}{4}+k\pi \\[6px] \dfrac{3\pi}{4}+k\pi \end{cases}$$

• Great! An elegant solution too! – John Glenn Mar 12 '18 at 14:42

Hint: $$0=\cos^2x+\cos^22x+\cos^23x-1$$ $$=\cos(3x+x)\cos(3x-x)+\cos^22x$$ $$=\cos2x(\cos4x+\cos2x)$$
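All five angle families found in the answers can be checked numerically; a quick Python sanity check (not part of the original thread):

```python
import math

def f(x: float) -> float:
    """LHS minus RHS of the IMO equation; zero at a solution."""
    return math.cos(x) ** 2 + math.cos(2 * x) ** 2 + math.cos(3 * x) ** 2 - 1.0

# The solutions in degrees: 90, 30, 150, 45, 135 (mod the periodicities above).
for deg in (90, 30, 150, 45, 135):
    assert abs(f(math.radians(deg))) < 1e-12

# A non-solution for contrast: at 60 degrees the LHS exceeds 1 by 0.5.
print(round(f(math.radians(60)), 3))  # → 0.5
```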
https://codereview.stackexchange.com/questions/68196/should-i-copy-list-both-in-constructor-and-in-getter/68198
# Should I copy list both in constructor and in getter?

I have a simple immutable class:

    public class ColumnsWrapper implements Columns {
        private final List<String> columnNames;

        public ColumnsWrapper(List<String> columnNames) {
            this.columnNames = new ArrayList<String>(columnNames);
        }

        @Override
        public List<String> getNames() {
            return new ArrayList<String>(columnNames);
        }
    }

Should I always save a copy of the passed list in the constructor and return a copy of the saved list in the getter? This class is from a library project and can be used in several projects in the future. The purpose of this class is only to store an immutable list of column names.

• return Collections.unmodifiableList(columnNames); instead of return new ArrayList<String>(columnNames); would be more efficient. – assylias Oct 29 '14 at 8:28
• @assylias: See my comment on this answer for why that is not a good idea. (It is also premature optimization, as there will most likely not be thousands of column names. And if there are, some other place will surely be the bottleneck.) – Nicolai Oct 29 '14 at 11:11
• @NicolaiParlog I suggested to make the change in the getNames method, not in the constructor. It is an optimisation, but I would not call it premature in the sense that it does not make the code more complex and will clearly perform better (if only because it will generate less garbage), so why not do it? – assylias Oct 29 '14 at 11:14
• @assylias I am sorry, you are absolutely right. I just missed the return. :) I still advise to already make the instance owned by the class unmodifiable. This makes the class truly immutable and allows the getter to do no extra work. – Nicolai Oct 29 '14 at 20:50
• @NicolaiParlog I had not seen your answer - it makes sense indeed. – assylias Oct 29 '14 at 21:46

You claim you have a simple immutable class, but you don't. All public methods on immutable classes should also be final.
For example, you claim your class is immutable, but I can change it with:

    class MyColumnsWrapper extends ColumnsWrapper {
        private final List<String> mutableColumnNames;

        public MyColumnsWrapper(List<String> columnNames) {
            super(columnNames);
            mutableColumnNames = columnNames;
        }

        @Override
        public List<String> getNames() {
            return mutableColumnNames;
        }
    }

    ...
    ColumnsWrapper wrapper = new MyColumnsWrapper(mycols);
    ...

In other words, to be immutable, you also need to have non-overridable methods. The best and easiest way to accomplish that is to make the class final. Apart from that, yes, your class is a decent immutable instance. Note that the immutability depends on the fact that the list consists of String values, which are also immutable.

• getNames() still returns a mutable Collection. Example: getNames().remove(2) – dit Oct 28 '14 at 17:14
• @dit - yes, even in the original class, the collection returned by the get is mutable, but the base class is not. – rolfl Oct 28 '14 at 17:16
• From the Question: "The purpose of this class is only to store immutable list of column names." – dit Oct 28 '14 at 17:17
• @dit - I think there's a miscommunication here, I am not sure I understand your concerns. Let's chat about it in the 2nd monitor – rolfl Oct 28 '14 at 17:19

It depends on what the goal is. Can the list never change after construction? Can it change, but only the ColumnsWrapper can do so? Or can it change and everyone is allowed to do that?

If the list can not change after construction, consider using an ImmutableList (from Google's Guava). You should then declare the field columnNames and the return value of getNames() to be of that type. You then either create an immutable list during construction or use that type for the constructor argument as well.

If the list can change after construction (but only ColumnsWrapper can do so), the getter should return an unmodifiableList. Note that this will cause exceptions if the client of your class tries to manipulate the list.
You should then also copy during construction (as you currently do). In any case, you should document the behavior with comments on the respective public members (i.e. the constructor and the getter).

## Edit

Ok, so the class has to be immutable. As @rolfl explains, it can be subclassed, so this is not yet the case. You can either make the class final or make the constructor private and provide a static factory method. Furthermore, you have to make sure that the list can not be modified. The easiest and most intention-revealing way I know of is the ImmutableList I mentioned above. Another solution would look like this:

    public class ColumnsWrapper implements Columns {
        private final List<String> columnNamesUnmodifiable;

        public ColumnsWrapper(List<String> columnNames) {
            List<String> columnNamesCopy = new ArrayList<>(columnNames);
            columnNamesUnmodifiable = Collections.unmodifiableList(columnNamesCopy);
        }

        // OPTION A
        @Override
        public List<String> getNamesUnmodifiable() {
            return columnNamesUnmodifiable;
        }

        // OPTION B
        @Override
        public Iterable<String> getNamesUnmodifiable() {
            return columnNamesUnmodifiable;
        }
    }

Note that I changed the name to inform callers that they will get an unmodifiable instance. I also provided an additional option (you have to choose one) with a different return type. If you are sure that callers will only iterate over the returned instance (as is often the case), the Iterable will suffice. But it can not be used to add elements, and since removal is an optional operation (i.e. many iterators support no removal), it better conveys immutability. In any case, the interface documentation should also make that clear.

• I'm pretty sure you can't do that in Java. You declare two methods of the same class with the same name and no parameters, and that will fail to compile. It's also not necessary, as List<String> implements Iterable<String> – raptortech97 Oct 29 '14 at 15:54
• Of course you are right, and it was not my intent to propose both methods.
Rather, I'd recommend using the second. I improved the answer to clarify why. – Nicolai Oct 29 '14 at 20:57

I would use unmodifiableList in that case. That way:

    public class ColumnsWrapper implements Columns {
        private final List<String> columnNames;

        public ColumnsWrapper(List<String> columnNames) {
            this.columnNames = Collections.unmodifiableList(columnNames);
        }

        @Override
        public List<String> getNames() {
            return columnNames;
        }
    }

EDIT: (in order to stop academic discussion) Create an instance of the class this way:

    Columns columns = new ColumnsWrapper(Arrays.asList("Column1", "Column2", "Column3"));

or

    Columns columns = new ColumnsWrapper(new ArrayList<>(initialColumnList));

• This class is not immutable. Collections.unmodifiableList does not make the specified instance unmodifiable, only the returned one. But the latter is still backed by the former. This means that whoever invoked the constructor still has a reference to the modifiable list and can change the class's state. – Nicolai Oct 28 '14 at 18:31
• @NicolaiParlog who needs to make the specified instance unmodifiable? "The purpose of this class is only to store immutable list of column names" – dit Oct 28 '14 at 21:25
• You can make it immutable by doing this.columnNames = Collections.unmodifiableList(new ArrayList<String>(columnNames)); - make a new copy so you own all access to it, and then make the only view into that list be immutable. – corsiKa Oct 28 '14 at 22:39
• @dit: But the way you wrote it, the list in the field columnNames is not immutable. It is only an unmodifiable view on the constructor argument. Whoever holds the original reference on the argument can still change the column names. This makes the class mutable. – Nicolai Oct 29 '14 at 11:08

It depends on the semantics of your code:

• If you want to allow modification of the array obtained from the getter (like changing one column name, adding more columns from the outside, etc.)
then you should not return a copy of it.
• On the other hand, if you want to have the array appear as immutable to the outside, then you should definitely return a copy of it in the getter too.

In both cases, you should document the behavior.

• The first interpretation doesn't sound much like an "immutable class" to me. – 200_success Nov 1 '14 at 9:11
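Pulling the answers and comments above together: the pattern most commenters converge on is to copy in the constructor, then keep only an unmodifiable view of that copy, so the getter can hand the list out with no extra work. A sketch of that combined approach (the Columns interface from the question is omitted here so the snippet stands alone; the other names follow the question):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class ColumnsWrapper {   // final, so it cannot be subclassed
    private final List<String> columnNames;

    public ColumnsWrapper(List<String> columnNames) {
        // Defensive copy first, so later changes to the caller's list cannot
        // reach us; then keep only an unmodifiable view of that copy.
        this.columnNames =
                Collections.unmodifiableList(new ArrayList<>(columnNames));
    }

    public List<String> getNames() {
        return columnNames;   // safe to hand out: the view is unmodifiable
    }

    public static void main(String[] args) {
        List<String> original = new ArrayList<>(List.of("a", "b"));
        ColumnsWrapper wrapper = new ColumnsWrapper(original);
        original.add("c");                      // the caller mutates their list...
        System.out.println(wrapper.getNames()); // ...but this still prints [a, b]
    }
}
```

This addresses both objections raised above: the defensive copy cuts the link to the caller's list (Nicolai's point), and the unmodifiable view plus the final class keep callers and subclasses from mutating the wrapper's state (rolfl's point).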
https://en.wikipedia.org/wiki/Wikipedia_talk:Ignore_all_rules/Archive_16
# Wikipedia talk:Ignore all rules/Archive 16 ## It didn't occur to me. No name change for WP:IAR is likely to occur at this point in time. No change to the 12words is likely to occur at this time. WP:UIAR is the most likely prospect for a change to the page, if FG and Chardish can just hang in, and keep the debate going for Two Years or so. Yes/No --Newbyguesses - Talk 23:48, 12 March 2008 (UTC) Personally, I am working on rewriting the last paragraph of UIAR so that it says enough to be useful without saying anything wrong. I haven't seen anyone challenge what is in the rest of UIAR, which suggests to me that it is pretty on-target. Once I fix that last paragraph (with continuing feedback from others), I'll formally propose putting the full text of the C/FG version on WP:IAR. With the availability of UIAR, I don't think we'll have to continue perpetuating IAR as an unexplainable rule. The explanation needs a tiny bit more tweaking, then we can proceed.--Father Goose (talk) 01:26, 13 March 2008 (UTC) ## So it just occurred to me... ...that this page needs to be moved to Wikipedia:Ignore rules and WP:IR. Written policy simply describes actual practice--nothing more, nothing less. There are three rules that we can never ignore, full stop, in this order: WP:NPOV, WP:V, and WP:BLP. WP:IAR as written is factually wrong. Lawrence § t/e 19:00, 11 March 2008 (UTC) But its Wikipedia:Ignore all rules..? WP:IR (Wikipedia:WikiProject Irish Republicanism) needs to be moved to →WP:WPIR (WikiProject). The current WP:IR can be a Disambig for WP:IAR, WP:WPFR and the newly moved WP:WPIR. Make sense?--Hu12 (talk) 19:36, 11 March 2008 (UTC) Whichever works, for the redirects. Is there any objection to this? There is simply no valid basis to ever ignore NPOV, V, or BLP that I can imagine. Does anyone have a scenario where you can? Lawrence § t/e 19:47, 11 March 2008 (UTC) If we're to rename the policy, Wikipedia:Ignore the rules sounds better and retains the same meter. 
Also, WP:ITR is available. —David Levy 20:12, 11 March 2008 (UTC) I disagree. I believe that those rules can be ignored in situations where doing so improves or helps to maintain the Wikipedia. I'm somewhat puzzled by your assertion that "written policy simply describes actual practice" (my emphasis), but if this is the case, we can simply be prepared to ignore those rules in such contingencies, if it becomes necessary, and our written policy will already describe our actual practice. With regards to David's suggestion, I feel that there is a problem in that 'ignore the rules' cannot be easily disambiguated from 'ignore some rules.' Since "ignore all rules" are the first three words of our canonical twelve word policy, to replace them with 'ignore the rules' is a very radical shift in the meaning of that policy. (which rules, one might later ask? The other rules, besides WP:NPOV, WP:V, and WP:BLP, another might answer.) It worries me a little bit, if the same people who are opposed to the explication of the policy - because, to my understanding of the argument, this might lead to a drift in the meaning - prove to be those who are most comfortable with radically re-writing the policy so that it means something other than what it means now. I imagine that this is a misperception on my part, but it's discomforting. — 69.49.44.11 (talk) 22:06, 11 March 2008 (UTC) For the record, I neither support nor oppose a move. I'm merely suggesting an alternative title to Wikipedia:Ignore rules that could be used if such an event occurs. —David Levy 22:19, 11 March 2008 (UTC) Understood, and thank you for the clarification. I had noted the 'if,' but wasn't sure if you meant to encourage the idea. — 69.49.44.11 (talk) 22:24, 11 March 2008 (UTC) I say, keep the "all", and let each editor struggle with wrapping their head around that concept. That's a valuable learning process. 
-GTBacchus(talk) 23:02, 11 March 2008 (UTC) I would like to remind everyone, especially User:Lawrence (and GTBacchus) of this from Wikipedia:Five pillars. Wikipedia's official policies and guidelines can be summarized as five pillars that define the character of the project. NOWHERE are WP:NPOV, WP:V, WP:NOR, WP:NPA and the WP:GFDL, as considered at WP:FIVE, described as rules. They are PRINCIPLES. These Principles cannot be negated, except by ending the project. ALL RULES may be ignored. The specific "words" which make up WP:V, for instance, may change slightly, and some wording may in fact be ignored if it leads to trouble. But the PRINCIPLE which is expressed in WP:V may not be ignored. (Which is not to say that a "change of name" may not be a good thing, though it has been suggested before, with no success.) --Newbyguesses - Talk 00:58, 12 March 2008 (UTC) I think that's just right, Newbyguesses. Rules, being text documents, may be ignored. All of them. In fact, I'm a little leery of anyone who doesn't ignore WP:CIVIL - if you have to read a policy page here to figure out what it means to be civil, then something's wrong. I can ignore ${\displaystyle F={\frac {Gm_{1}m_{2}}{r^{2}}}}$ - it's only an approximation, and not useful in all circumstances, nor remotely necessary in most circumstances - but I'd better not ignore gravity. -GTBacchus(talk) 01:06, 12 March 2008 (UTC) Please expand here, GTB. I am waiting to hear you say that we cannot, without severe consequence, "ignore" our principles. --Newbyguesses - Talk 01:11, 12 March 2008 (UTC) I dunno, NBG, is this some kind of trap? If I "expand", then you might need to change the settings on the archive bot again, and program it to just wait by this talk page with a baseball bat for me to show my face. ;) The consequences of ignoring gravity are that I don't try to ski anymore. There are certain relevant and simple facts: This is an encyclopedia (as opposed to a rumour mill or a publisher of original work).
Civility opens doors and smooths paths in life (as opposed to dickishness, which earns one enemies and compromises one's effectiveness in a collaborative environment). The Wikimedia Foundation owns the website, and they get to make licensing decisions with material that we submit. Ignoring these facts won't dislocate your shoulder, but you won't edit the wiki for long that way either, and you'll generate a lot of heat on your way down. (You won't get to chat up the ski bunnies at the lodge bar with fictions about your epic wipeout, either) Yes. I agree that the principles expressed at WP:5P can't really be ignored, if you wish to claim that you're working on the same project as the rest of us. Fidelity to those principles may require that the text of the pages be ignored, at times. -GTBacchus(talk) 01:28, 12 March 2008 (UTC) (e/c) Interesting sidetrack here, I'd recommend a hockey stick (referee might not be watching) rather than a baseball bat (everyone's fallen asleep). How did we wander into supremacy of principles over rules? Who disputed that? To return to the origin of the thread, no, it's ignore - all - rules. Respect the principles, but ignore all rules. It needs to be uncompromising, GTBacchus said it, "let each editor struggle with wrapping their head around that concept. That's a valuable learning process." Everyone has to individually figure out how IAR works within the five pillars. Franamax (talk) 01:50, 12 March 2008 (UTC) Under what possible scenario can we ignore NPOV to improve the encyclopedia? <boggle> Lawrence § t/e 01:36, 12 March 2008 (UTC) If the page WP:NPOV says something that isn't consistent with what you know NPOV really means, then you may ignore what the rule says, in favor of what it should say. -GTBacchus(talk) 01:41, 12 March 2008 (UTC) (ec with GTBacchus) Yes, spider. My "trap" was set for you, Lawrence Cohen!
Both GTB and myself are saying (I think) that you CAN ignore the specific wording of WP:NPOV, and WP:CIVIL, if the wording is unclear, or has recently been "updated" in a way that is unhelpful. I FULLY AGREE with you, though, that if some editor begins to edit in a way which is contrary to the PRINCIPLE of a Neutral Point Of View, then that editor is likely to be causing the project, and other editors trouble. To restore the Principle of NPOV, editors will probably have to revert, or modify the first editor's work. (Hope I aint putting words in any-one's mouth, and it wasn't a trap, anyway, just groping for enlightenment, thanks.) --Newbyguesses - Talk 01:53, 12 March 2008 (UTC) Hrm. You got me, there. Lawrence § t/e 02:19, 12 March 2008 (UTC) And I'd like to state that I'm in broad agreement with Newbyguesses and GTBacchus on this. One can try to wikilawyer WP:NPOV, for instance, to get information they don't like removed from an article (such as valid criticism or controversy). The principles of the rules can be subverted by selective interpretations of the words found on the policy pages. But IAR helps to keep the principles intact by emphasizing that the rules are just words. Ignore all rules is an overstatement, but it's an enlightening one. It has a certain historical weight to it as well: "Ignore all rules" was the first rule on Wikipedia, and hopefully it will remain the first rule on Wikipedia forever. Without it, we will devolve into an insider's club (those who enforce the rules and those who get kicked out for not obeying them. Sad to say, that scenario is already quite common.)--Father Goose (talk) 03:43, 12 March 2008 (UTC) I fully agree with Father Goose: ::::--Ignore all rules is an overstatement, but it's an enlightening one.----- (Whee, I am at an internet cafe - like Formula 1, compared to my "usual" system.) 
--Newbyguesses - Talk 04:01, 12 March 2008 (UTC) I'm not sure how much of an overstatement it is, as the real message is: "stop thinking in terms of rules at all". That's hard. It requires paradigm shifts. -GTBacchus(talk) 07:35, 12 March 2008 (UTC) I like the name the way it is. Really, I don't care if I get a blue shed, or a red shed, as long as it is a shed. (1 == 2)Until 16:56, 12 March 2008 (UTC) The name-change suggestion seems to have died in the water, so let's START a NEW SECTION. This section is getting too long, and has served its purpose. (I like No firm rules as a title, but Ignore all rules has too much going for it.) And, why is no-one commenting in the BIG survey? Is it "futile"? Does no-one understand what to do? Is it wrong? Can no-one be bothered to begin? I am confused (as usual). Answers, if any, in a new section please, or at "The IAR page will look like this in two months", thanks --Newbyguesses - Talk 20:50, 12 March 2008 (UTC) It's wrong. I don't see any use in having us make proclamations as to what we think the IAR page will look like in two months' time. Even if it were a poll as to what we wanted IAR to look like, better just to leave it open-ended than to pre-formulate our answers. We've all pretty much established our respective positions and apparently feel no need to line up for a head count; the most constructive thing we can do right now is just keep talking with each other and trying to get our views in sync.--Father Goose (talk) 21:35, 12 March 2008 (UTC) Well, I was trying to move the debate forward. You misunderstand the title of the section -- The IAR page will look like SOMETHING in two months' time, whether we debate, chat, or all take a hike and go write articles. Notice, FG, that the original section with all the views about UIAR (original) is now in the archives, so that debate is now gone, with nothing to show for it. Archiver settings?
--Newbyguesses - Talk 23:48, 12 March 2008 (UTC) There are no rules we can "never" ignore. There are simply certain rules which it is inconceivable to find an example where you could justifiably do it. Lawrence, your original list wasn't complete: WP:CIVILITY should be a part of that too. But of course even WP:CIVILITY can be ignored too.   Zenwhat (talk) 13:42, 17 March 2008 (UTC) It's a very good idea to ignore "WP:CIVIL", the page. I'm a little worried about anyone who reads that page. The principle of being civil, I wouldn't recommend ignoring ever. -GTBacchus(talk) 14:17, 19 March 2008 (UTC) ## Another angle Before applying any rule on Wikipedia, ask yourself whether following that rule makes sense in that context: will it help the project? If so, then apply the rule. If not, then ignore it, and help the project instead of applying the rule. I'm not promoting this as anything in particular, but it's a sort of heuristic way of expressing the second popular interpretation of IAR, the first being that you don't have to learn the rules before editing. At least, I think those are the two main interpretations. Anyway, if someone thinks a version of this rule-of-thumb might have a place in one of the essays, cool. Almost every bullet point on WIARM follows naturally from it, I think. (Spirit vs. Letter, No Lawyering, Description vs. Prescription, Mindfulness & Thoughtfulness, etc.) -GTBacchus(talk) 21:19, 18 March 2008 (UTC) I think that the last part (If not, then ignore it, and help the project instead.) implies that you were never really intending to help the project in the first place. :) SynergeticMaggot (talk) 21:25, 18 March 2008 (UTC) Huh... I intended it as, you were intending to follow the rule, but then you realized it wouldn't help. I added four words. -GTBacchus(talk) 21:43, 18 March 2008 (UTC) The way it read the first time, gave me the wrong impression. Must have slipped my mind as to the exact context. No worries. 
SynergeticMaggot (talk) 21:52, 18 March 2008 (UTC) I've fiddled around with the header a bit, as it was getting rather messy. Maybe we could integrate the archives into it as well? Oh, and could someone set up the "e" icon? I can't get it to directly edit. microchip08 Find my secret page! Talk to me! I feel lonely! 20:34, 14 March 2008 (UTC) Done. As non-related advice, you should avoid using <font>. But not a big issue. --Izno (talk) 04:48, 22 March 2008 (UTC) ## KISS:12 to 2 We can knock out some more words. • If a rule prevents you from improving or maintaining Wikipedia, ignore it. New version: • Improve Wikipedia. Like its brethren, this rule has some interesting properties. :-) --Kim Bruning (talk) 05:00, 25 March 2008 (UTC) No, that wouldn't be helpful. —David Levy 05:03, 25 March 2008 (UTC) If I was suggesting it be used on this page, I would have simply boldly done so. Right now I'm just playing with the wording to see if there are more useful things that can be said. (and not necessarily here.). I'd be glad to hear your constructive input. --Kim Bruning (talk) 05:09, 25 March 2008 (UTC) Um, "Improve Wikipedia" is 2 words. I can't think of a comment that is as short as that. How about, interesting? --Newbyguesses - Talk 10:34, 25 March 2008 (UTC) What if we were forced to choose, Kim, by some future generation of radical brevitists? What if the rule was only one word? Would you throw your support behind 'Improve', or 'Wikipedia'? [My apologies to our current 'brevitists', as this is, implicitly, an unfair characterization of their position. My remark is intended only for comic effect.] 69.49.44.11 (talk) 21:23, 25 March 2008 (UTC) Behind "Improve", of course. If editors end up improving the entire world by curing cancer, ending all war, and eliminating famine, then this would be an unfortunate and unavoidable side-effect, which we would simply have to live with. --Kim Bruning (talk) 22:23, 25 March 2008 (UTC) You aren't thinking laterally enough.
The ideal version of IAR would contain zero words, and cause everyone looking at it to forget every rule they had ever learned.--Father Goose (talk) 01:10, 26 March 2008 (UTC) Similar to The Game then, is it? --Kim Bruning (talk) 01:45, 26 March 2008 (UTC) urk... was someone over-zealous at AFD again? Perhaps more like this game. The Game is actually at DR right now, and likely to rejoin us soon.--Father Goose (talk) 04:47, 26 March 2008 (UTC) That change would diminish the intended meaning of the policy. (1 == 2)Until 14:18, 26 March 2008 (UTC) <this page intentionally left blank> might have some interesting issues, yes. ;-) --Kim Bruning (talk) 21:25, 27 March 2008 (UTC) ## Anybody here who is neutral, with a lot of spare time? In order to: 1. Consolidate discussion. 2. Clarify what "past consensus" means. 3. Work towards a meaningful future consensus. 4. Establish who is responsible for long-term edit-warring (or if there is any such long-term edit war): Could anybody here who is a neutral party and with a lot of spare time, create a list of all proposals, arguments for such proposals, and names of those who supported such, from the 16 archived pages of discussion? Then, if anybody would like to comment on a particular proposal, we focus narrowly within that thread instead of creating more and more threads on the same issues, or even essentially trivial non-issues.   Zenwhat (talk) 15:47, 27 March 2008 (UTC) I would prefer my words not be summarized. I would say that if you read 3 or 4 sections then you have a pretty good summary of the past year. We are only going in circles here. I also think it is counterproductive to try to determine consensus based off of archives. Consensus can change. If you want to know consensus, ask a question and get your answers. I fully support the idea of narrowly defined threads with the intent of determining consensus though. 
(1 == 2)Until 15:55, 27 March 2008 (UTC) The good thing about summarization, though, is that it might help the discussion progress. If we can all agree on a short paragraph that describes what the debate had been about, and what people's positions had been, then we have something constructive that will help frame further discussions. Debate, after all, clarifies thought, and thought can lead to insight, and insight can lead to revelation, and revelation can lead to consensus. BRDTIRC, to coin a string. Would you be willing to summarize your own position, as you see the essential elements to be, yourself? [A BNF grammar for wikipedia debates would be sort of neat, I think. Just an idle thought.]69.49.44.11 (talk) 17:46, 27 March 2008 (UTC) It is difficult to summarize my position on the 27 odd attempted changes that apparently have no pattern. There really has been no consistent suggestion for this page. I guess my position is that since policy is meant to reflect the wide acceptance of the community, edits to policy should as well. (1 == 2)Until 18:16, 27 March 2008 (UTC) Yeah, I don't think "summarizing threads" are good, because you can't reply to them, and I doubt we can all agree on what prior debates have been about. Let's just have conversations, not metaconversations.--Father Goose (talk) 23:00, 27 March 2008 (UTC) ## "Including this one" Whatever happened to "Including this one"? I say we bring that phrase back. It really adds a lot to the "ignore all rules" principle. szyslak (t) 07:07, 19 March 2008 (UTC) How could ignoring rules to help you improve or maintain Wikipedia prevent you from improving or maintaining Wikipedia? It is a logical contradiction. (1 == 2)Until 14:55, 19 March 2008 (UTC) Yeah, this policy doesn't give instructions, it conditions the necessity of other instructions. What's there to ignore? Adding "including this one" does have a nice, whimsical sound to it, which I kind of like, but I'm not sure it really holds any meaning. 
-GTBacchus(talk) 15:10, 19 March 2008 (UTC) I should've known this discussion would go in this direction. :) I think the "including this one" passage would drive home the fundamental point of IAR: that improving the encyclopedia is the main concern here. Sometimes the rules get in the way of that goal, and sometimes the rules help. When the rules help, we ignore "this one". szyslak (t) 15:16, 19 March 2008 (UTC) But following other rules when they help isn't really ignoring this one, at least not in the sense of disobeying it. This rule (or "rule") doesn't say not to follow rules when they're working. It says to apply the other rules mindfully instead of mindlessly, and there's no instance in which that's a bad idea, is there? -GTBacchus(talk) 15:52, 19 March 2008 (UTC) It's overly whimsical, in my opinion. The appeal is that it points out the inherent self-referential paradox in ∃x ∈ ¬∀x [someone correct my syntax, please!], which cannot be resolved. I feel that highlighting the paradox, however, detracts from the main intent of the rules. Causal reasoning [at least, when I do it] relies often on association, and minimizes the relevance of confusing information. 69.49.44.11 (talk) 14:25, 22 March 2008 (UTC) For the purposes of beating my own drum, I felt I should mention that I myself have Ignored All the Rulez for the first time ever. Critique is welcome and appreciated. 69.49.44.11 (talk) 14:29, 22 March 2008 (UTC) it say that if the rules says you can't improve/maintain wikipedia, ignore itOmgwt..bbq (talk) 01:05, 1 April 2008 (UTC) I see no such paradox. (1 == 2)Until 15:27, 22 March 2008 (UTC) It goes something like this: 1. If 'ignore all rules' is a rule, then we should obey it. 2. The rule tells us to ignore all rules. 'Ignore all rules' is itself a rule. Therefore, we must ignore the rule 'ignore all rules'. 3. But if we ignore the rule 'ignore all rules', then we obey the rule 'ignore all rules'; thus failing to obey the rule 'ignore all rules'. 4. 
Therefore we can neither obey nor disobey the rule 'ignore all rules'. 69.49.44.11 (talk) 17:31, 22 March 2008 (UTC) It ceases to be a paradox when expressed more completely as "Ignore all rules [when appropriate]". It's a lot like "be moderate in everything, including moderation."--Father Goose (talk) 20:36, 22 March 2008 (UTC) (ec- Father Goose has made a much more sensible take on this, but I will leave my comment in)- Ah, yes. Like many paradoxes, there is a degree of semantic ambiguity involved. If a "rule" is defined as "that which must be obeyed" we get a full-on paradox. But, find another dictionary which defines "rule" as "optimal procedure" and we get to argue it all again. On the face of it, "Ignore all rules" has a strong whiff of paradox about it, to my way of thinking. (Consider the Cretan's paradox. [A person from Crete is said to have made the statement "All Cretans are liars"]. How is that to be parsed, if it is true it is false, and if it is false it is true! My take is that the word "liar" used as an absolute, has no meaning. Under a strict interpretation of the word, a person is a liar IFF they have never and will never utter a true statement, which is impossible to prove, especially the future condition. If the statement is rendered less absolute, [Many Cretans often are untruthful], then the paradox is disarmed.) Words, words, words. For instance, have you or I or anyone ever seen an "all"? Define "all"? "All" is an abstraction, an inexactitude masquerading as an exactitude, for how can we define an all without reference to some excluded externality, negating the meaning of "all"? We cannot ignore 'all' rules, we can only ignore this rule, and this rule, and then this one if necessary, we never reach the point of "all" since new rules can be introduced at any time. There is no such thing as an "all" which a finite human mind can grasp, it is just a word that people use and when we use words we don't understand we get confused.
Well, that's my excuse. --Newbyguesses - Talk 20:41, 22 March 2008 (UTC) Or it could be that "ignore all rules" is not a rule. -GTBacchus(talk) 22:18, 22 March 2008 (UTC) Sigh, the policy does not say "The rule tells us to ignore all rules. 'Ignore all rules' is itself a rule. Therefore, we must ignore the rule 'ignore all rules'"; it says to ignore rules when they prevent you from improving or maintaining Wikipedia. As I said before, how could ignoring rules to help you improve or maintain Wikipedia prevent you from improving or maintaining Wikipedia? It is a logical contradiction. No paradox. The only way one could see a paradox is if they mistakenly only read the title and not the content of the policy. (1 == 2)Until 14:37, 23 March 2008 (UTC) Well, yes, that's true. I was just explaining why "Ignore all rules, including this one," was implicitly paradoxical. It's a technical problem with self-referencing negative statements, in general. It doesn't leave the actual 'ignore all rules' rule meaningless or useless at all. I'd just meant to advise that we not give positive attention to it as an attractive feature of the rule. There is, of course, a conflation at work between 'rule', in the pragmatic sense where we are actually trying to improve the wikipedia, and 'proposition within a system of axioms.' 69.49.44.11 (talk) 17:12, 23 March 2008 (UTC) Ahhhhh, I retract my sigh hehe. (1 == 2)Until 17:15, 23 March 2008 (UTC) Is the debate between terse and verbose on hold while we wait for another draft by Father Goose? —69.49.44.11 (talk) 17:38, 23 March 2008 (UTC) It seems to have run out of steam. (1 == 2)Until 19:25, 23 March 2008 (UTC) Don't be deceived; I'm just taking it a step at a time. I've learned not to try to get things done in a rush on Wikipedia. Haste makes waste, or something like that.--Father Goose (talk) 03:16, 24 March 2008 (UTC) I did not mean to say that you ran out of steam.
(1 == 2)Until 14:33, 24 March 2008 (UTC) ## Nutshell "Rules aren't set in stone" is kind of a good nutshell. But the IAR policy doesn't need 2 nutshells [1], and If a rule prevents you from improving or maintaining Wikipedia, ignore it is my preferred choice. The other phrase, "Rules aren't set in stone", is featured on a lot of pol/guide pages anyway. --Newbyguesses - Talk 07:29, 25 March 2008 (UTC) What I find seriously annoying is that the entire policy is written like a nutshell. However, it is not technically a nutshell, given that it is not enclosed as such. David, out of curiosity, why did you say that it "isn't even an accurate summary"? Teh Rote (talk) 23:40, 3 April 2008 (UTC) ### Stop reverting Zenwhat, David, what are you doing? Do you want to get the page protected again? Someone win by stopping first, quickly! -GTBacchus(talk) 00:52, 26 March 2008 (UTC) Well, Zenwhat was putting in a change that was refused by consensus not too long ago, and David is reverting to the accepted version. Just like the events leading to the last 6 page protections. It puts those who seek to reflect consensus in the position of reverting, or having the policy no longer reflect wide acceptance. (1 == 2)Until 14:21, 26 March 2008 (UTC) Yeah, I can see events unfold. I disagree with David's strategy of reverting more than once, because such behavior is more likely to lead to protection than the alternative. The alternative is to post on the talk page about the dispute, and then let someone else revert for you. That makes your edit much cleaner, much stickier, and only slightly slower to appear. -GTBacchus(talk) 18:45, 26 March 2008 (UTC) I also agree with GTB's comments about repeated reverts by the same person not being the best way of dealing with unpopular edits. With the number of people who watch this page, someone else will certainly deal with it. And if no one else deals with it, then the edit is likely non-problematic.
- Chardish (talk) 19:41, 26 March 2008 (UTC) I am, in fact, trying to be more patient. After Zenwhat reverted for the second time, I did sit out and wait for someone else to revert back. That ended up being PhilKnight. Prior to that, I didn't think much of reverting Zenwhat five days after I'd reverted a different editor. —David Levy 21:05, 26 March 2008 (UTC) (just in case anyone was confused, User:PhilKnight was editing as "Addhoc" previously, at the time of the last protection.)--Newbyguesses - Talk 23:26, 26 March 2008 (UTC) David, you're right. When I looked at the history and saw your names in alternation like that, I didn't note that your first revert was actually 5 days previous, and of a different user. I guess I'm a little jumpy about edit warring on this page. Zenwhat, what in the zen were you thinking, making the same edit twice? Since when is that productive? -GTBacchus(talk) 23:09, 26 March 2008 (UTC) Let me ask a dumb question here. Why do we need a nutshell template anyway? To sum up 12 words? Come on now. This isn't a complex policy. SynergeticMaggot (talk) 14:44, 26 March 2008 (UTC) Agreed. -GTBacchus(talk) 18:45, 26 March 2008 (UTC) Also agreed. I also feel that blocking would be a superior alternative to protection. - Chardish (talk) 18:52, 26 March 2008 (UTC) Agreed, if we just keep protecting the page instead of blocking then the system becomes very gamable. (1 == 2)Until 18:55, 26 March 2008 (UTC) I mentioned the fact that blocking edit-warriors would be necessary, back when Ryan Postletehwaite (spelled incorrectly, whatever) suggested mediation. I agree with you all as well. I will stop reverting if David Levy and others agree to the same, of course. Frankly, I don't think that a page of this nature works well with the standard BRD cycle and talkpage, because it isn't clarified which side is responsible for the long-term edit war and the talkpage discussions are poorly framed.
Some people have done good jobs framing the discussions occasionally, but then other times they distort the discussions to support a particular point-of-view. Zenwhat (talk) 03:00, 27 March 2008 (UTC) "I will stop reverting if David and others agree to the same," is precisely the attitude that causes (IRL) wars to never end. The winner of an edit war (slow, fast, whatever) is the one who stops reverting first. Just follow 0RR, and things go better. What's a second identical edit supposed to do? Maybe standard BRD doesn't work here, but surely BRRRRRR is worse, right? -GTBacchus(talk) 04:02, 27 March 2008 (UTC) Well, I would just like to point out that those editors who have reverted obviously feel there are good reasons. No-one has breached Wikipedia:3RR, and it is not obligatory to observe any stricter rule than that, even if it might be less messy if 1RR was in vogue, but maybe not. If there are 10 editors who have edited the page recently, there are six million accounts that have not.--Newbyguesses - Talk 05:02, 27 March 2008 (UTC) You're right, and I may be over-reacting. I think people familiar with this page and its (especially recent) history would err on the side of less reverting. I'll stop nagging now, sorry. -GTBacchus(talk) 05:16, 27 March 2008 (UTC) "I will stop reverting if David and others agree to the same" means one can make an edit that does not have consensus and it cannot be undone. Let's not demonize reverting, it has just as much potential to be productive as another edit. By the same token, a change to the page can be worse than a revert. The action that results in the policy reflecting widespread acceptance is the correct one. Edit warring is bad, but taking actions likely to require reverting is also bad. (1 == 2)Until 05:39, 27 March 2008 (UTC) I don't wish to demonize reverting. I advocate sharing the work more, but as noted above, I jumped the gun in this case, because David and PhilKnight did share the work.
-GTBacchus(talk) 05:42, 27 March 2008 (UTC) It was not my intent to apply that comment to you specifically. It has been a common theme here that I wish to rebuke, and this thread seemed on topic. (1 == 2)Until 15:58, 27 March 2008 (UTC) You're right about war, GTBacchus. However, the equal threat of being blocked for edit-warring would be comparable to Mutual assured destruction, hence the reason I agree that admins should be very liberal about blocking people for edit-warring here. Zenwhat (talk) 15:50, 27 March 2008 (UTC) I think it is entirely possible that the more disruptive parties could find themselves blocked while those who work with consensus do not. It is true that often the blocks go across the board, but sometimes (just sometimes) the admin sees the full context and can make a less sweeping reaction to a problem by removing only the instigator. We will see. (1 == 2)Until 15:54, 27 March 2008 (UTC) ## Removal of that three P essay Have a look at this before reposting to this page. I see no merit in it being included myself. SynergeticMaggot (talk) 00:01, 8 April 2008 (UTC) I agree it is not really directly relevant enough to the policy to include it. (1 == 2)Until 00:03, 8 April 2008 (UTC) ## Ignore all rules. Yes, this includes flaming other people on the internet, as long as both of the people are NOTABLE, and that it is CIVIL. —Preceding unsigned comment added by 98.227.189.232 (talk) Your sentence does not include enough information to make any sense. (1 == 2)Until 00:20, 8 April 2008 (UTC) God damn you're fast. 98.227.189.232 (talk) 00:22, 8 April 2008 (UTC) ?? (Ignore the rule about) flaming other people on the internet, (or Ignore the rule about) Not flaming other people on the internet, as long as both of the people are NOTABLE, and that it is CIVIL. ?? --Newbyguesses (talk) 01:32, 8 April 2008 (UTC) Suggest a redirect to ignore all comments.
SynergeticMaggot (talk) 01:35, 8 April 2008 (UTC) ## WP:SNOW I can see only the vaguest of connections from IAR to WP:SNOW, not sure why it would be added to the See Also section. Not doing any harm, I suppose. Comments? --Newbyguesses (talk) 02:47, 8 April 2008 (UTC) I think it makes sense to add it, because it clarifies an actual application of IAR. Zenwhat (talk) 05:13, 8 April 2008 (UTC) I don't think it makes sense to add it. It's out of place with the rest of the essays in that section. Reviewing it will show that none of those essays even mention SNOW. SynergeticMaggot (talk) 05:42, 8 April 2008 (UTC) I don't think it belongs either. (1 == 2)Until 14:00, 8 April 2008 (UTC) I added it because this article can be kind of confusing, and people misinterpreting it can be a problem. I thought since it shows an actual example of when you would use IAR, it would be useful (or at the very least relevant).--KojiDude (Contributions) 20:49, 8 April 2008 (UTC) We have more than one essay linked to it to relieve that very problem. While WP:SNOW is an example of IAR, I still don't think it belongs on this policy page. Such a short page should not be overwhelmed by meta material. (1 == 2)Until 21:57, 8 April 2008 (UTC) WP:SNOW is a miserable page which is probably the most misused page in the entire project space, as (despite its essay status) it is routinely cited in order to hastily silence minority opinions. To link to it here is to insult this page and lend undue legitimacy to that one. - Chardish (talk) 03:04, 9 April 2008 (UTC) Well (in response to Chardish) I don't see how SNOW could be racist, but after going over the other essays it does seem kind of redundant to link it. They pretty much cover everything. I'll leave it down.--KojiDude (Contributions) 03:20, 9 April 2008 (UTC) By "minority opinions" I meant "opinions held by a minority of people", not "opinions held by minority ethnicities." - Chardish (talk) 03:48, 9 April 2008 (UTC) OH, okay, sorry dude. My bad.
I get what you meant now.--KojiDude (Contributions) 03:51, 9 April 2008 (UTC) Chardish, this is kind of off-topic, but I would nominate WP:AGF as the most misused page in project space. A lot more people know about it, and think it's like a big garden, in which to gather loopholes. -GTBacchus(talk) 06:31, 9 April 2008 (UTC) Heh. Maybe not misused, but most misinterpreted. Unfortunately a lot of people believe "AGF, except when a person is being an obviously disruptive troll/vandal like that person, right there!" - which pretty much negates AGF. In other words, much like WP:SNOW harms WP:IAR, so does WP:SPADE harm WP:AGF. - Chardish (talk) 06:39, 9 April 2008 (UTC) Yeah, I used to be pretty active at SNOW, but it seems quiet lately. Has SNOW abuse been noticeable, recently? -GTBacchus(talk) 06:41, 9 April 2008 (UTC) WP:SNOW is an essay, and it is in no way a "supplement" to WP:IAR. So, no need to refer to it from this page. (Lots, and lots, of pages get mis-interpreted. Let me count the ways.)--Newbyguesses (talk) 08:01, 9 April 2008 (UTC) Consensus has it that the snowball clause is not an essay, and you even got reverted (twice now :-P ) when you tried to say it was. :-) We're not entirely sure what it *is* (possibly a kitten-eating reptile from venus?), but we cannot deny that it is alive and well, and used on wikipedia every day. :-) --Kim Bruning (talk) 12:58, 9 April 2008 (UTC) Because a large number of wikipedians do use it, and get consensus support when they do, strictly speaking it is a regular wikipedia policy or guideline (expanding on WP:BOLD and WP:IAR). However, some people have been opposing accurate classification. In short the snowball clause is a textbook case where people have been nomic-ing/politic-ing, in a deliberate attempt to block the process of documenting consensus. (With no comment on whether that is Good or Bad here.) It's one of those circumstances where the policy itself is okay but it's misapplied more often than not. 
Too often I see deletion discussions "closed per WP:SNOW" within the first couple hours of discussion just because no one has shown up yet who agrees with the nom. I've even seen it happen to RfAs for the same reason. - Chardish (talk) 16:15, 9 April 2008 (UTC) (ec) Everyone misapplies everything. You can WP:IAR and unclose, if you know someone is going to come along. --Kim Bruning (talk) 16:23, 9 April 2008 (UTC) (Ignore all rules) "If you are in a hole, stop digging." — Anon. "If you are going through hell, keep going." — Winston Churchill "You can always count on people to do the right thing - after they've tried everything else." — Anon (Ignore all rules)--Newbyguesses (talk) 01:01, 10 April 2008 (UTC) (outdent, @Chardish) I'd be interested in seeing some examples if you don't mind. SynergeticMaggot (talk) 16:21, 9 April 2008 (UTC) Supplement? While you can try to invent a new "type" of page and apply that label, it might not stick. WP:SNOW is an essay, it has been since its inception despite being temporarily labeled otherwise. Perhaps consensus will change that some day, but not yet. (1 == 2)Until 01:05, 10 April 2008 (UTC) (The other essays pretty much cover everything. It does seem kind of redundant to link to WP:SNOW.) --Newbyguesses (talk) 01:28, 10 April 2008 (UTC) ## Proposed change Proposal here. LaraLove 23:13, 10 April 2008 (UTC) Its inclusion on another essay from the see also section is fine I suppose, if it's a constant concern. But this just seems to be spill over from the cabal deletion and has nothing to do with this policy in my opinion. SynergeticMaggot (talk) 04:24, 11 April 2008 (UTC) I oppose limiting ignore all rules' scope in this manner. Of course admins should ignore rules about admin actions if they prevent them from improving or maintaining Wikipedia. Rules are not set in stone, and that goes for rules about admin actions too. Rules are meant to describe, not prescribe, our best practices.
And the rules cannot foresee all the situations that admins will need to deal with. We need to be creative just like regular users. If you are bothered by a specific set of actions made by an admin under the pretense of IAR then that is not a problem with the policy but a problem between the admin in question and those who dispute those actions. This all seems to me to be about a single recent incident, and not a systemic problem with admins using IAR. (1 == 2)Until 04:50, 11 April 2008 (UTC) ## Workshop page The "workshop page" in the beige box at the top of this page redirects back to this page. Anyone want to fix that? 71.174.111.205 (talk) 16:51, 11 April 2008 (UTC) It works okay for me.--Father Goose (talk) 22:27, 11 April 2008 (UTC) Fixed. (1 == 2)Until 23:49, 11 April 2008 (UTC) You just deleted it instead of fixing it. Am I crazy? It works fine, right?--Father Goose (talk) 06:16, 12 April 2008 (UTC) Well, Father Goose, you just re-added it, unless I am even more confused as usual. Yes, the link now appears to be working fine for me. --Newbyguesses (talk) 06:34, 12 April 2008 (UTC) Yes, I did re-add it, since it had been working.--Father Goose (talk) 02:04, 13 April 2008 (UTC) It redirects to this page. Someone on the talk page asked if the page was being used for anything and if there was any objection to redirecting it. Nobody objected and that person redirected it. I am under the impression that the page was so inactive it was redirected, making the link pretty useless. So I removed it. Father, if you are going to return the link, then at least have it go somewhere. (1 == 2)Until 13:38, 12 April 2008 (UTC) Ah, I see the point of confusion. It is only the workshop talk page that redirects, not the workshop page itself. My mistake. (1 == 2)Until 13:39, 12 April 2008 (UTC) Aha. I'm inclined to say that the workshop talk page should be redirected to here, but of course not the workshop page itself.
If having the talk page redirected continues to cause confusion, that redirect could always be reverted. Even if the workshop page is inactive at times, it's a useful page to keep around in general, given how contentious tinkering with the WP:IAR page itself tends to be.--Father Goose (talk) 02:04, 13 April 2008 (UTC) Some recent edits in the Wikipedia:Ignore all rules/Workshop were for the purposes of comparing UIAR and WIARM. Thank you --Newbyguesses (talk) 03:41, 13 April 2008 (UTC) ## How to clear a room How about a poem for WP:IAR? With apologies to Newyorkbrad, only the following extract from the poem [2] submitted to Wikipedia talk:Pranking can be used here, I think.

There's way too much red tape on wiki
Sometimes that tape is rather sticky
You wouldn't be wrong, not by a particle,
To say we each should write an article
In drafting one more policy page
Which (we lose sight of this) is very
Clearly something ancillary
Can't we all straddle this wide fence
With just a bit of common sense?
...

I seriously think we could use that on the IAR page, it is very informative, though quite light-hearted. I know everyone hates poems, but I don't know why. --Newbyguesses (talk) 03:48, 16 April 2008 (UTC) The poem isn't policy-suitable, but I think that it's a fine addition to Wikipedia:Understanding IAR‎. —David Levy 08:58, 16 April 2008 (UTC) [3] is fine by me. Perhaps I will add NYBrad's poem (that one stanza of it) to Wikipedia:Ignore all rules/Versions, Thanks! --NewbyG 22:01, 16 April 2008 (UTC) ## Just do the freakin' merge already • refer to this edit of the IAR page: [4] (cur) (last) 21:03, 17 April 2008 Personal use (Talk | contribs) (2,663 bytes) ({mergefrom:What ignore all rules means}) (undo) It's long overdue and there's no reason not to, other than tradition.
It would be like if WP:V had existed for a long period as a one-sentence policy: "If something is unverifiable, then it can be removed" and then we had an essay, Wikipedia:What verifiability means. I'm sure there would be some sentiment attached to keeping it short and sweet, but we might as well move everything that's going to be treated as a policy/guideline into the actual page. Personal use (talk) 21:03, 17 April 2008 (UTC) The entire content of IAR (which, may I remind you, is only one sentence) is already right there at the top of the fuckin' page. What's the point of merging it?--KojiDude (Contributions) 21:24, 17 April 2008 (UTC) Wikipedia:What "Ignore all rules" means is an essay, not a policy or guideline - if the information within it is to be included in a policy page it should be ensured that the content reflects the consensus of the community. My gut feeling is that this should be done before any merge discussion - which is really a question of style rather than provenance - takes place. Guest9999 (talk) 21:40, 17 April 2008 (UTC) If you honestly believe that "there's no reason not to, other than tradition," I suggest that you read the archives. —David Levy 02:59, 18 April 2008 (UTC) It ain't broke. Why try to fix it? -GTBacchus(talk) 21:42, 17 April 2008 (UTC) It's not a gut feeling. Consensus will have to take place before a merge. And my gut tells me you won't have it. SynergeticMaggot (talk) 21:52, 17 April 2008 (UTC) Actually, WP:WIARM has been discussed at length and many times on this page (see Archives), and has, in my opinion, been proved to have such support that it could easily qualify as a policy or guideline. I am in favour, as it were, of "merging" WIARM onto the IAR page, I think. We don't lose anything (the 12 words are still there) and we gain an explanation which people can read or ignore as they wish, but all on one page.
Then again, I think WP:UIAR (which also has the 12 words) would be an even better proposition to replace the current 12-word version at IAR! I think if either of these "moves" were to be done it would be a net gain to the project. There have been too many calls for amplification of the IAR policy for this to continue to be ignored, if the alternative(s) are viable, which they are. It should be possible to do it (merge or move, whatever) as a "cut-and-paste" merge, and afterwards preserve any revision histories and talk-pages. So: Agree that we don't need a "merge" and 'suggest we consider a merge'. Either WIARM, or UIAR should go up on the IAR page, is my suggestion. I would try WIARM (it has been the favorite candidate for this), and then, I think it likely that UIAR would be the one to gain support over WIARM, after, presumably, much discussion. Who is up for it, or am I out on a limb here, without a paddle? --NewbyG (talk) 01:00, 18 April 2008 (UTC) I'm really not seeing that there's a problem that would be solved by merging one of those other pages here. Can it be made clear, just what we're trying to fix? Otherwise I don't see the point in adding more words. There's little excuse for not knowing what IAR means at this point, and the fact that the interpretive text is not on a "policy" page is a great illustration of the fact that we should ignore those stupid tags already. That's the beginning of understanding IAR; why deprive people of that? -GTBacchus(talk) 02:23, 18 April 2008 (UTC) Can we add the standard explanation of why we have such a rule? Gosh, that would be nice. It really would.--Father Goose (talk) 21:23, 29 February 2008 (UTC). -- Appears in the archives (Wikipedia talk:Ignore all rules/Archive 15#Wikipedia:Understanding IAR). This or a similar request has been echoed by dozens of editors. I do think we can improve the understanding of IAR by moving UIAR to IAR, or by moving WIARM to IAR. Dozens of editors, (check the older archives, please).
I am not saying "there is a problem", I am saying "we have an opportunity for improvement". Let's take it. --NewbyG (talk) 02:48, 18 April 2008 (UTC) Others (myself included) have argued that such a change would not be an improvement (and would actually make the page worse). Specific reasons have been cited, and they're in the archives too. —David Levy 02:59, 18 April 2008 (UTC) ### In the archives, also (ec with David Levy) Yeah, I think I understand that many people have asked for the page to expand, and I think I'm replying that adding words to IAR would not improve its understanding. It's already clear, to those who seek clarity. If someone is thinking "what does this mean?", there's a link right there to "What IAR means," and another to "Understanding IAR". Brilliant. Some people (oddly enough, many of whom seem to already understand IAR) really want to add explanation to the page - I'll grant you that - but that doesn't necessarily make it a good idea. Would moving more text to the page really make IAR more understood, or would it make it seem more like another rule-set? The whole point is to get people to stop thinking in terms of explicit rule-sets. Wordy policy pages constitute red tape. IAR is supposed to be the anti-red-tape. Let's not red-tape it up. My very serious question is this: if you want to add words to IAR, why is it important to you to do this? Simply saying that you feel it would improve the page is not a full answer, because it doesn't explain why you choose to focus energy on improving this page, when there are millions of other pages on the wiki. What makes "improving" IAR a priority? What makes it worth arguing for? Is the page actually misunderstood at large, or is it just a case of people who understand it, but fear that others won't? -GTBacchus(talk) 03:03, 18 April 2008 (UTC) I think I understand IAR reasonably well (not a genius, just a thinker). It is not "important" to me to add text to the page. 
But, that text (WIARM, UIAR) has been looked over by many, who seem to find it useful. And yes, it is because many, many editors have come here to say the page is too cryptic that I think this change could be worth making. (Some, obviously, do not think so.) I think it is "win/win" to add the existing explanation to the page. UIAR does not add rules to IAR, which would be the wrong approach. Any page can be improved, I improve those pages I choose to work on. (I do, actually work in mainspace, you know, lol.) --NewbyG (talk) 03:21, 18 April 2008 (UTC) I never doubted that you work in mainspace, dude. I'm just saying that several of us have been making a case that more text would make the page worse, not better. Why are we clearly wrong? Why should useful text be moved from where it's already useful (it is, right?), to where some people are arguing that it would be harmful? How is what we've got now not a win/win situation? Is it possible that the claim that this page is "too cryptic" is based on misunderstanding, and that we want people who think it's "too cryptic" to struggle with that, and get to a point where they don't think so? Is it possible that that struggle is precisely the best effect of this page? -GTBacchus(talk) 08:01, 18 April 2008 (UTC) GTB has said it so well, I can't add anything. Dudes, it's the twelve words, grasp them, grok them. UIAR could probably become a guideline, but really, there is no way to add to the simple imprecation to ignore all rules. The struggle is the message. I'll stop now :) Franamax (talk) 08:48, 18 April 2008 (UTC) I believe that the purpose of Wikipedia policy is to educate users about consensus, not make users "struggle" with difficult problems of interpreting vague principles. - Chardish (talk) added 19:22, 18 April 2008 (UTC) I'd like to point out I'm talking about current consensus. Not ghosts from the past. Opinions, like everything else on wikipedia, are subject to change. 
SynergeticMaggot (talk) 12:09, 18 April 2008 (UTC) WP:CCC. Well said, SynergeticMaggot. Consensus in this thread so far seems pretty clear. BTW, did everyone else hate the bolding in my previous post? Wish I hadn't, sorry. --NewbyG (talk) 14:04, 18 April 2008 (UTC) "Consensus in this thread so far seems pretty clear?" Really? I see a fair amount of disagreement in this thread. -GTBacchus(talk) 15:54, 18 April 2008 (UTC) ### Oh, really I strongly support merging WP:UIAR and weakly to moderately support merging WP:WIARM. - Chardish (talk) 19:20, 18 April 2008 (UTC) Would it be a good idea to advertise at WP:VP/P and maybe WP:AN, and try to gauge the level of support for a merge? -GTBacchus(talk) 19:44, 18 April 2008 (UTC) I've been pondering just such a broad-participation discussion for a while. The way I'd choose to phrase it is, Should Wikipedia:Ignore all rules have explanatory text on the rule page: [5] or on a separate page, linked to as one or more essays: [6]? Bear in mind that the explanatory text can always be edited if it is felt to be wrong. How does that sound? We could set up a discussion subpage Wikipedia talk:Ignore all rules/Merge discussion and link to it via VPP, RFC, CENT, AN, etc.--Father Goose (talk) 23:09, 18 April 2008 (UTC) That sounds pretty good to me. Do you want to set it up? -GTBacchus(talk) 00:36, 19 April 2008 (UTC) Okay, I've set up a page at Wikipedia talk:Ignore all rules/Merge discussion, but I'll give it a day or two to see if anyone thinks I've worded it wrong or whatever before "launching" it.--Father Goose (talk) 01:16, 19 April 2008 (UTC) I'm still not seeing a reason to merge these to the policy. The essays serve for the less adept in self explanatory sentences. I've seen possible support for a merge, but nothing close to reasons why it should take place. SynergeticMaggot (talk) 02:00, 19 April 2008 (UTC) That's what the merge discussion will hopefully establish. 
I personally think including explanation with the policy is a good idea, because I didn't understand its implications, or how to put it into use, for a long, long time. That's not because I'm "less adept"; IAR has many deep meanings which are anything but self-evident from the twelve words -- yet they are explainable. To me, this isn't really a merge discussion, but a "should we try to help users understand IAR as much as possible" discussion. I happen to think that's a no-brainer: yes, of course we should.--Father Goose (talk) 04:10, 19 April 2008 (UTC) If you don't understand IAR, then just go read UIAR. You don't need to merge it. It seems like it's just a merge for the sake of convenience from the arguments I've read.--KojiDude (Contributions) 04:14, 19 April 2008 (UTC) I agree fully with trying to help users to understand IAR; however, you can only ever try to help them, you can't really definitively explain. The problem with merging either of UIAR or WIARM to IAR is that the text of those essays will now gain the status of policy, i.e. it will become citable in disputes and it will become prescriptive rather than descriptive. Is that really the desired outcome? Perhaps the better course would be to propose elevating UIAR to guideline status. Franamax (talk) 06:25, 19 April 2008 (UTC) I agree. I think it should be taken up over there first. Merging an essay onto policy will require much more consensus than this has, as I've stated. SynergeticMaggot (talk) 07:04, 19 April 2008 (UTC) UIAR as a guideline... okay, let's try that.--Father Goose (talk) 07:19, 19 April 2008 (UTC) I really don't know what's wrong with leaving it as whatever it is now, and explaining to people that the truth is found in it, whatever its label. That's a good lesson in ignoring labels, which can never be taught by labeling all the good lessons as such. If everyone disagrees, I'll shut up, but seriously...
what's the fascination with adding status to the essays that explain that status means nothing? -GTBacchus(talk) 10:01, 19 April 2008 (UTC) Just a thought then. How about no tags, no tags at all in wikipediaspace, from Pillars to Civil to Blocking ? ( A purely philosophical speculation.) --NewbyG (talk) 13:28, 19 April 2008 (UTC) I'm for it, but it won't fly. Not this season, anyway. -GTBacchus(talk) 17:53, 19 April 2008 (UTC) ### a page at Wikipedia talk:Ignore all rules/Merge discussion No idea. I just agreed so this conversation could be moved elsewhere. A merge to here is highly unlikely and anyone interested can go edit over on the essays. SynergeticMaggot 10:10, 19 April 2008 (UTC) I don't care about the label either, but it may have value in demonstrating that the advice in UIAR is well backed by consensus. SynergeticMaggot's motives here, on the other hand, seem entirely cynical and dismissive. On second thought, I doubt it will be made into a guideline for the very reason that the text that is in it should be on the IAR page instead. UIAR doesn't make sense as a guideline; it simply is IAR, explained plainly.--Father Goose (talk) 11:05, 19 April 2008 (UTC) (outdent)My motives are simple and this serves as an example of such. I'm merely around to help and I lean toward what I perceive as consensus (no matter how I'm perceived in the process). SynergeticMaggot 11:23, 19 April 2008 (UTC) Do you know what's "cynical and dismissive," Father Goose? Referring to this as a "'should we try to help users understand IAR as much as possible' discussion," which implies that a merger irrefutably would accomplish this and that anyone opposed to a merger seeks to prevent it. —David Levy 11:57, 19 April 2008 (UTC) Seeing as I wrote WP:WIARM, and considering how much energy I put in on this very talk page explaining IAR, I think it would be difficult to claim that I'm against explaining IAR to people clearly. 
Heck, anyone who edits this talk page at all is clearly in favor of understanding and explanation. Nevertheless, many of us don't support a merger. As for cynical and dismissive, we all go there on occasion, I suspect. It's so difficult to accurately gauge tone in a text-based medium that we might as well assume the best of each other and just try to move forward. -GTBacchus(talk) 17:48, 19 April 2008 (UTC)

Father Goose, if you wish to demonstrate that the advice in UIAR is backed by consensus... link to it from this page (already done), cite it in discussion, apply it in context, and explain to people that it makes sense, despite the lack of "official tag", and that the lack of official status is part of the point. That's more work than just hanging a tag on the page, but most worthwhile things are difficult. -GTBacchus(talk) 17:53, 19 April 2008 (UTC)

David, please don't construe my opinion about IAR as an attack against those who disagree with my opinion. I do feel that placing explanatory text on the IAR page (as long as it explains things correctly) will help users better understand and make use of IAR. --Father Goose (talk) 09:06, 20 April 2008 (UTC)

You treated this opinion as factual (and stated that the discussion concerned whether we "should try to help users understand IAR as much as possible"). In actuality, this is a "Would a merger help users understand IAR as much as possible?" discussion. —David Levy 10:43, 20 April 2008 (UTC)

I still don't think this merge should happen, just like the last 3 times it was suggested and there was not consensus to do it. Taking a core policy and adding that much content is going to need more than a dozen people to form consensus for. I would suggest making a post at the village pump and seeing if there is wide acceptance of this idea before attempting a merge. (1 == 2)Until 01:36, 21 April 2008 (UTC)

## Recent revert

I've reverted an inappropriate attempt to merge content onto this page.
It appears someone cannot determine consensus. As it stands, there is no consensus for a merge. So let's stop being so jumpy, k? SynergeticMaggot (talk) 07:31, 20 April 2008 (UTC)

Plus the last thread has gotten rather... long. SynergeticMaggot (talk) 07:32, 20 April 2008 (UTC)

Oh, don't say inappropriate, say contested or something, please. (lol) Inferior, sub-optimal, unfortunate, that'd be fine also -- actually inappropriate is as appropriate as any, I guess, or is it? (Sorry I even spoke.) [7] 06:45, 20 April 2008 Newbyguesses (Talk | contribs) (6,958 bytes) (Understanding IAR -- IAR can be explained; IAR does not need to be a struggle; the IAR page can be edited; as discussion page indicates, there is impetus toward this approach) -- I thought it was a good edit. WP:CCC. My impression is of support for change as this edit would have been, and a roughly similar support perhaps for no change. Difficult to determine consensus, other than by discussion and editing. --NewbyG (talk) 08:00, 20 April 2008 (UTC)

I say it's almost a tie, if not a tie in fact. A handful oppose the merge and I think 2, maybe 3 wish it to occur. Which would mean there's no consensus at all. The split decision would actually indicate no change, by default. SynergeticMaggot (talk) 08:11, 20 April 2008 (UTC)

Looking back over the thread, I only see two people who are actively suggesting this merge (I say actively because Personal use only opened the thread, and hasn't said a word since): you, NewbyG, and Chardish. And opposing (correct me if I'm wrong :)) the merge, we find KojiDude, David Levy, GTBacchus, Franamax, and of course, myself. SynergeticMaggot (talk) 08:22, 20 April 2008 (UTC)

Fine, the operative word being, I think, actively. There are sleeping observers, and others ready to edit in case further editing occurs. I made an edit, and so am unlikely to edit again, especially since the list of editors in y'r previous post is substantial.
Thanks, --NewbyG (talk) 08:46, 20 April 2008 (UTC)

No problem at all. When the sleepers awake, we can all calmly chat about it again if need be. :) SynergeticMaggot (talk) 08:48, 20 April 2008 (UTC)

## Is UIAR wrong?

If anyone thinks the explanation of IAR presently located at WP:UIAR is incorrect, I'd like to hear why. --Father Goose (talk) 09:06, 20 April 2008 (UTC)

Who suggested it was incorrect? SynergeticMaggot (talk) 09:10, 20 April 2008 (UTC)

As yet, no one. 1==2 made some criticisms of it early on, and some changes were made in response to that, and there was some additional tweaking of its language, but it's been stable for a while. I'm just trying to find out if anybody thinks any part of it is wrong as it currently stands. If it is, we'll need to make further changes. --Father Goose (talk) 09:17, 20 April 2008 (UTC)

Like any other essay/article, it will grow over time. If someone disagrees with something in particular, then my guess is they will bring it up over there. I'll take a look at it after I wake up. SynergeticMaggot (talk) 09:40, 20 April 2008 (UTC)

My objection would be that UIAR currently splits the onus too evenly between breaker and enforcer, whereas I lean toward the rule-breaker having more responsibility to prove the case. I think there are serious implications here; one does not wish to confer too much authority upon those who choose to "break the rule" and defend their action with a simple because I can -- there should always be the argument of because I had to do it to improve the wiki, i.e. demonstrate the net positive outcome. Also there is a poem fragment on the page at this moment which I think was a superb response to incidents of a certain day, but has a certain whimsicality not appropriate going forward. This is indeed serious business, not best addressed through rhyme.
More generally, there seems to be a tension here between those who wish to introduce explication to the IAR page, and those who wish to maintain the bald statement of policy and keep the expansions as subsidiary essays (or guidelines), so as not to dilute the primacy of the simple message. It's probably not difficult to see that I incline toward the latter. Franamax (talk) 10:31, 20 April 2008 (UTC)

Re: onus between breaker and enforcer: I'll think about that for a bit. I kind of agree with you, although the converse is no less important: we don't wish to confer too much authority on those choosing to enforce a rule either -- especially not on a page dedicated to the concept of "ignoring all rules". I think maybe what we could say about that is that under most circumstances, consensus is likely to favor enforcing a rule -- provided the rule correctly describes a consensus position. --Father Goose (talk) 11:21, 20 April 2008 (UTC)

My general feeling on this is that you should always assume that the rule does in fact describe a consensus position; otherwise you should be easily able to identify the accruing lack of consensus somewhere, on a talk page, on a noticeboard; or else you should be able to provide a decent rationale as to why, in the case of your particular action, the existing rules were not sufficient and you felt a particular necessity. Certainly there should never be a case where the rule-enforcers can prevail with the just because argument, but consensus can only change slowly, by a growing burden of proof that the status quo is not sufficient. Each individual act of rule-breaking must needs stand on its own merits; only with the accumulation of justified breakages can the rule itself come into reasonable question. Put another way -- do whatever you want, but have a really good reason for doing it. Regarding the more general issue, it's probably on everyone's watchlist already, but I find this interesting.
Without commenting on the merits of the thread itself, I'm struck by the explicit reference to "a terse version -- in the style of WP:IAR" - so by contemplating expansion of this page, we're possibly tinkering with something that sets an example for ways to think about the wiki itself. Just a late-night thought. Franamax (talk) 11:51, 20 April 2008 (UTC)

I'd say my thinking on that is that instructions should be kept to a minimum but good advice should be offered generously. --Father Goose (talk) 00:32, 21 April 2008 (UTC)

The question is not "is it right or wrong", it is "does this have wide acceptance to be policy". I would say it does not have such acceptance. Attempts to make the content of that essay a policy or guideline in the past have failed due to the community rejecting the idea, as have previous proposals to merge the items. (1 == 2)Until 04:03, 21 April 2008 (UTC)

All right, but that's not the question I'm asking here, which is to find out if anyone thinks the explanation given in UIAR is wrong. Franamax gave a nice bit of feedback so far... do you have any thoughts to offer on it? Your earlier criticisms of UIAR were helpful. Separately, no serious attempt to "promote" or merge UIAR with IAR has been made to date, unless I'm mistaken. --Father Goose (talk) 10:26, 21 April 2008 (UTC)

(WP:WIARM) As I recall (this could be wrong), there were some discussions of WP:WIARM on this page, often over the last six months at least, but no serious attempt to promote or merge WIARM with IAR (by editing on the project page) had been made to date, either. --NewbyG (talk) 12:37, 21 April 2008 (UTC)

There is this: Wikipedia talk:UIAR#Merge. (1 == 2)Until 21:50, 22 April 2008 (UTC)

Oh, yeah, Merge Y/N Hmm.-per--Newbyguesses - Talk 13:52, 8 March 2008 (UTC) Ya got me.
--NewbyG (talk) 03:21, 23 April 2008 (UTC)

Wikipedia talk:UIAR#Merge was a discussion of whether UIAR and WIARM should be merged, not UIAR and IAR. --Father Goose (talk) 09:51, 23 April 2008 (UTC)

### Wikipedia:Five pillars and WP:IAR

I think UIAR incorporates a successful approach: A) the nutshell, B) the history behind IAR, C) explanation of IAR.

|| Wikipedia does not have firm rules besides the five general principles presented here. Be bold in editing, moving, and modifying articles. Although it should be aimed for, perfection is not required. Do not worry about messing up. All prior versions of articles are kept, so there is no way that you can accidentally damage Wikipedia or irretrievably destroy content. Remember, whatever you write here will be preserved for posterity.

If some aspect of UIAR could be tightened up, it would be done by taking an even more careful reading of the no firm rules pillar from Wikipedia:Five pillars, and distilling that text and principle. --NewbyG (talk) 12:51, 21 April 2008 (UTC)

That's crazy talk. The pillar is derived from this page. :-P --Kim Bruning (talk) 15:58, 21 April 2008 (UTC)

Crazy, Kim? New editors are encouraged to read Wikipedia:Five pillars as the best summary of Wikipedia's principles. Not IAR, or BRD. That's how new users find out about building this encyclopedia. What's crazy about that? --NewbyG (talk) 22:17, 21 April 2008 (UTC)

It's circular, you see. IAR came first, and only then 5P. If you then use 5P to figure out what to write about IAR... oh dear... :-P --Kim Bruning (talk) 16:24, 22 April 2008 (UTC)

It is not circular, because there is new human input. We don't need to follow cause and effect; we can use our discretion and go in any direction consensus takes us. (1 == 2)Until 16:48, 22 April 2008 (UTC)

So you'd just copy pillar 5 here? We could do that... --Kim Bruning (talk) 19:57, 22 April 2008 (UTC)

Not sure what statement you are reading, I certainly never said that.
I am pretty sure I said I think we should follow consensus. (1 == 2)Until 21:33, 22 April 2008 (UTC)

So what are you saying? "Follow consensus" is an empty phrase. State your opinion! :-) --Kim Bruning (talk) 21:45, 22 April 2008 (UTC) P.S., could you please indent properly? Else I have a heck of a time figuring out who you're talking to! ^^;;

Kim, can you think of some way to explicate that "crazy talk" on the IAR page, possibly by rewording the nutshell along the lines of "predates the five pillars"? Franamax (talk) 21:06, 22 April 2008 (UTC)

Everything the five pillars link to predates them (well, except maybe verifiability and reliable sources... if those are linked at all?). They're a summary, after all. :-) --Kim Bruning (talk) 21:45, 22 April 2008 (UTC)

Well, pace any other grizzled veterans here, but perhaps you are uniquely placed to make that clear on these policy pages themselves, in terse form. If newbies are encouraged to review 5P as a first step, surely somewhere they should be informed that 5P is the current summary of an historical evolution, and somewhere could easily learn how IAR is Genesis and 5P is Acts (or alt. biblical ref :). I'll now go over to 5P after I've spoken here, but I would still encourage you to think about the nutshell wording. Franamax (talk) 22:00, 22 April 2008 (UTC)

I think UIAR conveys the message of "no firm rules" quite well, though it doesn't use those specific words. But then again, maybe it could convey it even better. --Father Goose (talk) 04:58, 22 April 2008 (UTC)

Just putting my 2 cents in; I think UIAR is pretty great. I think it's pretty great that it's where it is, and not here. I think it's an excellent lesson for editors that the best and most useful ideas are often located in essays, on random talk pages, etc. That can help people learn not to lean too much on official policies, but to keep their eyes and ears open, and their judgment actively engaged.
-GTBacchus(talk) 21:21, 21 April 2008 (UTC)

### Which discussion continues, then

I am confused as to why this conversation is not taking place on WT:UIAR. (1 == 2)Until 21:39, 22 April 2008 (UTC)

Any discussion at WT:UIAR ought to be about possible improvements to WP:UIAR. Any discussion about improvements to WP:IAR, such as replacing/removing the links in the See also, or indeed concerning replacing or changing the text on the project page, is entirely appropriate here at WT:IAR. Any discussion of which came first, the chicken or the egg, or whether we are allowed to update WP:IAR with words currently in use at WP:5 (which has not been advocated), or whether we are tied down by history or precedence and which was written first, by whichever guru of the internet, all hail, is, in my opinion, misguided, wrongly thought, and off-topic, and therefore of little or no use here. --NewbyG (talk) 22:40, 22 April 2008 (UTC)

For instance: It's circular, you see. IAR came first, and only then 5P. If you then use 5P to figure out what to write about IAR... oh dear... :-P -- This edit summary? -- Is that the equivalent of saying we can't make use of a text-book written in 2005 to write and think new thoughts about Aristotle? (For those who didn't know, Aristotle has been dead for over two thousand years, sorry to be bringing the bad news.) --NewbyG (talk) 22:56, 22 April 2008 (UTC)

This discussion should be taking place on WT:UIAR. IMHO --Hu12 (talk) 05:36, 23 April 2008 (UTC)

Ya, I thought this was a proposal for a merger or change in state for UIAR, so I asked about it and was told "...that's not the question I'm asking here, which is to find out if anyone thinks the explanation given in UIAR is wrong". If that really is the question at hand then WT:UIAR is the place, not here. (1 == 2)Until 05:49, 23 April 2008 (UTC)

This particular thread is indeed to find out if anyone thinks the explanation given there is wrong.
But I ask the question here because I seek to have the explanation placed on WP:IAR itself. If you feel the explanation is wrong in any way, then that is an understandable reason to object to having it put on WP:IAR itself. From your comments earlier, I gather that you believe people will misinterpret or misrepresent parts of the explanation if it is placed on WP:IAR. Are there any specific parts of it that give you pause?

And whether or not the explanation ends up directly in the policy, having an explanation of the policy which describes its meaning and use accurately (as determined by consensus) is, I believe, uncontroversially desirable. So I'm asking everyone here who has a good grasp on the policy to help make the explanation of it consistent with a consensus view of IAR, by offering feedback or edits. --Father Goose (talk) 09:39, 23 April 2008 (UTC)

Okay... back full circle. If you think there should be an explanation on IAR itself then once again I say the question is not "is it right or wrong", it is "does this have wide acceptance to be policy". Such an explanation in the policy is not uncontroversially desirable. I think the policy is fine as it is, keeping itself simple and leaving interpretation to essays. (1 == 2)Until 13:37, 23 April 2008 (UTC)

We shall be answering that question later. For now, I am asking this question, which of course you are free to decline to answer. As for interpretation, I'd want to leave that to the community, not to "essays", per se, and I'd want us to provide accurate documentation somewhere of just what the community position is. UIAR is an attempt to provide that documentation. Did I get it wrong? --Father Goose (talk) 18:41, 23 April 2008 (UTC)

Well, I will just sit this one out unless it becomes more clear what is being proposed.
I say this belongs on WT:UIAR and I am told it relates to an addition to this page; I say that we should be considering it in the light of adding to a policy and I am told it is just about what people think of WP:UIAR. Perhaps things will become more clear in time. (1 == 2)Until 18:46, 23 April 2008 (UTC)

## Pours tea with a new thread

Let's not get grumpy :), there really is a lot of good discussion here; it seems to go back and forth because it is a crucial underpinning of the project and strongly held views abound.

• I think FG brought up the whats-wrong-with-UIAR thread here in relation to the desire in some quarters to improve the content of IAR, so the discussion is relevant as far as improving IAR goes. Specific changes to UIAR should be addressed there.
• I'd like personally to see how Kim could suggest changes to better clarify the relation of IAR and 5P, for the benefit especially of newcomers.
• Side note, can we slow down the throttle on archiving right now - just in case someone new comes along? (Hello newcomer, run away screaming but if you insist on staying, here is some history to wade through :)

So let's slow down and not get pissed off, it's an important subject worthy of patience. Franamax (talk) 23:36, 22 April 2008 (UTC)

## Ignoring the rules v. Ignoring a rule

I think applying this policy falls under two categories:

• A newbie doesn't understand Wikipedia policy. Consequently, they just do what they see as best.
• A user thinks that their actions serve to better Wikipedia, and are better than following policy.

The first one should obviously be allowed. The second raises interesting issues. I suggest that, in order to ignore a rule, you must first understand it (not including the former case, in which, paradoxically, if you don't understand a rule, you may ignore it). The second circumstance is defined by its intent: a user, knowing full well what the position of policy is on a matter, deliberately and wilfully flies in its face.
This is the sort of thing that should only ever, ever be done with a good reason, and the user ought to understand the policy that they are ignoring. This ties in closely with "The spirit of the law trumps the letter of the law." If the user understands the rationale for the rule, sees that the rationale is nonsensical in a particular case not foreseen by the rule, and therefore disregards the rule, no harm is done, and we all benefit. However, if the user simply doesn't agree with the policy, then the user can't ignore it — this would be against the consensus.

So, to summarise, in cases where a user deliberately ignores a particular rule (as opposed to ignoring "the rules" as a whole), the user ought to understand the rule, and have a clear argument establishing that the circumstances are unusual enough to not have been foreseen by the consensus used to form the rule, and that the user's proposed action follows the spirit of the law, if not the letter, and improves Wikipedia as a whole. Thoughts? — Werdna talk 15:08, 23 April 2008 (UTC)

Rules for "Ignore All Rules"? Pfffffffft. MessedRocker (talk) 15:52, 23 April 2008 (UTC)

It's already one of the bullet points at WP:WIARM: "Ignore all rules" does not mean that every action is justifiable. It is neither a trump card nor a carte blanche. A rule-ignorer must justify how their actions improve the encyclopedia if challenged. Actually, everyone should be able to do that at all times. In cases of conflict, what counts as an improvement is decided by consensus. It's my favorite hobby horse. :-) In fact, this rule doesn't just apply to ignoring a rule; it applies to following a rule too (following a rule while you know that it harms Wikipedia is not a good idea), and (conceivably) to anything else in between. :-P

Since we're talking hobby horses... I'll trot mine out a bit further; there are actually 4 questions you need to answer:

• You
  • Why did you do it?
  • What would convince you to change your mind and revert yourself?
• Community (and thus consensus)
  • Why do you think the community will support your action? (We're assuming that you won't go against consensus and common sense all at once. Even if common sense didn't have consensus before, it might have consensus once you explain it. :-) )
  • What would convince the community to change its mind and revert you?

You don't need to answer all these questions publicly for each action you take, but you should have answered them all for yourself before you hit submit. And... due to the way Wikipedia works (there's no explicit policy), effectively anyone from the community can demand an answer to any of the above questions at any time, and if you don't successfully answer, the situation might escalate. Reason: answering allows you to meatball:LimitScope and can be used as a first step towards building consensus; not answering forces the other party to meatball:ExpandScope to try and get their answers elsewhere in an attempt to force you to at least come to some settlement.

I agree with Werdna here. The second case especially seems to rhyme well with my theme that WP:UIAR should indicate that the onus is on the rule-breaker to reasonably demonstrate the necessity of their action as a means toward preserving or improving the encyclopedia. I also still think none of that explication should be on the IAR page itself. Franamax (talk) 08:06, 24 April 2008 (UTC)

It says "If a rule prevents you from improving or maintaining Wikipedia, ignore it." How does that not indicate the onus on the rule breaker to be maintaining or improving Wikipedia? (1 == 2)Until 13:24, 24 April 2008 (UTC)

How could anyone argue that the onus isn't on each person, to account for their own actions? What suggests that it would somehow be elsewhere? -GTBacchus(talk) 14:14, 24 April 2008 (UTC)

The two are the same. Ignore All Rules is based on wiki process as self-corrective and is a corollary of Being Bold.
You can attempt to do whatever, but you may get reverted or overturned or worse. This only becomes a problem when people think IAR is a flash demonstration of their wikigod skills, and reversion of it therefore an outrageous attack on their judgment or an attempt to destroooooy them. 86.44.17.45 (talk) 12:02, 29 April 2008 (UTC)

Wow, it's interesting to come back a few months after I leave Wikipedia and poke around the old discussion haunts. And good to know that even the reverters to the simplest IAR are actually making it clear that almost all of them agree with the interpretations I do. During my era of discussion, it was not so clear. So score one for change. --72.1.156.154 (talk) 11:47, 3 May 2008 (UTC)

## A rule "established by the community"

I tossed it in. [8] What do you think?   Zenwhat (talk)

Rules are rules, you don't need to clarify it. — Trust not the Penguin (T | C) 22:38, 10 May 2008 (UTC)

A rule not established by the community (or more specifically, not embraced by it) should not only be ignored, but disregarded altogether. But in general, that a rule may or may not have been established by the community is irrelevant: if it prevents improvement, ignore it. --Father Goose (talk) 05:23, 11 May 2008 (UTC)

Indeed, it is not ignore rules established by the community, it is ignore all rules. I really think people don't get that "all" part. 1 != 2 18:04, 1 June 2008 (UTC)

[9] Over fourteen days between edits, almost certainly a record for this talk page. --NewbyG (talk) 00:50, 2 June 2008 (UTC)

## The twelve words are --

If a rule prevents you from improving or maintaining Wikipedia, ignore it.

• If a rule prevents you from improving or maintaining Wikipedia, ignore it. 10 May 2008 /Archive 15

I predict that the 12 words will remain on the project page, for the next two months at least.
--NewbyG (talk) 12:58, 14 May 2008 (UTC)

You got money riding on it or something? --Father Goose (talk) 19:04, 14 May 2008 (UTC)

Time passes slowly, waiting for nothing to happen. --NewbyG (talk) 22:40, 14 May 2008 (UTC)

Why not just fully protect the page? It's obviously not undergoing (or going to undergo) any kind of maintenance (be it minuscule or extensive), and most edits made to it are reverted fairly quickly. --KojiDude (C) 22:49, 14 May 2008 (UTC)

I don't think it's eligible for protection, for the same reason you've stated. Numerous users and admins watch this page. I'm not noticing any recent vandalism or edit warring (full protection is used to stop edit warring between multiple users or to prevent vandalism). SynergeticMaggot (talk) 22:55, 14 May 2008 (UTC)

Just because there has not been a consensus to change it in a while does not mean there will never be. I don't think we need to protect unless there are those who are actively seeking to edit war. 1 != 2 18:05, 1 June 2008 (UTC)

I thought I'd try out my genial idea, that Category:Wikipedia official policy should have certain standards, by visiting this page. I am working toward several presentation aspects of all official policy pages becoming standardized, just as other noncontroversial aspects like markup and organization are standard. What I have in mind right now is:

• Use the standard {{policy}} template, with new enhancements, to provide automatic categorization and a standard message.
• Demote the two statements currently in the notice box into "see also"s as follows:
• Add a paraphrase of Jimbo back into the notice box. Among the other ironies of this page is that editors have favored Jimbo's edit summary as content for the notice box while completely, um, ignoring what Jimbo put in the notice box himself. With the boilerplate, the notice would read:
• This page documents an official English Wikipedia policy, a widely accepted standard that all users should follow.
When editing this page, please ensure that your revision reflects consensus. If in doubt, consider discussing changes on the talk page. This page is fundamental to the working of Wikipedia: please pause to consider its long tradition and deep and subtle meaning.

• Add a nutshell that contains the exact same twelve words as the policy. This will also quell some rumblings I recall reading here at one point that the twelve words are not given enough prominence.

I hope consensus for these changes, or at least for trying them, will not be problematic. If there are concerns, please let me know how my goal of standardizing policy templates and categories should transpire. I believe all policies should be required to use the same policy template and should have a standard nutshell text box for quoting on other pages; I hope to see automated quoting in the future as well. I would appreciate it if competing proposals addressed these concerns. JJB 10:47, 4 June 2008 (UTC)

Add a nutshell that contains the exact same twelve words as the policy? Ugh. —David Levy 11:37, 4 June 2008 (UTC)

Sounds good, except no nutshell is needed; I think there is general agreement about that. I fully agree that like every other policy it should say "When editing this page, please ensure that your revision reflects consensus. If in doubt, consider discussing changes on the talk page"; not sure who removed it, probably someone who did not have consensus. 1 != 2 13:42, 4 June 2008 (UTC)

I'm fine with the standard policy tag, but I see no need for the ugly bloat that the nonstandard addition (This page is fundamental to the working of Wikipedia: please pause to consider its long tradition and deep and subtle meaning.) brings. We used similar text only as an alternative to the standard text, so I don't know what purpose it's supposed to serve now. Oh, and the colon should be a semicolon. —David Levy 16:42, 4 June 2008 (UTC)

Thanks for working on the template, David.
Actually, the point of the nonstandard addition is that it's nonstandard and thus teaching by example. I think that's why Jimbo put it there in the first place. The same is true for a redundant nutshell: it's one place where redundancy really would make a point. It's also very koanic. JJB 17:03, 4 June 2008 (UTC)

I like the new changes; I can take or leave "This page is fundamental to the working of Wikipedia: please pause to consider its long tradition and deep and subtle meaning". I think a repetitive nutshell is counterproductive. If anything the nutshell should be "Use your brain", but I don't think we need one at all. 1 != 2 17:06, 4 June 2008 (UTC)

Firstly, the added text doesn't contradict any rule that I'm aware of, so it isn't an application/demonstration of this policy. Secondly, even if it did contradict a rule, it's my opinion that it doesn't improve the page (and therefore still is not an application/demonstration of this policy). The nutshell idea seems bizarre and likely to cause confusion. —David Levy 17:14, 4 June 2008 (UTC)

Standards? On this page? -- Standardising policy tags sounds like a good idea. -- And I generally like the new changes, though I am a bit yes-and-no with "This page is fundamental to the working of Wikipedia: please pause to consider its long tradition and deep and subtle meaning". Are those words directly from User:Jimbo or something perhaps? -- I would say that a repetitive nutshell is counterproductive. -- Standards? On this page? Happy to participate in further discussion. --NewbyG (talk) 23:45, 4 June 2008 (UTC)

I agree with the removal of "other versions"; it seems to be a collection of things that did not gain consensus. 1 != 2 00:15, 5 June 2008 (UTC)

## Hmm...

"If a rule prevents you from improving or maintaining Wikipedia, ignore it." That sounds good, but there's one thing. IP blocking is a 'rule' that prevents you from improving Wikipedia. You can't really ignore it. Otherwise...
--MasterOfTheXP (talk) 23:21, 12 May 2008 (UTC)

When IPs are blocked it's almost always because of vandalism. And vandalism does not improve the 'pedia in any way, shape or form. But that doesn't stop the IPs though. :) SynergeticMaggot (talk) 23:28, 12 May 2008 (UTC)

Sure you could ignore it; you could go for a walk or play a game of chess. 1 != 2 18:04, 1 June 2008 (UTC)

That's very observant. :-) The "rules" mentioned in IAR are the rules found in the project namespace (Wikipedia:xxxxxxx). There are also a number of rules enforced by software. People tend to forget those software-enforced rules when they argue that "IAR would turn Wikipedia into an anarchy". :-P --Kim Bruning (talk) 21:19, 2 June 2008 (UTC)

I think it would be cool if someone made a clone of Wikipedia, copied all the articles, and made it completely anarchistic, with maybe a few editors correcting spelling etc., just to see how long it lasts before the whole site just implodes on itself in a great big internet black hole. Am I crazy? Zell65 (talk) 07:01, 7 July 2008 (UTC)

## ALL rules?

So does that mean that if the rule to ignore rules impedes our progress in editing, we ignore the rule that's telling us to ignore the rule that we'd actually be following if we ignored it? This rule seems very paradoxical. Noone (talk) 22:32, 17 June 2008 (UTC)

But "ignore all rules" isn't a "rule", it's just the application of common sense; so no problem! But, yes, if "ignoring rules" made constructing the encyclopaedia more difficult, you shouldn't do it; common sense again. --tiny plastic Grey Knight 08:44, 18 June 2008 (UTC)

Thanks. But you do realize I asked this just as kind of a joke, right? Still, thanks! Noone (talk) 22:43, 18 June 2008 (UTC)

Yes, we all do realize the paradox. That paradox is what gives Wikipedia its life, so we take even the flippant questions seriously - they go to the heart of the project and we're happy to talk about them with whoever comes along.
Think hard about your question, it's a good one, and the possible answers will help you to understand the wiki-way. Franamax (talk) 23:56, 18 June 2008 (UTC)

No, not all rules, just the ones that prevent you from improving or maintaining Wikipedia. I fail to see how ignoring a rule that prevents you from maintaining or improving Wikipedia could ever prevent you from improving or maintaining Wikipedia. 1 != 2 00:31, 19 June 2008 (UTC)

Unless you neither obeyed it, nor disobeyed it; nor neither obeyed nor disobeyed; nor neither obeyed nor disobeyed nor neither obeyed nor disobeyed; nor neither obeyed nor disobeyed nor neither obeyed nor disobeyed nor neither obeyed nor disobeyed nor neither obeyed nor disobeyed; nor ... you get the idea. ;-) --tiny plastic Grey Knight 06:54, 19 June 2008 (UTC)

However I both obeyed it, and disobeyed it; also both obeyed and disobeyed; and also obeyed and disobeyed whilst neither obeying or disobeying; and neither obeyed or disobeyed while neither obeying nor disobeying while still obeying and disobeying obeyance and disobedience; nor ... you get the idea. Franamax (talk) 07:12, 19 June 2008 (UTC)

Also, buffalo buffalo buffalo buffalo buffalo buffalo buffalo! In other words, you obeyed but then disobeyed; and disobeyed, but then you obeyed? Looking at the page for the first time in a couple of months, I think I might be more in the terse camp now. Not quite terse. Terse minus. Or plus, if you will. 69.49.44.11 (talk) 15:07, 19 June 2008 (UTC)

Brevity is... 1 != 2 15:27, 19 June 2008 (UTC)

## Relevant Quotations? Lengthening? Alternative Audio?

After dentist Horace Wells used anesthesia for the first time in history to extract a tooth painlessly (1844), his associates suggested that he get a patent. He said, "Let it be free as the air." Bibliography -48

"I do my thing and you do your thing. I am not in this world to live up to your expectations and you are not in this world to live up to mine.
You are you and I am I, and if by chance we find each other, it's beautiful. If not, it can't be helped." - The Gestalt prayer, by psychologist Fritz Perls Bibliography -69

"To punish me for my contempt for authority, Fate made me an authority myself." - Albert Einstein Bibliography -93 (page 24)

"We have no system; we have no rules, but we have a big scrap heap." - Thomas Edison (Some of his workers called it the "dungyard.") Bibliography -79

"Get your facts first, and then you can distort them as much as you please." - Mark Twain Bibliography -9R

"It is better to accomplish perfectly a very small amount of work, than to half do ten times as much." - from the book, Inquire Within, 1858 Bibliography -61

"What we ought to do now, obviously, is suspend all activity until we can hold a plebiscite to select a panel that will appoint a commission authorized to hire a new team of experts to restudy the feasibility of compiling an index of all the committees that have in the past inventoried and cataloged the various studies aimed at finding out what happened to all the policies that were scrapped when new policies were decided on by somebody else. Once that's out of the way, I think we could go full steam ahead with some preliminary plans for a new study with Federal funds of why nothing can be done right now." - North Dakota Senator I.E. Solberg Bibliography -62

"Die when I may, I want it said of me by those who knew me best that I always plucked a thistle and planted a flower where I thought a flower would grow." - Abraham Lincoln, 1865 (the year of his death) Bibliography -90

"Every day, in every way, I am getting better and better." "Tous les jours, à tous points de vue, je vais de mieux en mieux." - Émile Coué

"My policy is to have no policy." - Abraham Lincoln

All quotations ruthlessly stolen from [[10]], where you can also find a hard-to-match bibliography.
BrewJay (talk) 14:45, 22 June 2008 (UTC)

## Clarifying this policy

I believe IAR is a great policy, and is essential; however, I don't like the current wording of the policy. The policy itself is unclear. I suggest that we give some examples of what it means, for example, "You don't have to learn policy before editing", or "You shouldn't follow policies like a brainless robot, but should instead carefully think of the consequences of following policy, and do what you think is best for Wikipedia." I also suggest we give examples of what the rule doesn't mean. For example, it doesn't give you an excuse to break any of the 5 Pillars. However, it should be clear that these are just a few examples, and there are more. I see the current vagueness of the policy as a problem; although it will never be 100% clear, it could and should be clearer. Some users, especially newbies who are young, might look at this and take it very literally. Their literal interpretation of the policy may lead them to do inappropriate things. It would do everyone a favor to leave a little less room for misinterpretation.--SJP (talk) 12:37, 3 July 2008 (UTC)

Well, there is already more than one essay interpreting IAR, but in the end all it means is that if a rule prevents you from improving or maintaining Wikipedia then you can ignore it. Anything else is just reading into it content that just is not there. For example, saying that IAR does not apply to the 5 pillars is reading too much into it. While it is unlikely that those pillars would prevent us from improving or maintaining Wikipedia, if they did stand in our way we would not sit on our thumbs and blindly follow them. If you take this policy literally you will do fine if you have the ability to determine what is and is not improving or maintaining Wikipedia. If you don't have the ability to determine what is and is not improving or maintaining Wikipedia, then this rule's clarity is not the problem.
I do not see the policy as unclear; it tells me what I can do and when. What can I do? Ignore rules. When can I do it? When they prevent me from improving or maintaining Wikipedia. Really not sure how it could be more clear; I think additions would detract from the clarity. 1 != 2 12:40, 3 July 2008 (UTC)

I respect your opinion, and I understand your opinion, but I still disagree with you:-) It would be more clear if it stated what IAR isn't an excuse for. Will you please explain how adding that information would make the policy less clear? Thanks for taking the time to respond, and to give your point of view:-)--SJP (talk) 15:39, 3 July 2008 (UTC)

Well, IAR is designed to work when our preconceived notions of what our best practices are (policy) fail us. I think an attempt to include when it should and should not be used will reduce its ability to work in unforeseen circumstances. That is why it is ignore all rules, not ignore some rules. It is a safety measure to make sure we do not get bound up in precedent and to ensure the first priority is building and maintaining an encyclopedia. To attempt to lay out ahead of time when it is best used and when/how it should not be used defeats the purpose of the policy, which is to allow correct action when the rules fail us due to unexpected circumstances. 1 != 2 17:20, 3 July 2008 (UTC)

"I think an attempt to include when it should and should not be used will reduce its ability to work in unforeseen circumstances." I think if we dictate when it should and shouldn't be used we'll reduce its ability to work in unforeseen circumstances, but I don't think we reduce its ability if we just offer examples of when it should be applied, and when it shouldn't.--SJP (talk) 17:53, 3 July 2008 (UTC)

Examples can be added to the linked essays. As Until says, adding specifics to this policy page will tend to confuse things; people may then adopt a tendency to narrowly read the examples as being part of the policy.
Franamax (talk) 18:08, 3 July 2008 (UTC)

"people may then adopt a tendency to narrowly read the examples as being part of the policy" I actually find myself in agreement with that statement. Some people here seem to have a tendency to strictly follow policy. For that reason I no longer agree with my suggested reform; however, if it weren't for that tendency some people have, I think it would be a good idea. Thanks for bringing that up:-)--SJP Chat 18:41, 5 July 2008 (UTC)

This has been brought up before, but isn't a part of the problem the word 'ignore'? Rules should not be ignored, just not seen as a final word. Shouldn't it be more something like 'rules are not set in stone'? CitiCat 14:04, 6 July 2008 (UTC)

Well, one thing is that the name "ignore all rules" has historical weight by now, and is unlikely to be changed. The other is that it reflects actual practice: most editors do ignore the rules most of the time. The rules are a fallback for resolving disputes, maintaining consistency, and so on, but most of the time, the encyclopedia works because we just hit the button and make sensible changes.--Father Goose (talk) 21:10, 6 July 2008 (UTC)

No, it is perfectly appropriate for any reader to edit an article without reading any "rules". Most people are perfectly capable of understanding what belongs in an encyclopedia. —Centrxtalk • 04:14, 7 July 2008 (UTC)

## Ignore everything?

So by this guideline, if keeping a neutral point of view is impossible, say because there is no-one to provide an alternate POV, should an article just assume that there is only one viewpoint on an issue, or what? This whole rule seems rather superfluous anyway. After all, if the other rules are followed to the letter, doesn't that automatically make the article good? In what case does one actually invoke this rule, without others arguing that the invoker is simply claiming that he was following IAR to improve the article? Perhaps this whole rule should be re-worded to be less vague.
Zell65 (talk) 07:17, 7 July 2008 (UTC)

No, it is worded "vaguely" on purpose; it's not a rule that can ever be fully explained. It depends on your judgement - are you really really trying to improve the encyclopedia? Is a rule getting in the way of your improvement? Then ignore it. Don't worry about it, just be bold. HOWEVER, be very prepared to explain to all and sundry exactly why you felt it was necessary to ignore the rule. If you're right, you will find that people agree with you (if you're really right, they might even decide to change the rule; nothing is set in stone here [except IAR and a few other basic principles]). It may (and quite likely will) turn out that the community thinks you're wrong, in which case your boldness will get swiftly reverted. In that case, it's important to just drop the IAR thing and switch over to "lesson learned": discuss it nicely, present your reasons, accept when most others don't feel the same way. There, now I just tried to give some more explanation; I bet it's still not enough. There is no good way to explain Ignore All Rules; it's a way of thinking more than a way of doing. I should obey the rules at all times, but I can also constantly question the rules and ask what the rules are for - if the rules aren't good enough, there may be some rare cases where I need to ignore them, to make a better encyclopedia. But putting in an article about the band me and Joey just made up and we might get this record deal and this guy at the local bar says maybe we can play there Friday - there's no rule you can ignore to make that article and have it stay. Franamax (talk) 08:06, 7 July 2008 (UTC)

## Suggested guideline

Policies should be applied only if doing so would help fulfill the policy's stated purpose or provide the project with some benefit. Example: A stub article contains a two-sentence description copied from the subject's website.
The subject wouldn't conceivably object to the copying (and let's assume the subject is notable). If there's really no possibility the copyright owner would object, there's no possibility of liability for the Foundation, impairment of the GFDL license, or the other purposes WP:COPYVIO fulfills, even if the two sentences might not be regarded as fair use. Since the stated purposes of WP:COPYVIO wouldn't be furthered by deleting the two sentences or the article containing them, we shouldn't simply apply it mechanically. Best, --Shirahadasha (talk) 06:31, 4 August 2008 (UTC)

## Kibitzing requested

If I may, I'd like to ask some of the editors here to weigh in at User talk:Ling.Nut/3IAR; it's my contention that the essay User:Ling.Nut/3IAR is very far away from the common interpretation(s) of IAR, and that as a result it should not be included in the "see also" section of WP:WIARM.--Father Goose (talk) 07:04, 8 July 2008 (UTC)

Huh? Rather than comment there, I'd ask anyone looking here to head on over to WP:Plagiarism and pitch in to help us build the page. Very excellent ripoff of Three Laws of Robotics - I suppose it's a fair-use paraphrase? To the point, I can't see that page having a place linked from WIARM - isn't WIARM supposed to help people to understand? Put a humor tag on top, I suppose it would be fine; overlay it with the audio from Mr. Roboto, maybe. As a serious contribution to policy, no way. Feel free to copy my response wherever you wish. Franamax (talk) 07:26, 8 July 2008 (UTC)

Well, that just shows my ignorance of Mr. Asimov's works. And it would explain why it seems so out-of-sync with Wikipedia's existing guidance. Nonetheless, from what I can tell based on my conversations with Ling.Nut so far, it is meant to be a serious explanation of IAR and related ideas.--Father Goose (talk) 22:04, 8 July 2008 (UTC)

That's Doctor Asimov to you! :) He made many many workses, some were stellar, some were, ummm...
I have no doubt that Ling.Nut is making a sincere attempt to contribute, but they are trying to shoehorn an expansive idea into a pre-existing framework and it doesn't fit all that well. Laws 1 and 2 seem to be reversed IMO. I won't even start on the image, which seems to conflate communism with Leninism, Stalinism and Maoism and introduces an unwelcome off-site reference (yeah, better to not go down that path!). So my call is, good effort, bad idea. I'm not sure I wish to enter into extended debate on this, but if it's not already done, I'll go over and link this thread on LN's sub-page talk. Franamax (talk) 22:49, 8 July 2008 (UTC)

Hi folks. Throwing the word "plagiarism" around colors the discussion in distinctly unhelpful (and highly inaccurate) ways. It is a path that leads to much heat (well, it could, I mean, but I'm not interested in that outcome) and a vast reduction in the amount of light. For merely one example out of thousands: think of Terry Pratchett's extended and blatant (but unconscious, wink wink nudge nudge) spoof/borrowing/reworking of Fritz Leiber - weren't there even a pair of heroes in the first novel that were a patent spoof/borrowing/reworking of Fafhrd and the Gray Mouser? But that discussion is irrelevant. The word "plagiarism" should never have been typed, or at least, the "save" button should never have been pressed after typing. I won't say more on that topic; if you wanna have the last word, you can. The overarching point of WP:3IAR is that some rules/principles are unavailable for being ignored (grammar? sigh.). IAR is not the overarching principle of Wikipedia; the other four pillars of WP:5P stand in that position. Wikipedia is here to improve its content, but no one can be harmed along the way. End of story. Therefore, WP:3IAR accomplishes two goals: it stands firstly as a determined affirmation of what has always existed as the standard by which all behavior is evaluated...
and secondly as a refutation of adding new crappy "I'm an admin, my opinions therefore reflect Wikipedia, therefore let's codify them" rules along the way. The first goal is given more stress than the second, but both exist. Ling.Nut (WP:3IAR) 02:11, 9 July 2008 (UTC)

I find new users have a greater tendency to try to create rules that compel their views than experienced users (such as admins) do. Most admins have learned that editing policy, especially in a way that reflects their views and not the community's, tends to fail. There are some exceptions where a sufficiently large and influential group of admins have edit-warred their way to "success" (WP:SPOILER is the most prominent case of this I can think of), but thankfully that is rare. Much much more common is users (not even necessarily admins) getting their way through aggressive, tendentious behavior, whether or not it's supported by policy, policy rewrites, or most commonly, misrepresentations of policy. Admins are not nearly as powerful as you seem to think they are... the most powerful editors are those that have been around a long time and know best how to navigate (and sometimes game) the system. Admin powers are secondary to that knowledge in terms of exercising influence over Wikipedia.--Father Goose (talk) 23:50, 9 July 2008 (UTC)

I'll just try a few points here:

• Admins do not make rules, editors make rules. Admins enforce rules. FG has stepped on this a wee bit above, but in their comment above, change "group of admins" to "group of experienced editors, the majority of whom were admins" and it comes together better. (Bias: I agree with WP:SPOILER ;)

• To say that again, admins have no special status in the making of rules - policies are formed by consensus and strength of argument. It does happen though that the more committed and experienced users in many cases are also admins.
Almost by definition, admins are more experienced in the promise and pitfalls of policy, so they will tend to be more involved, but they have no exclusive right.

• You, Ling.Nut, are welcome to contribute to any policy discussion you want, anywhere you want. If you think there's some rule that prevents you from wading into discussions where only admins are contributing - ignore it. You might find that no-one is responding to you, but as long as you don't get upset about that and cause a scene, you might also find that very subtly, people are adopting your ideas.

• And least important - I didn't use the word plagiarism, I put up a link to WP:Plagiarism, where we can use eyeballs and thoughts. I'll continue to hit "Save" on that, until we get a good working page fleshed out. Thanks for putting in the link to attribute the inspiration for your essay. Franamax (talk) 07:37, 10 July 2008 (UTC)

Might be worth adding to WP:IAR/V. —Ashley Y 03:03, 16 July 2008 (UTC)

## /Versions

I agree with various others that the Versions subpage is not a useful link in its current form. I guess I'd never actually looked at it until recently; my concern was always that the original "if it makes you nervous" form was represented, and it is by its own separate link. The problem with the Versions page is that it is a catch-all of "some versions, or suggestions". There is no indication of the history and level of acceptance, nor any links to why the suggestions were not adopted. As Father Goose says, it's a muddle. I don't see where it will help in understanding IAR, since it provides no context. If there's a good way to help the user understand the genesis of IAR other than saying "read the entire page history and talk archives" I'm all for it. /Versions is not the good way. Franamax (talk) 06:26, 7 August 2008 (UTC)

I agree and support the link's removal. —David Levy 06:38, 7 August 2008 (UTC)

I support the link's removal.
The page contains some real other versions of the policy as it has been in the past, but much of it is nothing more than a list of wordings that did not gain consensus labeled as "other versions". Much of the content of the page never really was an other version of this policy (other than being placed there without prior discussion then reverted). The "other versions" link is at the top of each page and called "history". Chillum 15:33, 7 August 2008 (UTC)

I disagree. There's a great deal of disagreement over the best wording for this policy, and some of the versions offer insight that the present version does not (due to the obsessive insistence on brevity). It's not supposed to be a list of historical versions, but a list of suggested versions. —Ashley Y 08:47, 8 August 2008 (UTC)

Please see Wikipedia:Content forking. —David Levy 19:53, 8 August 2008 (UTC)

This isn't content. And it's no more a fork than WP:WIARM. It's just more views on the subject. —Ashley Y 20:38, 8 August 2008 (UTC)

1. It isn't content? Then what the heck is it? Are you attempting to wiki-lawyer on the basis that this is a project page instead of an article? 2. WP:WIARM elaborates on the policy. The /Versions page is a disorganized dumping ground for different versions that failed to retain/achieve consensus, and there's a similar lack of consensus for it to be linked to. —David Levy 20:45, 8 August 2008 (UTC)

1. You're the one arguing for removal based on a policy intended for article space, so I think you're the one wiki-lawyering. 2. /Versions also elaborates on policy by providing different perspectives on it. —Ashley Y 20:55, 8 August 2008 (UTC)

Actually, there are at least four editors asking for removal - you've been reverted a few times now. Time to talk. OR you could improve /Versions to address the concerns. Franamax (talk) 21:24, 8 August 2008 (UTC)

I am talking. Look, here's me talking: /Versions elaborates on policy by providing different perspectives on it.
I'm very open to ways in which it might be improved, however. —Ashley Y 21:35, 8 August 2008 (UTC)

1. Actually, I'm arguing for the link's exclusion based on the lack of consensus that it improves the page (and my opinion that it doesn't). I cited Wikipedia:Content forking to provide an explanation of why content forking is unhelpful (not to say "we can't do it because a policy says so," which is a poor argument). Would you care to explain why you believe that the principle doesn't apply here? 2. No, the /Versions page is a backdoor method of displaying non-consensus wordings. —David Levy 22:30, 8 August 2008 (UTC)

1. It's a matter of spirit vs. letter, I suppose. Wikipedia:Content forking was intended for article space: that's its spirit. You're trying to use its letter to apply it to something for which it wasn't really intended, in order to argue for a change. I believe this is known as "wiki-lawyering". Perhaps it would have merit if /Versions were claiming to be another policy. 2. Actually, it's different perspectives on the idea, only some of which were attempted to become wordings for IAR. It's not claiming to be official policy or anything, it's merely a helpful collection of views and understandings. —Ashley Y 22:38, 8 August 2008 (UTC)

Again, I'm not arguing for the link's exclusion on the basis that the guideline says so. I cited it because it contains an explanation of why this is a bad idea. You then responded by pointing out that IAR isn't "content" (by which, you apparently meant "an article"), which is irrelevant to whether the advice is applicable. This is, indeed, a matter of spirit vs. letter, but our positions are the opposite of what you claim. All of the wordings in question are non-consensus forks of the policy (regardless of whether they've been formally proposed as replacements), and there is no consensus that they're "helpful." —David Levy 23:09, 8 August 2008 (UTC)

You're taking /Versions much too seriously.
It's not supposed to be any kind of policy page, so it can't itself count as a fork. This is what Wikipedia:Content forking is all about: parallel articles on the same topic from different POVs, and it's a bad idea because such articles should typically be merged to achieve some sort of neutrality. I suppose you could argue that each entry in /Versions is a fork, but none of them make any claim to be policy. Wikipedia:Content forking isn't going to help you here without a lot of bending of its intent. —Ashley Y 23:34, 8 August 2008 (UTC)

The lack of a {{policy}} tag doesn't change the fact that the /Versions page is a disorganized mishmash of forked, non-consensus text. Linking to it from the policy is no better than linking to a list of non-consensus versions of an article. —David Levy 02:41, 9 August 2008 (UTC)

It rather does change the fact, actually. It's "forked, non-consensus text" only in the sense that there's no consensus for it to be policy. This is a rather different case than an article, or indeed any other policy, as the actual text is deliberately kept short while explanation is kept on different pages (such as WIARM). /Versions is merely more explanation, albeit in an elliptical form. —Ashley Y 02:53, 9 August 2008 (UTC)

What, in your assessment, does the page explain? —David Levy 03:15, 9 August 2008 (UTC)

It helps explain IAR, albeit in an elliptical form. People get a better understanding of IAR's spirit by considering different wordings. None of these should threaten the "official wording" here, which is the actual policy. —Ashley Y 03:26, 9 August 2008 (UTC)

There is no consensus that those wordings are accurate or helpful. Anyone can post whatever interpretation they please. —David Levy 03:43, 9 August 2008 (UTC)

The thing is, people come here all the time attempting to change this policy.
There's a reason for that: it's so brief it's hard to understand what it means: for instance, what exactly counts as "improving or maintaining" in the absence of the rules? The various attempts people have made to change or reword this policy collectively help understand its spirit. Now of course we need one single official IAR, but the various versions reflect others' attempts to clear up what they found confusing. —Ashley Y 23:34, 8 August 2008 (UTC)

We have explanatory pages for that. A context-free list of forked wordings clarifies nothing. If anything, it might increase people's confusion. —David Levy 02:41, 9 August 2008 (UTC)

To be honest, I had intended /Versions to act as a lightning rod for the continual attempts to change the policy. These attempts actually include some good ideas, and I believe those are ideas worth saving. We can have the One True Wording and still let people get an idea of the variety of interpretation. —Ashley Y 23:57, 8 August 2008 (UTC)

Yes, I know that's what its purpose has been. However, there haven't been a lot of lightning strikes lately, and as a list of discarded "variants", I don't think it's a useful page to link to. (I'm not necessarily saying that the variants should have been discarded, but unless they're presented in a coherent way -- not just a "list of versions or suggestions" or however you want to phrase it -- I don't agree that linking to it from IAR improves IAR.)--Father Goose (talk) 00:30, 9 August 2008 (UTC)

By "lately", you mean while the /Versions link was up, right? (Although there was one.) It's helpful to consider that regardless of whether there exists a better wording than the present, there may still be deficiencies in the existing wording: deficiencies that various people have perceived and attempted to address. Reading them might help to understand the spirit of the original. —Ashley Y 00:43, 9 August 2008 (UTC)

I think one good wording and/or explanation is better than a dozen flawed ones.
In that vein, I wrote WP:UIAR months ago, and I still feel that IAR's deficiencies could be overcome by simply placing UIAR's text on the IAR page itself.--Father Goose (talk) 02:06, 9 August 2008 (UTC)

Yes, I think WP:UIAR is better than the present page. The key for me is that it mentions that consensus is important in successfully "ignoring the rules". And what's more, your page is potentially more open to any improvements and clarifications people may have, and a common understanding can evolve. /Versions might not be so necessary then. —Ashley Y 02:19, 9 August 2008 (UTC)

That would convey to users that they must read a lengthy page before ignoring rules (which simply isn't so). What's the problem with the current setup? —David Levy 02:41, 9 August 2008 (UTC)

There really is no problem if there are sufficient people to help interpret IAR. Currently that unfortunately happens at WP:IAR, WT:IAR, WP:WIARM, WT:WIARM, WP:UIAR and WT:UIAR. It's a constant back-and-forth of edits, reverts and talk threads. What we really need is something like the Help Desk or the Reference Desks - a neutral spot for people to come and ask questions about IAR. Franamax (talk) 03:07, 9 August 2008 (UTC)

Why do you even need people to interpret IAR? Why don't you just figure out what it means, and put it on the page, like every other policy? Is the brevity really that valuable? —Ashley Y 05:03, 9 August 2008 (UTC)

## Wording of IAR

There are two versions that keep being reverted back and forth, so let's discuss it here:

If a rule, including this one, prevents you from improving or maintaining Wikipedia, ignore it.

OR

If a rule prevents you from improving or maintaining Wikipedia, ignore it.

I don't see what about the first one makes it so desirable. Anyone want to elaborate? NuclearWarfare contact meMy work 20:01, 8 August 2008 (UTC)

Yes, it's for clarity. It's not immediately obvious that the Ignore All Rules policy itself may be ignored.
For instance, someone may still want to follow all the other policies, even though they think some of the policies prevent them from improving the encyclopedia, simply because they think following the rule of law is better for the encyclopedia than subjectivity. They may even distrust their own judgement on what is best for the encyclopedia, and therefore believe it's best to leave this to the lawmakers on all other policies besides this one. In these cases, they should be aware that they may ignore the "Ignore all rules" policy, and continue to abide by all the other Wikipedia policies. Ignoring the "Ignore all rules" policy may be best for improving and maintaining the encyclopedia. Richard Blatant (talk) 20:08, 8 August 2008 (UTC)

1. If someone believes that it's "better for the encyclopedia" to follow a rule than to ignore it, doing so is entirely consistent with IAR; one needn't ignore IAR to abide by another rule. 2. Nowhere is it remotely implied that IAR cannot be ignored, so this additional text is superfluous. 3. You referred to the addition as "non-controversial," despite the fact that it's been discussed and rejected in the past. You might not have been aware of this, but when I pointed it out, you replied that "according to 'Ignore all rules' consesensus [sic] is not necessary if it prevents me from improving the encyclopedia." I don't know whether that's a sincere rationale or an attempt to make a point, but it's been firmly established that IAR is not an invitation to unilaterally overrule consensus. (Doing so does not help to improve or maintain Wikipedia.) 4. You blindly reverted to the previous version (thereby reinstating an unrelated change). Please pay better attention to your edits. —David Levy 20:45, 8 August 2008 (UTC)

There's no rule against going against consensus anyway, so what you're saying is irrelevant. Richard Blatant (talk) 21:29, 8 August 2008 (UTC)

How does that address what I wrote above?
—David Levy 22:30, 8 August 2008 (UTC)

WP:CONS. It's kind of the ultimate rule, in terms of enforcement. If you defy everyone else by edit warring, you'll get tossed on your ass. If you disagree with everyone without actually edit warring, that's okay, though consensus is still upheld in such a scenario.--Father Goose (talk)

In addition, it's your opinion that going against consensus "does not help to improve or maintain Wikipedia." That's a very naive statement. It's easy to imagine cases where doing something the consensus is against improves the encyclopedia. Richard Blatant (talk) 21:36, 8 August 2008 (UTC)

I meant that continually edit-warring in a manner defiant of consensus doesn't help to improve or maintain Wikipedia. Whether the actual edit does is irrelevant, as it will be reverted according to consensus (and if someone persists in unilaterally reinstating it, he/she probably will be blocked). This disruptive series of events doesn't help to improve or maintain Wikipedia. —David Levy 22:30, 8 August 2008 (UTC)

And if you don't think deciding this point (or protecting pages over it) will help you improve the encyclopedia, then...--Shirahadasha (talk) 20:55, 8 August 2008 (UTC)

I'm not especially in favour of this particular change, but David Levy inadvertently points out the problem with this policy: it's been firmly established that IAR is not an invitation to unilaterally overrule consensus. (Doing so does not help to improve or maintain Wikipedia.) Why should anyone believe that? An editor might have their own opinion on what counts as improving or maintaining Wikipedia. Of course, the rules explain what it is to improve or maintain Wikipedia, but IAR says we can ignore them... —Ashley Y 21:06, 8 August 2008 (UTC)

### Common sense

The only thing missing in this discussion is common sense - and it's linked from the IAR page. Everyone has their own opinion on what improves or maintains Wikipedia.
That's why there are some explanatory essays linked - and they discuss the need for consensus and common sense. Blindly reverting to your preferred version whilst citing IAR is just not on - and blocks follow. Discussion, prudence and forbearance work much better. Semantic arguments such as this thread are probably the worst approach. Franamax (talk) 21:35, 8 August 2008 (UTC) Yes, yes, that's all explained in the rules. Which we are told we can ignore. —Ashley Y 21:37, 8 August 2008 (UTC) ...if doing so helps to improve or maintain Wikipedia. Anyone who reads the explanatory pages will know what that means, and those who believe that it's a good idea to disregard the policy's spirit in favor of exploiting an apparent loophole (by ignoring the explanations themselves) will quickly realize their error when they find themselves blocked. —David Levy 22:30, 8 August 2008 (UTC) "Common sense" tells me that a change is not an improvement because the consensus approves of a change, but simply because the change is an improvement. If it is an improvement, it is an improvement regardless of what the consensus says. Richard Blatant (talk) 21:39, 8 August 2008 (UTC) I agree with this analysis. IAR says you should go ahead and make such changes, since after all they are improvements even if everyone else disagrees. WP:Consensus says you shouldn't make such changes. The latter is what we want, I believe. —Ashley Y 21:44, 8 August 2008 (UTC) I also agree with this analysis. IAR says to go ahead and make such changes, since after all they are improvements. That's being bold. And then when everyone else disagrees, WP:Consensus says you shouldn't go on making such changes again when they have been discussed and consistently rejected. --NewbyG (talk) 02:26, 9 August 2008 (UTC) Moreover, I'm dubious that a policy exists that says consensus must be abided by in the first place. I'd like to see proof of such a policy. 
I don't see anything in WP:Consensus that says one has to go along with consensus. Richard Blatant (talk) 21:46, 8 August 2008 (UTC) (e/c) Here's the thing - the policy has been kept to the canonical twelve words over a long period of time precisely because it is the minimal statement. It is deliberately minimal, it's intended to make you think. Not argue in circles about metaphysical notions of ignoring rules to ignore rules - just think on your own what is right. What helps you maintain or improve the encyclopedia? It's not an invitation for you to explain the not-rules to others or to find circularity in the concept. It's a guide to help you decide what to do in the million other scenarios you will find on Wikipedia. And you are not told you can ignore the rules - no external body is telling you that, it's all of us together. As long as you think it's someone else telling you to ignore rules, you haven't understood IAR. We seek consensus, we challenge consensus, we are sometimes bold, we always discuss. We can each always use IAR to do something - once. After that we move to WP:BRD to resolve the situation. As the explanatory essays point out, IAR does not mean you're right no matter what. (And RB, perhaps you need to absorb a little more culture and ethos of Wikipedia. If you continually ignore consensus you get blocked, simple as that.) Franamax (talk) 21:52, 8 August 2008 (UTC) "It is deliberately minimal, it's intended to make you think." That's ridiculous. Why make someone "think" if it can be expressed explicitly? What is wrong with making it clear that the "Ignore all rules" policy itself is included in the rules that are to be ignored? Again, I don't believe that a rule against going against consensus even exists on Wikipedia. I'd like to see it. Richard Blatant (talk) 21:55, 8 August 2008 (UTC) Yes, Franamax, this too is all explained in the rules, which this policy says we can ignore. 
—Ashley Y 21:56, 8 August 2008 (UTC) ### If doing so ...if doing so helps to improve or maintain Wikipedia. Anyone who reads the explanatory pages will know what that means, and those who believe that it's a good idea to disregard the policy's spirit in favor of exploiting an apparent loophole (by ignoring the explanations themselves) will quickly realize their error when they find themselves blocked. —David Levy 22:30, 8 August 2008 (UTC) Let me get this clear. You said "If you continually ignore consensus you get blocked, simple as that." Are you saying that there is a rule against ignoring consensus? If so, I'd like to see it. Or are you saying that you're simply going to make up your own rule and block, because you "Ignore all rules?" Richard Blatant (talk) 22:01, 8 August 2008 (UTC) (e/c) RB, the whole idea is to make you think. IAR can never be completely laid out. In your specific instance, if all rules can be ignored, then the rule to IAR can be ignored too - that's a trivial result. It's not necessary to state it explicitly. More generally, there is a continual desire to encumber IAR with explanations of what it means - but it doesn't mean anything. Every example you add only constricts the rule to a more narrow interpretation, but it's deliberately meant to be interpreted widely and tried in all situations. Strength of reasoned argument and consensus determine whether or not your individual IAR'ing is a good thing. RB, secondly, consensus is the rule here. I don't have the page off the top of my head, maybe it doesn't exist. I'm not going to put effort into backing up the statement, you can easily research it yourself. Suffice to say, if you press ahead and ignore consensus to do whatever you want, especially if you justify it with IAR, you will very quickly find out how Wikipedia works - it works by consensus. (after e/c - I'm not an admin but the rule would be "blocked for disruption") AY, I can only try to explain. 
You are free to ignore a rule if it helps you maintain or improve the encyclopedia. When people object to your application of IAR, again, you are free to ignore it. I can only suggest that you think very carefully about what you are doing; if people object, you need to consider the possibility you're wrong (I do that all the time BTW). And consider changing your approach to make your ideas more acceptable. Just don't get stuck on "I'm right", that rarely works out. Ignoring rules is a very difficult concept - unless you find that point of mind where suddenly it's clear. I'm sorry I can't convey that idea. Changing the policy page text won't bring it any closer. Franamax (talk) 22:20, 8 August 2008 (UTC) Yes, this is explained in WP:Consensus -- which is one of the rules that this policy says we may ignore. —Ashley Y 22:32, 8 August 2008 (UTC) ...if doing so helps to improve or maintain Wikipedia. Anyone who reads the explanatory pages will know what that means, and those who believe that it's a good idea to disregard the policy's spirit in favor of exploiting an apparent loophole (by ignoring the explanations themselves) will quickly realize their error when they find themselves blocked. —David Levy 23:09, 8 August 2008 (UTC) But why should anyone pay attention to the "explanatory pages"? Surely if they were important, they'd be part of the policy? And in fact one needs to understand WP:Consensus together with WP:Bold to get a good idea of what passes as improving or maintaining Wikipedia. So sure, if a rule prevents you from improving or maintaining Wikipedia, ignore it, though you might get blocked anyway if you didn't grasp the precise meaning of "improving or maintaining" used here. —Ashley Y 23:44, 8 August 2008 (UTC) Users don't need to read the explanatory pages before applying IAR. They don't even need to read IAR before applying it, and they needn't read any rules before editing. 
Editors acting in good faith—even in complete ignorance of the rules—generally do more good than harm. When they err, we don't block them; we correct/explain their mistakes and direct them to pages that assist them in editing constructively. The same is true here. If someone applies IAR inappropriately, we don't rush to block them; we explain the situation and direct them to the explanatory pages. Only if/when they subsequently continue down a disruptive path do they risk being blocked. —David Levy 02:41, 9 August 2008 (UTC) ### Indeed users Indeed users do not need any of that. The problem is those who use IAR to ignore a rule because they believe they are improving the encyclopedia. Oh sure, we'll correct them, and they should listen per WP:CONSENSUS, which is one of the rules... The trouble is, it's actually rules (for example, WP:CONSENSUS and WP:BOLD) that determine whether IAR is being applied "appropriately". It seems like common sense, but only to those who have internalised the rules. —Ashley Y 03:04, 9 August 2008 (UTC) You seem to believe that editors are either entitled to exploit perceived technicalities or justified in believing that they can. Fortunately, that isn't how Wikipedia works. —David Levy 03:15, 9 August 2008 (UTC) Continually editing in a manner contrary to consensus usually is disruptive. Disruptive editing leads to blocks. —David Levy 22:30, 8 August 2008 (UTC) Yes, the rules explain that very well. —Ashley Y 22:32, 8 August 2008 (UTC) IAR doesn't state that ignoring a rule requires others to do so, nor does it state that ignoring the rules never carries any consequences. —David Levy 23:09, 8 August 2008 (UTC) Yes, consequences that are specified in the rules. —Ashley Y 23:35, 8 August 2008 (UTC) ...which others have no obligation to ignore. And of course, it's standard procedure to repeatedly warn editors acting in good faith before blocking them. 
—David Levy 02:41, 9 August 2008 (UTC) ...warn them that they've broken the rules, which IAR said they could ignore. —Ashley Y 03:04, 9 August 2008 (UTC) Warned that they're about to be blocked for disruption, which IAR didn't guarantee against. —David Levy 03:19, 9 August 2008 (UTC) Right. If a rule prevents you from improving or maintaining Wikipedia, ignore it, except that sometimes it might be considered "disruption" according to the rules, and you might get blocked for it. I think a policy is flawed if it ever encourages anything sanctionable. —Ashley Y 03:28, 9 August 2008 (UTC) Again, we don't block people for applying IAR; we block them for continuing to cause disruption after they've been repeatedly asked to stop. To explicitly warn against that in IAR would be to suspend the assumption that we're dealing with rational humans. —David Levy 03:43, 9 August 2008 (UTC) It's interesting that whenever anyone points out the deficiencies in the current wording, its defenders always adduce the idea of consensus. Which is fine, really, but it rather suggests consensus should be mentioned in the policy... —Ashley Y 23:15, 8 August 2008 (UTC) NO!!! Because if the policy said you needed consensus, you would never ignore rules! Consensus quite often adds up after someone ignores rules - and many other people realize they did something good - then the rule gets changed. IAR conveys the imperative to improve the encyclopedia. Not to improve your own personal idea of what's good for the encyclopedia, to improve the encyclopedia itself. You don't know it's good when you do it, you just think it is. It's only after two, five or five hundred other people chime in and say it was a good move that you can start to believe that maybe it really was a good thing to do. IAR is only ever confirmed by consensus. Franamax (talk) 23:26, 8 August 2008 (UTC) Oh yes, I agree with that. I'm a fan of WP:Bold. 
The trouble is when people improve the encyclopedia when they know consensus is against them. IAR says they should, because WP:Consensus is just another rule, but WP:Consensus says they shouldn't, and I think that's preferable. —Ashley Y 00:13, 9 August 2008 (UTC) ### Not sure It doesn't matter if you think consensus is against you - if you think you can make the encyclopedia better, do it! -- Are you sure you really mean that? Suppose I'm in an edit war over some article, and I hold the (quite reasonable) belief that a more accurate encyclopedia is an improved encyclopedia. Ten other editors disagree with my content change, and have demonstrated so in a revert war (against just me). So, I think consensus is against me, and I think I can make the encyclopedia better, that is, more accurate. Should I continue to revert? Oh sure, there's all the (very sensible) "stuff after that" that you mention, but that's in the (ignorable) rules. —Ashley Y 00:50, 9 August 2008 (UTC) Tell you what, Ashley, please keep ignoring WP:EDITWAR, since you've clearly got us beat here. You're completely right. You can ignore any rule you like, and there's not a damn thing we can do about it. You've successfully outwitted us all. Keep reverting the page until we learn our lesson.--Father Goose (talk) 00:47, 9 August 2008 (UTC) Um, have you considered taking a break from this policy/talk page? I'm pretty sure it's not worth making personal attacks over. —Ashley Y 00:59, 9 August 2008 (UTC) Well, keep in mind that a lot of drive-by editors come here to challenge policy and use similar arguments and go in circular directions just to be right. You could always try to ignore the rule of getting pissed off at a single flamy post and answering back, instead try a polite response (which you did do, I mean even more polite :). But back to the thread: The example you give is a good one because it embodies some of the misconceptions about IAR. 
The "reasonable" belief there is yours and yours alone - in that case, if you truly are reasonable, you need to consider whether your article belief truly is reasonable. Ten people disagree and not one other editor is on your side? So then, it's best to back off, seek out some other opinions, find some good reliable sources, work on the talk page, anything but get into an edit war. Think about it - if you're right, RS will support you and other people will agree; if you're wrong, push the Off button and take a walk, smell some flowers and pat a dog. Either way, sleep on it and try again tomorrow. If you take the other course of insisting you're right right now! and you just revert against ten other editors - well then you'll get blocked and you don't end up improving wikipedia anyway, do you? Franamax (talk) 01:19, 9 August 2008 (UTC) You ought to give more credit to those "drive-by editors". There's a reason this page attracts a lot more attempts to fix it than other policy pages do. My own approach, as you know, is to accept that a lot of their ideas are helpful and to at least record them for their merit. It's more in the spirit of AGF, and it helps to broaden perspectives. As for what to do in my hypothetical situation: it's mostly good advice, and discussed at length in the rules. The problem is, I believe, is that you and others (well, most of us) have been working on Wikipedia so long that you've internalised the rules as "common sense", when in fact it all needs to be learned. It's perfectly possible for ten people to be wrong and one person to be right, and to know that they are right. Internalised WP:Consensus, feeling like common sense, says "give it up if you can't achieve consensus", and this is a preferable course of action. But that's not what IAR says. It advises me to ignore WP:Consensus and improve the encyclopedia by making a change which I know makes it more accurate. 
I may well end up being blocked because I broke the rules, but I've been advised to ignore them. I think a policy fails if it advocates a course of action that leads to being blocked. —Ashley Y 01:40, 9 August 2008 (UTC) That post actually wasn't a personal attack, despite the sarcasm and link to WP:GIANTDICK. It was my way of pointing out that if you actually ignored every rule "...that this policy says we may ignore" in service of your points, you'd find yourself in the scenario outlined in WP:GIANTDICK. So replying to every point someone makes here with "this policy says we may ignore [that]" is empty. We know that. You know that. On Wikipedia, the only victories come from convincing your fellow editors, not outwitting them. The smart-alec replies you've been engaging in here are getting you nowhere.--Father Goose (talk) 01:53, 9 August 2008 (UTC) ### Well Well I think I'll follow Franamax's advice here. —Ashley Y 02:02, 9 August 2008 (UTC) Crucially, this policy has to exist -for- the serious editors, first and foremost. It is what grounds the most committed editors of the wiki, it's not a policy that should be oriented towards newbies. It is the fundamental principle for every one of us, experienced or not - be willing to ignore rules, be tolerant of those who ignore rules - if it improves the encyclopedia!! For newcomers, we can try to provide the explanatory essays, for the old hands, we need to leave the simple rule - IAR is a founding policy. I understand your desire to make things more clear. You can do that by improving the /Version page to the point where it's suitable for inclusion here. It will need a lot of work though. And beware of trying to explain things to newcomers. You pursue a laudable goal but consider the recently created "your first article" page. Observe the elaborate warnings about not creating your first article at the page, check what's shown in edit mode, check the warnings on the talk page. 
Now look at the page and talk page history, count for me the number of new articles created immediately adjacent to the warnings. Report back. WP:IAR is for all of Wikipedia, it can't be adjusted towards the people who only read the headings. It's too important. Franamax (talk) 03:00, 9 August 2008 (UTC) Who decides whether someone breaking a rule is improving the encyclopedia? And how can it be proved one way or the other? Richard Blatant (talk) 03:16, 9 August 2008 (UTC) Wikipedia:Consensus. And please don't argue that IAR grants users license to ignore that page. We don't write our policies/guidelines for the benefit of people who seek out and attempt to exploit loopholes to get their way. —David Levy 03:43, 9 August 2008 (UTC) Actually, the current wording of IAR does grant users license to ignore that page, and indeed someone might well do so in good faith. There have been many proposals to fix IAR to refer to CONSENSUS explicitly, but, you know... Oddly enough, Franamax was arguing earlier that IAR connotes ignoring consensus. Again, the wording could be changed to clarify this. —Ashley Y 04:17, 9 August 2008 (UTC) Not odd at all. Rules reflect consensus, rules are consensus. To ignore a rule, you must ipso facto be ignoring consensus. How else could you do it? That freedom must be there - but as has been said over and over and over, you are -not- free to ignore rules forever. Take your pick, read the essays; or ignore everything because this page told you to ignore it, march on to your block. Wikipedia editors need to have judgement, intelligence and common sense. Maybe this is the test. Franamax (talk) 04:38, 9 August 2008 (UTC) Indeed I am not free to ignore rules forever. But that's not what IAR says: it doesn't specify any limit, provided I am "improving or maintaining". And thus anyone who follows this policy exactly as written may well end up being blocked. 
Of course, we actually advise them to read it a particular way, but we don't put that advice actually in the policy because it would spoil its brevity, or something. —Ashley Y 04:52, 9 August 2008 (UTC) ### Continued Again, such a block won't be applied without providing a sufficient opportunity to read the explanatory pages, and the fact that that information doesn't appear on the policy pages doesn't create a loophole that enables editors to behave unreasonably. —David Levy 05:07, 9 August 2008 (UTC) IAR enables users to ignore all rules. I asked Richard not to argue that point because contextually, it refers to a scenario in which someone is deliberately being unreasonable. —David Levy 04:49, 9 August 2008 (UTC) Franamax, it seems to me that BOLD is what you first needed more than this page. I agree this page, like all policy, must be for all editors, and that's the source of the problem. It's not the fundamental principle actually, since on occasion it must yield to WP:CONSENSUS. For instance: • Don't try to improve Wikipedia when there's a clear consensus against your particular action. • Don't try to improve Wikipedia if it involves reverting an Office Action. Of course you could say (like David Levy), "you can do that, but then others can block you", but that misses the point of policy. If anything, policy shouldn't advise anything sanctionable. /Versions is mostly fine as it is, and I think the complaints come from misunderstanding. It's supposed to be no more than a set of alternative understandings. No particular entry should be taken seriously, and it's to its benefit that entries contradict each other to a certain degree. —Ashley Y 03:22, 9 August 2008 (UTC) If someone honestly believes that edit-warring against consensus or acting against office actions serves to improve or maintain Wikipedia, he/she has a fundamental misunderstanding that far exceeds anything that we can hope to address with this policy. 
—David Levy 03:43, 9 August 2008 (UTC) It's a "fundamental" misunderstanding only to someone who's internalised the rules. Why should anyone pay attention to an office action? Why is it such a big deal? The rules explain why. Wikipedia is an encyclopedia, on its face anything that improves its accuracy is an improvement. Actually it turns out that certain things, such as edit-warring against consensus even when you are right, do not count as improvement. This is not obvious if you don't know the rules, actually, better an article be right half the time, or clearly in a state of flux, than be stable and wrong. There are lots of other things that don't count as improvement... but you need to read the rules to find out what they are. —Ashley Y 04:28, 9 August 2008 (UTC) Why should anyone pay attention to an office action? Because we've explained to them the importance of doing so. You seem to be under the impression that our rules are a system of laws by which we govern and hand down punishments (and that by allowing people to ignore them, we eliminate our "legal" standing). This is entirely incorrect. Our rules exist simply to describe how Wikipedia works, so perceived loopholes are meaningless around here. We rely on contributors to behave reasonably, and if they don't, we needn't consult page 374, section 3, paragraph 8, line 2 to determine whether they're technically following the rules. We just do what makes sense. —David Levy 04:49, 9 August 2008 (UTC) That explanation is, in fact, one of the rules. It's in WP:CONSENSUS. You can call it a rule or an "explanation", but either way and contrary to IAR, it shouldn't be ignored even if you think you're improving Wikipedia that way. The rules exist to describe how Wikipedia works, including what will get you blocked. We rely on contributors to behave "reasonably", including interpreting "improve and maintain" in IAR in a particular way. 
In practice, though, "reasonably" means following the way Wikipedia works as described by the rules. It only seems like common sense when you've internalised the rules. —Ashley Y 05:01, 9 August 2008 (UTC) I don't even know what you're arguing anymore. —David Levy 05:07, 9 August 2008 (UTC) I'm arguing that WP:CONSENSUS shouldn't be ignored even if you think you're improving Wikipedia that way (contrary to IAR). —Ashley Y 05:37, 9 August 2008 (UTC) The places where we keep talking about the explanatory essays linked from the policy page - do you know what "essay", "linked" and "page" mean? Have you tried reading the essays linked from the policy page? Everything you're saying is already answered. The essential step is that you read it. Franamax (talk) 05:51, 9 August 2008 (UTC) Yes, it's all there. But why do you need explanatory essays? Why don't you just put the explanation on the page, like every other policy? —Ashley Y 05:55, 9 August 2008 (UTC) Why are you asking questions that already have been answered? —David Levy 05:57, 9 August 2008 (UTC) What's the answer? —Ashley Y 05:59, 9 August 2008 (UTC) I'm tired of repeating myself. You'll only ask again. —David Levy 06:00, 9 August 2008 (UTC) Seriously, what's the answer? Point to a diff if you like. Perhaps you are confusing you having already answered it with me having already asked it, but you didn't answer there either. —Ashley Y 06:12, 9 August 2008 (UTC) [de-indented] [11]David Levy 06:17, 9 August 2008 (UTC) "Users don't need to read the explanatory pages before applying IAR. They don't even need to read IAR before applying it, and they needn't read any rules before editing." But apparently they do. Why else is Franamax pointing to them, if they don't need to be read? There seems to be some inconsistency on this point. Whenever I point out that IAR is hard to understand, I'm told "just read the explanatory essays". So why not just put them on the policy page? 
Because apparently they're not necessary after all. Well, which is it? —Ashley Y 06:24, 9 August 2008 (UTC) 1. Again (and I see that it was futile to try to avoid repeating myself), we advise editors to read the explanatory essays when they appear not to understand the policy. They can also be helpful to those who choose to read them before applying IAR, but this is not required. 2. You overlooked the other relevant reply from that diff: "That would convey to users that they must read a lengthy page before ignoring rules (which simply isn't so)." —David Levy 06:37, 9 August 2008 (UTC) 1. But still, why not put it on the page? We put helpful explanatory text on all the other policies, and not all of it is required to be read. Generally, the "nutshell" will do, unless there's something one doesn't understand. Why not do the same thing here? It makes it clear that it has some kind of consensus, which cannot be assumed from looking at the essay tag. 2. In practice, people just follow the nutshell of a policy, and only "read a lengthy page" if they're having trouble understanding it. —Ashley Y 06:48, 9 August 2008 (UTC) IAR's text isn't a summary; it's the entire policy. It really is that simple, and that needs to remain clear. —David Levy 06:56, 9 August 2008 (UTC) But every other "entire policy" has all explanation and clarification included. Why do we put this on a separate page? After all, you could do the same thing with any other policy. Consider WP:NPOV. You could replace that with the single sentence "All Wikipedia articles and other encyclopedic content must be written from a neutral point of view, representing significant views fairly, proportionately, and without bias." And it would work as an "entire policy", but only provided everyone knew what it meant. It's the same with IAR. It works as an "entire policy" only if everyone knows what it means. 
But in practice, to understand exactly what "improve and maintain" means, or what applications of IAR are going to get you blocked, you might need some explanation. We have that explanation, it's even got some sort of consensus. Why not put it on the page like we do with WP:NPOV, and every other policy? —Ashley Y 07:05, 9 August 2008 (UTC) ### Relatively complicated WP:NPOV is a relatively complicated policy based on arbitrary standards. WP:IAR is advice to ignore rules when they prevent one from improving or maintaining Wikipedia. The former contains many intricacies that must be boiled down, while the latter is a refreshingly simple concept that can only be elaborated upon. IAR encourages users to edit without worrying about reading the very type of content that you seek to add to IAR. —David Levy 07:32, 9 August 2008 (UTC) Actually, NPOV is no more than "All Wikipedia articles and other encyclopedic content must be written from a neutral point of view, representing significant views fairly, proportionately, and without bias.", but, crucially, with a particular understanding of each one of those terms. It's that understanding that requires all the rest of the policy. It's the same with IAR: it only appears "refreshingly simple" because you have internalised a precise meaning of "improve and maintain". In practice it contains certain intricacies that can cause problems or get you blocked: WIARM explains some of these. Now most of the time people just aren't going to run into them, but sometimes they'll be unsure and then they'll need to read what would be the rest of the policy. It should be the same as any policy, where users read the full text only if they're unsure on some point. Otherwise, they just read the nutshell and don't worry about the rest. —Ashley Y 07:58, 9 August 2008 (UTC) We'll have to agree to disagree. —David Levy 08:13, 9 August 2008 (UTC) The "essay" status, after all, suggests it's not all that reliable. 
—Ashley Y 05:57, 9 August 2008 (UTC) No, that isn't what "essay" means. —David Levy 06:00, 9 August 2008 (UTC) Well, the suggestion is that they might not have consensus. —Ashley Y 06:05, 9 August 2008 (UTC) ### They may "They may range from personal or minority views to statements that enjoy a wide consensus amongst Wikipedia editors." (emphasis mine) [I experienced an edit conflict when attempting to post the above, due to the revision in which you changed "don't" to "might not."] —David Levy 06:17, 9 August 2008 (UTC) "Might not" is accurate. —Ashley Y 06:24, 9 August 2008 (UTC) Right, and "might not" = "might." —David Levy 06:37, 9 August 2008 (UTC) Quite. The "essay" tag simply provides no guarantee of any kind of consensus. Now, I believe WIARM has in fact some kind of consensus, but that's not obvious from looking at it. How are they supposed to know that it's a sensible explanation? —Ashley Y 06:50, 9 August 2008 (UTC) Feel free to propose that it be labeled a guideline. I would support such a proposal. —David Levy 06:56, 9 August 2008 (UTC) Ashley, we're engaging you in good faith here, but it really is getting a little tiresome. You are arguing from specifics to generalities. You're talking about office actions and asking why they should be important - you might as well ask why we have to click that pesky "edit" button, it only gets in the way of us improving the encyclopedia. You're now asking for every single rule, policy and convention to be completely laid out for you (or the hypothetical new editor). Surely you understand it's not possible? IAR cannot be a page listing every way to ignore or not ignore each and every rule; no more can there be any single page that lists every rule of Wikipedia. You miss the point that IAR helps new people to work in Wikipedia, they can do what they think is right, then someone else will help them to understand. You're moving now from what I thought was a focussed discussion towards everything but the kitchen sink. 
If you have a specific issue, please refocus to it. Otherwise, let's continue on your talk page or mine and archive this thread. It's been an interesting discussion but I don't think it's going anywhere relevant to this page at this point. Franamax (talk) 04:55, 9 August 2008 (UTC) ### Straw I'm not that straw man. The principle of IAR is important, but it's not the most fundamental rule, and its current wording is flawed. It's not a binary choice between the current brief wording and a grand listing of every possible exception. IAR must yield to WP:CONSENSUS (the policy, not the abstract idea of consensus). Sure, it seems like any violation of WP:CONSENSUS is obviously not an improvement, but that "common sense" comes precisely from internalising the rules, either as written or by experience editing. For example, we used to have "working with others" to suggest consensus, an improvement along these lines, but people were worried that it might suggest one had to get consensus before making a change. The best I can say is, go read /Versions, and think about each entry and what it's trying to say. Sure, many of them are silly, but some offer insight not found in the twelve words. —Ashley Y 05:12, 9 August 2008 (UTC) 1. IAR enables users to ignore all rules. They needn't be familiar with a single one to improve or maintain Wikipedia. 2. Again, users become aware of the importance of honoring consensus when we explain it to them. And again, our rules have no loopholes. We don't expect users to follow rules because they have blue/green ticks above them; we expect them to follow rules when it's reasonable to do so. Anyone who believes that they can get away with something controversial via the technicality that "IAR doesn't say that I can't!" is playing a childish game that he/she cannot win. We needn't modify our guidelines and policies for such individuals' benefit. —David Levy 05:25, 9 August 2008 (UTC) 1. 
They needn't be familiar with a rule to improve or maintain Wikipedia, as long as, by and large, they don't actually break it. If they know what they are doing breaks WP:CONSENSUS, even if they quite reasonably think they are improving Wikipedia, they should not ignore it. 2. Yes, but that explanation is codified in a policy document, which gives it its importance. It's not about people "getting away with something controversial", it's about someone improving Wikipedia with "OMG THE TRUTH" (or whatever they use to justify their edit war) in good faith, but in violation of WP:CONSENSUS. Just offhand, can you think of any instance where WP:CONSENSUS should be ignored per IAR? —Ashley Y 05:33, 9 August 2008 (UTC)

Sure, whenever someone edits Wikipedia without reading it. —David Levy 07:32, 9 August 2008 (UTC)

That's rather a different sense of "ignore" than is used by IAR. Can you think of any instance where WP:CONSENSUS is understood, but should be disregarded per IAR? Or if you prefer, any instance when WP:CONSENSUS prevents you from improving or maintaining Wikipedia, and should thus be ignored? —Ashley Y 07:58, 9 August 2008 (UTC)

1. No, the non-requirement to familiarize oneself with rules before editing is an important application of IAR. 2. I'm not personally familiar with such a scenario, but that doesn't mean that one couldn't arise. —David Levy 08:13, 9 August 2008 (UTC)

So I can be blocked when abiding by the rules by a person with the power to block me if he thinks blocking me improves the encyclopedia. His justification would be that I didn't ignore the rules when I should have. On the other hand, someone with the power to block can allow someone else to break rules if he subjectively likes what they're doing to the encyclopedia. I'm starting to think that this policy is really nothing more than a way for administrators to have special privilege to block editors arbitrarily, and have undue influence over the content of articles.
Richard Blatant (talk) 03:28, 9 August 2008 (UTC) Wow. —David Levy 03:43, 9 August 2008 (UTC) Richard Blatant, do you have a point that you can sum up in ten words or less? If you have a problem with administrator powers or judgement, this is not the place to discuss it. If that's your issue, say so, we'll direct you to the appropriate forum. If you just want to argue in circles, please stop. You are not making a productive contribution, and you're getting in the way of others who wish to do so. Franamax (talk) 04:18, 9 August 2008 (UTC) Richard Blatant, are you accusing certain admins of being WP:ROUGE? ;-) --Kim Bruning (talk) 02:06, 23 October 2008 (UTC) ### One more question I was just trying to understand the policy. I just summed up my conclusion above. I think I have a pretty good grasp on what the policy is all about now. Richard Blatant (talk) 04:26, 9 August 2008 (UTC) No, you really don't. —David Levy 04:49, 9 August 2008 (UTC) If so, only because you can't really understand the policy from the current wording. —Ashley Y 05:22, 9 August 2008 (UTC) One more question. If someone gets blocked, they should ignore the rule against violating a block by coming back with a different IP and resuming editing because being blocked prevents them from improving the encyclopedia, correct? Richard Blatant (talk) 04:45, 9 August 2008 (UTC) What do you intend to achieve via such rhetoric? —David Levy 04:49, 9 August 2008 (UTC) Understanding. What do you intend to achieve with yours? Richard Blatant (talk) 04:55, 9 August 2008 (UTC) My purpose is to thoroughly understand the policy. I already said that. How am I disrupting? Are you going to block me now under "Ignore all rules"? Richard Blatant (talk) 05:04, 9 August 2008 (UTC) I urge you to read Wikipedia:Understanding IAR. 
If, after you've read it, you feel you still don't understand the policy, let me know; I'll have to improve it.--Father Goose (talk) 06:02, 9 August 2008 (UTC)

And since you've expressed worries about being blocked several times now Richard, and since around 10% of your edits in the 40 or so days you've been registered on Wikipedia have been to this page [12] - is there anything more you'd like to tell us? Franamax (talk) 06:18, 9 August 2008 (UTC)

Since I'm new, doesn't it make sense for me to get acquainted with Wikipedia policies by asking questions? This is not the only policy page I've been asking questions about. If you have a problem with me asking questions, then don't answer them. Let someone else with a better attitude than you do it. Richard Blatant (talk) 15:26, 9 August 2008 (UTC)

## Cleanup

(51 intermediate revisions not shown.) /NewbyG (talk) 04:12, 9 August 2008 (UTC)

Yes, thanks for the rollback, a lot of, ummm, stuff has floated under the bridge. However, I moved back nine steps to this version which I felt was the last good one. It lacks the reference to "Jimbo says", which I feel to be an improvement. Sad to say, it seems that 42 revisions yielded only that benefit - and that benefit being only in my eyes! Sadder to say, it is almost looking like page protection may be needed shortly. I hope not, we do have some relatively good discussion happening. Franamax (talk) 04:25, 9 August 2008 (UTC)

Bah. Protection is a last resort, and we're still in the first stage of dispute resolution. I'm de-sysopping the first admin that protects this page.--Father Goose (talk) 05:19, 9 August 2008 (UTC)

With my magic wand, if you must know. Protection is unnecessary. If anything, the flux helps people appreciate the different approaches people have. —Ashley Y 05:22, 9 August 2008 (UTC)

Indeed. The brief flurry has subsided. As long as talk is proceeding and reverts are quiescent, the wiki-world is unfolding as it should. I was worried about the trend-line.
The trend no longer exists and I am heading to bed soon anyway. :) Franamax (talk) 06:06, 9 August 2008 (UTC) ## Merge I think this should be merged with Wikipedia:What "Ignore all rules" means or the other way around. All this page says is "If a rule prevents you from improving or maintaining Wikipedia, ignore it." and it is indeed very common that people take it the wrong way. What's the point of having the meaning on a separate page if this whole page is only one sentence? TheBlazikenMaster (talk) 19:08, 15 October 2008 (UTC) WP:IAR is one of the founding principles of Wikipedia. It is a very simple statement open to wide interpretation, and hopefully will never change. WIARM is one of the efforts to add explanations to IAR, however, it is not part of IAR. IAR stands all by itself without decoration. We can all try to interpret it, but we can never pin down exactly what it means - that is what makes Wikipedia a living thing. Franamax (talk) 02:43, 16 October 2008 (UTC) ## stupid policy dumb. —Preceding unsigned comment added by Phil Ian Manning (talkcontribs) 07:06, 9 September 2008 (UTC) As opposed to your intelligent eloquent argument against it? Chillum 13:31, 9 September 2008 (UTC) What is the point of this policy if EVERYTHING that doesn't follow the ordinary rules gets deleted? Can you provide an example where somebody Ignored All Rules and improved (by wiki's standards) Wikipedia? 24.218.12.158 (talk) 00:56, 2 October 2008 (UTC) Every time an anonymous or new editor adds useful information in unformatted or poor writing, it is IAR in action. 133.83.2.71 (talk) 02:14, 16 October 2008 (UTC) That is very true and we encourage it. In fact, there are many many dedicated editors who look at each and every one of those anonymous and new edits and try to improve them and find sources to back them up. 
It's true that sometimes, the editors get tired and delete things - ideally, the original editor would do the groundwork themself to make sure their edit sticks around. Doesn't matter though, it's the encyclopedia anyone can edit. However, it is not the encyclopedia where anyone can expect their own particular change to actually live more than a day! Franamax (talk) 02:37, 16 October 2008 (UTC) As much as I admire the above points expressing the beauty of the freedom of speech Wikipedia is trying to attain to, it is a bit of a clunky rule, if not even worse. Does ignoring all rules include ignoring the three core content policies if it "improves" Wikipedia's freedom of speech? Can I ignore the rule to ignore all the rules if that makes maintaining Wikipedia a less argumentative task? I'm concerned that this rule may give vandals the justification they desire and makes policing Wikipedia more difficult. In conclusion, I agree with the first post: Dumb.POVreferee (talk) 04:12, 19 January 2009 (UTC) The idea for IAR is that one can ignore the rules if it improves the project. In my opinion, it negates the notability guidelines in the case of articles with some citations, but still allows for deletion of articles that are patent nonsense, or vanity pages, etc. In fact, it really should negate guidelines, as it is a policy. The improvement is based on whether or not there is useful content being added. Tealwisp (talk) 04:29, 19 January 2009 (UTC) In response to POVreferee: of course, IAR only means to ignore rules within reason. It doesn't give people justification to post "PENIS PENIS PENIS" in the middle of an article and then say "but guys I was ignoring the rules." An easy way to think about it is that IAR just means "use your brain." Politizer talk/contribs 04:35, 19 January 2009 (UTC) ## Improve and maintain what? This is one of Wikipedia's most beautiful policies. Its one line is succinct and to the point and provides a framework for every single edit. 
However, it has a couple of related flaws: 1. Its title does not completely reflect that beautiful line. 2. When we say "improve and maintain Wikipedia", what does that mean?

Let's focus on the second one. I think we all know what it doesn't mean. It doesn't mean "improve and maintain Wikipedia's reputation as an unreliable website full of trivia" (I don't mean to imply that I think it is!) It doesn't mean "improve and maintain the status quo on disputed articles" or "improve and maintain our resilience against those who have other viewpoints". So maybe it would be helpful to add a word like "quality" here, along the lines of "maintain and improve the quality of Wikipedia". This might also help to address the first point by allowing editors to refer to this guideline not merely as an excuse to ignore a troublesome rule, but to emphasise that their edit has improved the quality of the encyclopedia, and that is what Pillar Five is really all about. Geometry guy 22:16, 14 November 2008 (UTC)

Earlier wording referred to "Wikipedia's quality," and this was shortened to "Wikipedia" for the sake of brevity. I'd be fine with changing it back. —David Levy 23:08, 14 November 2008 (UTC)

I think that the word "Wikipedia" describes the project very well. It is not just Wikipedia's quality we should improve and maintain, but also its quantity, safety, and other intrinsic qualities it may possess. Chillum 01:18, 29 January 2009 (UTC)
https://in.mathworks.com/help/matlab/ref/polarplot.html
# polarplot Plot line in polar coordinates ## Syntax ``polarplot(theta,rho)`` ``polarplot(theta,rho,LineSpec)`` ``polarplot(theta1,rho1,...,thetaN,rhoN)`` ``polarplot(theta1,rho1,LineSpec1,...,thetaN,rhoN,LineSpecN)`` ``polarplot(rho)`` ``polarplot(rho,LineSpec)`` ``polarplot(Z)`` ``polarplot(Z,LineSpec)`` ``polarplot(___,Name,Value)`` ``polarplot(pax,___)`` ``p = polarplot(___)`` ## Description example ````polarplot(theta,rho)` plots a line in polar coordinates, with `theta` indicating the angle in radians and `rho` indicating the radius value for each point. The inputs must be vectors with equal length or matrices with equal size. If the inputs are matrices, then `polarplot` plots columns of `rho` versus columns of `theta`. Alternatively, one of the inputs can be a vector and the other a matrix as long as the vector is the same length as one dimension of the matrix.``` example ````polarplot(theta,rho,LineSpec)` sets the line style, marker symbol, and color for the line.``` ````polarplot(theta1,rho1,...,thetaN,rhoN)` plots multiple `rho,theta` pairs.``` ````polarplot(theta1,rho1,LineSpec1,...,thetaN,rhoN,LineSpecN)` specifies the line style, marker symbol, and color for each line.``` example ````polarplot(rho)` plots the radius values in `rho` at evenly spaced angles between 0 and 2π.``` ````polarplot(rho,LineSpec)` sets the line style, marker symbol, and color for the line.``` example ````polarplot(Z)` plots the complex values in `Z`.``` ````polarplot(Z,LineSpec)` sets the line style, marker symbol, and color for the line.``` ````polarplot(___,Name,Value)` specifies properties of the chart line using one or more `Name,Value` pair arguments. The property settings apply to all the lines. You cannot specify different property values for different lines using `Name,Value` pairs.``` ````polarplot(pax,___)` uses the `PolarAxes` object specified by `pax`, instead of the current axes.``` example ````p = polarplot(___)` returns one or more chart line objects. 
Use `p` to set properties of a specific chart line object after it is created. For a list of properties, see Line Properties.```

## Examples

Plot a line in polar coordinates.

```theta = 0:0.01:2*pi;
rho = sin(2*theta).*cos(2*theta);
polarplot(theta,rho)```

Create the data to plot.

```theta = linspace(0,360,50);
rho = 0.005*theta/10;```

Convert the values in `theta` from degrees to radians. Then, plot the data in polar coordinates.

```theta_radians = deg2rad(theta);
polarplot(theta_radians,rho)```

Plot two lines in polar coordinates. Use a dashed line for the second line.

```theta = linspace(0,6*pi);
rho1 = theta/10;
polarplot(theta,rho1)
rho2 = theta/12;
hold on
polarplot(theta,rho2,'--')
hold off```

Specify only the radius values, without specifying the angle values. `polarplot` plots the radius values at equally spaced angles that span from 0 to $2\pi$. Display a circle marker at each data point.

```rho = 10:5:70;
polarplot(rho,'-o')```

Create a polar plot using negative radius values. By default, `polarplot` reflects negative values through the origin.

```theta = linspace(0,2*pi);
rho = sin(theta);
polarplot(theta,rho)```

Change the limits of the r-axis so it ranges from -1 to 1.

`rlim([-1 1])`

Create a polar plot using a red line with circle markers.

```theta = linspace(0,2*pi,25);
rho = 2*theta;
polarplot(theta,rho,'r-o')```

Create a polar plot and return the chart line object.

```theta = linspace(0,2*pi,25);
rho = 2*theta;
p = polarplot(theta,rho);```

Change the line color and width and add markers.

```p.Color = 'magenta';
p.Marker = 'square';
p.MarkerSize = 8;```

Plot complex values in polar coordinates. Display markers at each point without a line connecting them.

```Z = [2+3i 2 -1+4i 3-4i 5+2i -4-2i -2+3i -2 -3i 3i-2i];
polarplot(Z,'*')```

## Input Arguments

Angle values, specified as a vector or matrix. Specify the values in radians. To convert data from degrees to radians, use `deg2rad`.
To change the limits of the theta-axis, use `thetalim`. Example: `[0 pi/2 pi 3*pi/2 2*pi]`

Radius values, specified as a vector or matrix. By default, negative values are reflected through 0. A point is reflected by taking the absolute value of its radius, and adding 180 degrees to its angle. To change the limits of the r-axis, use `rlim`. Example: `[1 2 3 4 5]`

Complex values, specified as a vector or matrix where each element is of the form `rho*e^(i*theta)` or `x+iy`, where:

• `rho = sqrt(x^2+y^2)`
• `theta = atan(y/x)`

Example: `[1+2i 3+4i 3i]`

Line style, marker, and color, specified as a character vector or string containing symbols. The symbols can appear in any order. You do not need to specify all three characteristics (line style, marker, and color). For example, if you omit the line style and specify the marker, then the plot shows only the marker and no line. Example: `'--or'` is a red dashed line with circle markers.

| Line Style | Description |
| --- | --- |
| `-` | Solid line |
| `--` | Dashed line |
| `:` | Dotted line |
| `-.` | Dash-dot line |

| Marker | Description |
| --- | --- |
| `'o'` | Circle |
| `'+'` | Plus sign |
| `'*'` | Asterisk |
| `'.'` | Point |
| `'x'` | Cross |
| `'_'` | Horizontal line |
| `'\|'` | Vertical line |
| `'s'` | Square |
| `'d'` | Diamond |
| `'^'` | Upward-pointing triangle |
| `'v'` | Downward-pointing triangle |
| `'>'` | Right-pointing triangle |
| `'<'` | Left-pointing triangle |
| `'p'` | Pentagram |
| `'h'` | Hexagram |

| Color | Description |
| --- | --- |
| `y` | yellow |
| `m` | magenta |
| `c` | cyan |
| `r` | red |
| `g` | green |
| `b` | blue |
| `w` | white |
| `k` | black |

`PolarAxes` object. You can modify the appearance and behavior of a `PolarAxes` object by setting its properties. For a list of properties, see PolarAxes Properties.

### Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`. Example: `'LineWidth',3`

`Name,Value` pair settings apply to all the lines plotted.
You cannot specify different `Name,Value` pairs for each line using this syntax. Instead, return the chart line objects and use dot notation to set the properties for each line. The properties listed here are only a subset. For a full list, see Line Properties.

Line color, specified as an RGB triplet, a hexadecimal color code, a color name, or a short name. For a custom color, specify an RGB triplet or a hexadecimal color code.

• An RGB triplet is a three-element row vector whose elements specify the intensities of the red, green, and blue components of the color. The intensities must be in the range `[0,1]`; for example, `[0.4 0.6 0.7]`.
• A hexadecimal color code is a character vector or a string scalar that starts with a hash symbol (`#`) followed by three or six hexadecimal digits, which can range from `0` to `F`. The values are not case sensitive. Thus, the color codes `'#FF8800'`, `'#ff8800'`, `'#F80'`, and `'#f80'` are equivalent.

Alternatively, you can specify some common colors by name. This table lists the named color options, the equivalent RGB triplets, and hexadecimal color codes.

| Color Name | Short Name | RGB Triplet | Hexadecimal Color Code |
| --- | --- | --- | --- |
| `'red'` | `'r'` | `[1 0 0]` | `'#FF0000'` |
| `'green'` | `'g'` | `[0 1 0]` | `'#00FF00'` |
| `'blue'` | `'b'` | `[0 0 1]` | `'#0000FF'` |
| `'cyan'` | `'c'` | `[0 1 1]` | `'#00FFFF'` |
| `'magenta'` | `'m'` | `[1 0 1]` | `'#FF00FF'` |
| `'yellow'` | `'y'` | `[1 1 0]` | `'#FFFF00'` |
| `'black'` | `'k'` | `[0 0 0]` | `'#000000'` |
| `'white'` | `'w'` | `[1 1 1]` | `'#FFFFFF'` |
| `'none'` | Not applicable | Not applicable | No color |

Here are the RGB triplets and hexadecimal color codes for the default colors MATLAB® uses in many types of plots.

| RGB Triplet | Hexadecimal Color Code |
| --- | --- |
| `[0 0.4470 0.7410]` | `'#0072BD'` |
| `[0.8500 0.3250 0.0980]` | `'#D95319'` |
| `[0.9290 0.6940 0.1250]` | `'#EDB120'` |
| `[0.4940 0.1840 0.5560]` | `'#7E2F8E'` |
| `[0.4660 0.6740 0.1880]` | `'#77AC30'` |
| `[0.3010 0.7450 0.9330]` | `'#4DBEEE'` |
| `[0.6350 0.0780 0.1840]` | `'#A2142F'` |

Line style, specified as one of the options listed in this table.
| Line Style | Description |
| --- | --- |
| `'-'` | Solid line |
| `'--'` | Dashed line |
| `':'` | Dotted line |
| `'-.'` | Dash-dotted line |
| `'none'` | No line |

Line width, specified as a positive value in points, where 1 point = 1/72 of an inch. If the line has markers, then the line width also affects the marker edges. The line width cannot be thinner than the width of a pixel. If you set the line width to a value that is less than the width of a pixel on your system, the line displays as one pixel wide.

Marker symbol, specified as one of the values listed in this table. By default, the object does not display markers. Specifying a marker symbol adds markers at each data point or vertex.

| Value | Description |
| --- | --- |
| `'o'` | Circle |
| `'+'` | Plus sign |
| `'*'` | Asterisk |
| `'.'` | Point |
| `'x'` | Cross |
| `'_'` | Horizontal line |
| `'\|'` | Vertical line |
| `'square'` or `'s'` | Square |
| `'diamond'` or `'d'` | Diamond |
| `'^'` | Upward-pointing triangle |
| `'v'` | Downward-pointing triangle |
| `'>'` | Right-pointing triangle |
| `'<'` | Left-pointing triangle |
| `'pentagram'` or `'p'` | Five-pointed star (pentagram) |
| `'hexagram'` or `'h'` | Six-pointed star (hexagram) |
| `'none'` | No markers |

Marker size, specified as a positive value in points, where 1 point = 1/72 of an inch.

Marker fill color, specified as `'auto'`, an RGB triplet, a hexadecimal color code, a color name, or a short name. The `'auto'` option uses the same color as the `Color` property of the parent axes. If you specify `'auto'` and the axes plot box is invisible, the marker fill color is the color of the figure. For a custom color, specify an RGB triplet or a hexadecimal color code.

• An RGB triplet is a three-element row vector whose elements specify the intensities of the red, green, and blue components of the color. The intensities must be in the range `[0,1]`; for example, `[0.4 0.6 0.7]`.
• A hexadecimal color code is a character vector or a string scalar that starts with a hash symbol (`#`) followed by three or six hexadecimal digits, which can range from `0` to `F`. The values are not case sensitive.
Thus, the color codes `'#FF8800'`, `'#ff8800'`, `'#F80'`, and `'#f80'` are equivalent. Alternatively, you can specify some common colors by name; the named color options, equivalent RGB triplets, and hexadecimal color codes are the same as those listed above for `Color`, as are the default colors MATLAB uses in many types of plots.

## Tips

• To convert data from degrees to radians, use `deg2rad`. To convert data from radians to degrees, use `rad2deg`.
• You can modify polar axes properties to customize the chart. For a list of properties, see PolarAxes Properties.
• To plot additional data in the polar axes, use the `hold on` command. However, you cannot plot data that requires Cartesian axes in a polar chart.

Introduced in R2016a
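Several numeric conventions in this reference can be cross-checked mechanically: a negative radius is reflected to (|rho|, theta + 180°), a complex input is read as rho·e^(i·theta), and a three-digit hexadecimal color code such as `'#F80'` is equivalent to its six-digit form. The sketch below, in Python rather than MATLAB and for illustration only, mirrors those rules; the helper names are ours (not any MATLAB or Python API), and theta is computed with `atan2` rather than the tabulated `atan(y/x)` so that all quadrants are handled.

```python
import math

def reflect_negative_radius(rho, theta):
    # polarplot's documented rule: a point with a negative radius is
    # plotted at abs(rho) with 180 degrees (pi radians) added to its angle.
    if rho < 0:
        return abs(rho), theta + math.pi
    return rho, theta

def complex_to_polar(z):
    # Interpret x + iy as rho*e^(i*theta):
    # rho = sqrt(x^2 + y^2), theta = atan2(y, x).
    return abs(z), math.atan2(z.imag, z.real)

def expand_hex(code):
    # '#F80' and '#f80' are equivalent to '#FF8800': each of the three
    # digits is doubled, and case is not significant.
    digits = code.lstrip('#').upper()
    if len(digits) == 3:
        digits = ''.join(d * 2 for d in digits)
    return '#' + digits
```

These are reading aids for the conventions above, not replacements for the MATLAB functions (`deg2rad`, `rlim`, and friends) that the reference describes.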
https://jira.lsstcorp.org/browse/DM-9556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
# All NaNs in coord_ra and coord_dec columns in deepCoadd forced src tables

#### Details

• Type: Bug
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels: None
• Story Points: 1
• Team: Data Release Production

#### Description

In recent runs of the stack through multiBandDriver.py, the persisted forced src tables for the coadds are not getting ra and dec set properly (all entries for the coord_ra and coord_dec columns are NaN). Looking back at a run in mid-Nov, 2016, these numbers were indeed set properly in the forced tables. Assuming this was not intentional, track down the cause and fix it such that these values get set properly for the persisted forced src tables.

#### Activity

Lauren MacArthur created issue.

John Swinbank added a comment (edited): I'm guessing this originates in DM-8210, where, on 448121a4, the copyColumns config which included coord_ra and coord_dec was removed as a duplicate. We do still have a copyColumns in ForcedMeasurementConfig, but it doesn't mention these fields. Reading DM-8210, I'm confused about what's actually supposed to be going on here. Jim Bosch, maybe you could clarify the intention?
Field changes: Component/s → meas_base; Epic Link → DM-8306; Story Points → 1.

Jim Bosch added a comment: I think I was assuming that the forced photometry tasks would explicitly set the coord fields from the reference catalog, but it appears that's not the case - reference catalog centroids are now added to the forced catalogs via the base_TransformedCentroid plugins in meas_base, which basically act like a centroiding algorithm that just looks up the coord fields from the reference catalog and copies them into the centroid fields in the forced catalog. It appears that in the past we were actually relying on copyColumns to do that. So, one way to fix this would be to just add these to copyColumns in the ForcedMeasurementTask config defaults. Another would be to add a forced-measurement version of the base_SkyCoord algorithm in meas_base, which is what we use in single-frame measurement to fill the coord fields. Or it might be best to have base_TransformedCentroid do that itself; I can't think of a reason why it shouldn't. I think any of these is fine - they're all a bit ugly, IMO, and maybe copyColumns is worst just because it can lead to the kinds of problems we saw in DM-8210 if we use it to overwrite existing columns - but the ultimate reason this is difficult is because I think it's intrinsically tricky (but of course incredibly useful) to have the coord columns at all in what is otherwise a table containing only raw, uncalibrated quantities.

John Swinbank added a comment: Thanks Jim! Do we care about deblend_nchild? It looks to have been dropped in the same commit as the RA/dec.

Jim Bosch added a comment (edited): I think that was correctly dropped - there is no deblending on CCD-level forced photometry right now, so I think that field would be more misleading than helpful.

Lauren MacArthur added a comment: Ummm...I care! I've been reading it in from the meas catalogs. They also have a few other fields I've been using that don't get persisted/copied to the forced cats, so I may not be able to avoid reading in both for my current use case, but it would be nice to have (assuming there's no fundamental reason to leave it out that I'm missing).
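The copyColumns mechanism under discussion amounts to copying selected reference-catalog fields into each forced-measurement record under (possibly renamed) keys. A rough illustration only, using plain dicts rather than the actual lsst.afw table API; the mapping shown is the ForcedMeasurementConfig default plus the two coord fields whose omission caused the NaNs reported in this ticket:

```python
# Default mapping from ForcedMeasurementConfig, extended with the
# coord fields: reference column name -> output column name.
copy_columns = {
    "id": "objectId",
    "parent": "parentObjectId",
    "coord_ra": "coord_ra",
    "coord_dec": "coord_dec",
}

def copy_reference_columns(ref_record, out_record, mapping):
    # Copy each reference field named by a key in `mapping` into the
    # output record under the mapped name (dict-based sketch of what
    # copyColumns configures, not the real afw table code).
    for ref_name, out_name in mapping.items():
        out_record[out_name] = ref_record[ref_name]
    return out_record

ref = {"id": 42, "parent": 0, "coord_ra": 2.13, "coord_dec": -0.55}
forced = copy_reference_columns(ref, {}, copy_columns)
```

With coord_ra and coord_dec absent from the mapping, the output record simply never receives coordinates, which is consistent with the all-NaN columns described above.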
Show Lauren MacArthur added a comment - Ummm...I care! I've been reading it in from the meas catalogs. They also have a few other fields I've been using that don't get persisted/copied to the forced cats, so I may not be able to avoid reading in both for my current use case, but it would be nice to have (assuming there's no fundamental reason to leave it out that I'm missing). Link This issue blocks DM-9907 [ DM-9907 ] Hide Paul Price added a comment - price@pap-laptop:~/LSST/meas_base (tickets/DM-9556=) $git sub-patch commit 8b143a2191cdbe980bfb725296cc743cb969ea2a Author: Paul Price Date: Fri Mar 31 11:48:21 2017 -0400 forcedMeasurement: copy coordinates Forced measurement catalogs currently have no RA,Dec. These used to be copied from the reference catalog, but commit 448121a4 removed this behaviour. Restoring it. Deliberately not restoring the 'deblend_nchild' since there is no deblending in forced measurement so that copying that field would be more misleading than helpful. diff --git a/python/lsst/meas/base/forcedMeasurement.py b/python/lsst/meas/base/forcedMeasurement.py index 52dcbb3..18879f1 100644 --- a/python/lsst/meas/base/forcedMeasurement.py +++ b/python/lsst/meas/base/forcedMeasurement.py @@ -163,7 +163,8 @@ class ForcedMeasurementConfig(BaseMeasurementConfig): copyColumns = lsst.pex.config.DictField( keytype=str, itemtype=str, doc="Mapping of reference columns to source columns", - default={"id": "objectId", "parent": "parentObjectId"} + default={"id": "objectId", "parent": "parentObjectId", + "coord_ra": "coord_ra", "coord_dec": "coord_dec"} ) checkUnitsParseStrict = lsst.pex.config.Field( Show Paul Price added a comment - price@pap-laptop:~/LSST/meas_base (tickets/DM-9556=)$ git sub-patch commit 8b143a2191cdbe980bfb725296cc743cb969ea2a Author: Paul Price <price@astro.princeton.edu> Date: Fri Mar 31 11:48:21 2017 -0400   forcedMeasurement: copy coordinates Forced measurement catalogs currently have no RA,Dec. 
Reviewers: Jim Bosch. Status: To Do → In Review. Assignee: Paul Price.

Jim Bosch added a comment - Looks good!

Status: In Review → Reviewed.

John Swinbank added a comment - Please make sure that Lauren MacArthur agrees with the decision not to restore deblend_nchild before you merge.

Lauren MacArthur added a comment - I would really like it in the coadd forced catalogs (where deblending is done, correct?)
Paul Price added a comment - Revised:

```
price@pap-laptop:~/LSST/meas_base (tickets/DM-9556=) $ git sub-patch
commit 94f876db450f3494ff1ef8d737c696a4586f7836
Author: Paul Price <price@astro.princeton.edu>
Date:   Fri Mar 31 11:48:21 2017 -0400

    forcedMeasurement: copy coordinates, deblend_nChild

    Forced measurement catalogs currently have no RA,Dec or deblend_nChild.
    These used to be copied from the reference catalog, but commit 448121a4
    removed this behaviour. Restoring it.

diff --git a/python/lsst/meas/base/forcedMeasurement.py b/python/lsst/meas/base/forcedMeasurement.py
index 52dcbb3..1c97858 100644
--- a/python/lsst/meas/base/forcedMeasurement.py
+++ b/python/lsst/meas/base/forcedMeasurement.py
@@ -163,7 +163,8 @@ class ForcedMeasurementConfig(BaseMeasurementConfig):
     copyColumns = lsst.pex.config.DictField(
         keytype=str, itemtype=str,
         doc="Mapping of reference columns to source columns",
-        default={"id": "objectId", "parent": "parentObjectId"}
+        default={"id": "objectId", "parent": "parentObjectId", "deblend_nChild": "deblend_nChild",
+                 "coord_ra": "coord_ra", "coord_dec": "coord_dec"}
     )
     checkUnitsParseStrict = lsst.pex.config.Field(
```
Paul Price added a comment - Also need:

```
price@pap-laptop:~/LSST/meas_modelfit (tickets/DM-9556=) $ git sub
commit f2c37e1e4543ce5b5ffe670d2c149d8c426ca6b9
Author: Paul Price <price@astro.princeton.edu>
Date:   Fri Mar 31 13:35:24 2017 -0400

    tests: adapt to ForcedMeasurementConfig changes

    ForcedMeasurementConfig now by default copies fields that aren't
    present in our basic test catalogs, so set the list of columns to copy
    to what we do have (the same setting as was default previously).

 tests/testDoubleShapeletPsfApprox.py         | 1 +
 tests/testGeneralShapeletPsfApproxPlugins.py | 1 +
 2 files changed, 2 insertions(+)
```

Plus similar changes in meas_extensions_photometryKron.
Component/s: meas_extensions_convolved, meas_extensions_photometryKron, meas_modelfit, pipe_tasks.

Paul Price added a comment - Jim Bosch, there are four new changes to review, required in tests of forced measurement because we changed the defaults to require a field which isn't present without some extra effort. Changes are in meas_modelfit, meas_extensions_photometryKron, meas_extensions_convolved and pipe_tasks. They're pretty simple, but please have a quick look at them before I merge.

Status: Reviewed → In Review.

Jim Bosch added a comment - Looks fine. Too bad it's so fragile, but I don't see a simple way to fix that, and it's certainly out of scope here.

Status: In Review → Reviewed.

Paul Price added a comment - Thanks Jim. Merged to master.

Resolution: Done. Status: Reviewed → Done.

#### People
Assignee: Paul Price
Reporter: Lauren MacArthur
Reviewers: Jim Bosch
Watchers: Jim Bosch, John Swinbank, Lauren MacArthur, Paul Price
https://mathoverflow.net/questions/156040/weak-assassins-and-essential-morphisms
# Weak assassins and essential morphisms Let $R$ be a commutative ring and let $M\rightarrow N$ be an essential morphism of $R$-modules. Then, $M$ and $N$ have the same associated primes. Over non-noetherian rings the notion of associated primes does not behave very well, and it is sometimes appropriate to rather consider the so-called weakly associated primes of a module. (A prime ideal of $R$ is weakly associated with an $R$-module $L$ if it is a minimal prime of the annihilator of an element of $L$; see Bourbaki, AC.IV.1 Exer. 17, for basic facts about this notion.) Unfortunately, if $M\rightarrow N$ is an essential morphism of $R$-modules then $M$ and $N$ need not have the same weakly associated primes. (This happens for example over every non-noetherian valuation ring with maximal ideal of finite type.) This leads to the following question: What are examples of (classes of) non-noetherian rings over which weakly associated primes do not change along essential morphisms? EDIT: A sufficient condition is that weakly associated primes and associated primes coincide for every module. This happens e.g. for one-dimensional valuation rings. So, a sub-question of the above is the following: What are examples of (classes of) non-noetherian rings over which weakly associated primes coincide with associated primes for every module? EDIT 2: Neil pointed out, and rightly so, that in the previous edit I wrote some nonsense. Weakly associated primes and associated primes coincide for every module for example if $R$ is a local domain such that every non-zero ideal of $R$ contains a power of the maximal ideal. What I intended to say about one-dimensional valuation rings has in fact nothing to do with valuation rings, but should rather be the following: One-dimensional local domains are an example of a class of not necessarily noetherian rings that have the property asked for in the original question, i.e., weakly associated primes do not change along essential morphisms. 
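For reference, the two notions being contrasted can be written side by side ($\operatorname{Ass}^f$ is Bourbaki's notation for the set of weakly associated primes):

```latex
% Associated vs. weakly associated primes of an R-module L
\[
  \mathfrak{p} \in \operatorname{Ass}_R(L)
    \iff \mathfrak{p} = \operatorname{Ann}_R(x) \ \text{for some } x \in L ,
\]
\[
  \mathfrak{p} \in \operatorname{Ass}^{f}_R(L)
    \iff \mathfrak{p} \ \text{is a minimal prime of } \operatorname{Ann}_R(x) \ \text{for some } x \in L .
\]
```

One always has $\operatorname{Ass}_R(L) \subseteq \operatorname{Ass}^f_R(L)$, with equality whenever $R$ is noetherian, which is why the question is specific to the non-noetherian setting.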
• Well, any zero-dimensional quasilocal ring will do... – Neil Epstein Jan 30 '14 at 3:08 • Regarding your edit: I don't think your statement about one-dimensional valuation rings is true. If $(R,m)$ is a valuation ring with value group $\mathbb{Q}$, and $I$ is the ideal of elements with values greater than the square root of 2, then the $R$-module $R/I$ has no associated primes. But $m$ is obviously a weakly associated prime. – Neil Epstein Feb 12 '14 at 22:10 • Dear @Neil, thanks - I think now I wrote what I wanted to write... – Fred Rohrer Feb 12 '14 at 22:47
https://academ.us/list/eess/
### Interface Networks for Failure Localization in Power Systems Transmission power systems usually consist of interconnected sub-grids that are operated relatively independently. When a failure happens, it is desirable to localize its impact within the sub-grid where the failure occurs. This paper introduces three interface networks to connect sub-grids, achieving better failure localization while maintaining robust network connectivity. The proposed interface networks are validated with numerical experiments on the IEEE 118-bus test network under both DC and AC power flow models. ### Image Gradient Decomposition for Parallel and Memory-Efficient Ptychographic Reconstruction Ptychography is a popular microscopic imaging modality for many scientific discoveries and sets the record for highest image resolution. Unfortunately, the high image resolution for ptychographic reconstruction requires significant amount of memory and computations, forcing many applications to compromise their image resolution in exchange for a smaller memory footprint and a shorter reconstruction time. In this paper, we propose a novel image gradient decomposition method that significantly reduces the memory footprint for ptychographic reconstruction by tessellating image gradients and diffraction measurements into tiles. In addition, we propose a parallel image gradient decomposition method that enables asynchronous point-to-point communications and parallel pipelining with minimal overhead on a large number of GPUs. Our experiments on a Titanate material dataset (PbTiO3) with 16632 probe locations show that our Gradient Decomposition algorithm reduces memory footprint by 51 times. In addition, it achieves time-to-solution within 2.2 minutes by scaling to 4158 GPUs with a super-linear speedup at 364% efficiency. This performance is 2.7 times more memory efficient, 9 times more scalable and 86 times faster than the state-of-the-art algorithm. 
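The memory saving in the gradient decomposition method above comes from tessellating the problem into tiles that can be processed independently. A toy illustration of the tessellation step (plain Python, not the authors' code) that splits a 2D array into tiles and reassembles it losslessly:

```python
# Toy tessellation: split a 2D list into tile_h x tile_w tiles and stitch
# them back together. The actual algorithm tiles image gradients and
# diffraction measurements analogously so each worker holds only its tiles.

def split_tiles(image, tile_h, tile_w):
    """Return a dict mapping (row, col) tile indices to tile contents."""
    tiles = {}
    for i in range(0, len(image), tile_h):
        for j in range(0, len(image[0]), tile_w):
            tiles[(i // tile_h, j // tile_w)] = [
                row[j:j + tile_w] for row in image[i:i + tile_h]
            ]
    return tiles

def stitch_tiles(tiles, tile_h, tile_w, height, width):
    """Inverse of split_tiles: rebuild the full image from its tiles."""
    image = [[None] * width for _ in range(height)]
    for (ti, tj), tile in tiles.items():
        for r, row in enumerate(tile):
            for c, value in enumerate(row):
                image[ti * tile_h + r][tj * tile_w + c] = value
    return image

image = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = split_tiles(image, 2, 2)
assert stitch_tiles(tiles, 2, 2, 4, 4) == image  # lossless round trip
```

In a distributed setting, each tile (plus any halo its update rule needs) can then live on a different GPU, which is what makes the reported scaling possible.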
### Learning Based User Scheduling in Reconfigurable Intelligent Surface Assisted Multiuser Downlink Reconfigurable intelligent surface (RIS) is capable of intelligently manipulating the phases of the incident electromagnetic wave to improve the wireless propagation environment between the base-station (BS) and the users. This paper addresses the joint user scheduling, RIS configuration, and BS beamforming problem in an RIS-assisted downlink network with limited pilot overhead. We show that graph neural networks (GNN) with permutation invariant and equivariant properties can be used to appropriately schedule users and to design RIS configurations to achieve high overall throughput while accounting for fairness among the users. As compared to the conventional methodology of first estimating the channels then optimizing the user schedule, RIS configuration and the beamformers, this paper shows that an optimized user schedule can be obtained directly from a very short set of pilots using a GNN, then the RIS configuration can be optimized using a second GNN, and finally the BS beamformers can be designed based on the overall effective channel. Numerical results show that the proposed approach can utilize the received pilots more efficiently than the conventional channel estimation based approach, and can generalize to systems with an arbitrary number of users. ### Personalized Adversarial Data Augmentation for Dysarthric and Elderly Speech Recognition Despite the rapid progress of automatic speech recognition (ASR) technologies targeting normal speech, accurate recognition of dysarthric and elderly speech remains highly challenging tasks to date. It is difficult to collect large quantities of such data for ASR system development due to the mobility issues often found among these users. To this end, data augmentation techniques play a vital role. 
In contrast to existing data augmentation techniques only modifying the speaking rate or overall shape of spectral contour, fine-grained spectro-temporal differences between dysarthric, elderly and normal speech are modelled using a novel set of speaker dependent (SD) generative adversarial networks (GAN) based data augmentation approaches in this paper. These flexibly allow both: a) temporal or speed perturbed normal speech spectra to be modified and closer to those of an impaired speaker when parallel speech data is available; and b) for non-parallel data, the SVD decomposed normal speech spectral basis features to be transformed into those of a target elderly speaker before being re-composed with the temporal bases to produce the augmented data for state-of-the-art TDNN and Conformer ASR system training. Experiments are conducted on four tasks: the English UASpeech and TORGO dysarthric speech corpora; the English DementiaBank Pitt and Cantonese JCCOCC MoCA elderly speech datasets. The proposed GAN based data augmentation approaches consistently outperform the baseline speed perturbation method by up to 0.91% and 3.0% absolute (9.61% and 6.4% relative) WER reduction on the TORGO and DementiaBank data respectively. Consistent performance improvements are retained after applying LHUC based speaker adaptation. ### A microstructure estimation Transformer inspired by sparse representation for diffusion MRI Diffusion magnetic resonance imaging (dMRI) is an important tool in characterizing tissue microstructure based on biophysical models, which are complex and highly non-linear. Resolving microstructures with optimization techniques is prone to estimation errors and requires dense sampling in the q-space. Deep learning based approaches have been proposed to overcome these limitations. 
Motivated by the superior performance of the Transformer, in this work, we present a learning-based framework based on Transformer, namely, a Microstructure Estimation Transformer with Sparse Coding (METSC) for dMRI-based microstructure estimation with downsampled q-space data. To take advantage of the Transformer while addressing its limitation in large training data requirements, we explicitly introduce an inductive bias - model bias into the Transformer using a sparse coding technique to facilitate the training process. Thus, the METSC is composed with three stages, an embedding stage, a sparse representation stage, and a mapping stage. The embedding stage is a Transformer-based structure that encodes the signal to ensure the voxel is represented effectively. In the sparse representation stage, a dictionary is constructed by solving a sparse reconstruction problem that unfolds the Iterative Hard Thresholding (IHT) process. The mapping stage is essentially a decoder that computes the microstructural parameters from the output of the second stage, based on the weighted sum of normalized dictionary coefficients where the weights are also learned. We tested our framework on two dMRI models with downsampled q-space data, including the intravoxel incoherent motion (IVIM) model and the neurite orientation dispersion and density imaging (NODDI) model. The proposed method achieved up to 11.25 folds of acceleration in scan time and outperformed the other state-of-the-art learning-based methods. ### A New Hybrid Multi-Objective Scheduling Model for Hierarchical Hub and Flexible Flow Shop Problems Technologies and lifestyles have been increasingly geared toward consumerism in recent years. Accordingly, it is both the price and the delivery time that matter most to the ultimate customers of commercial enterprises. Consequently, the importance of having an optimal delivery time is becoming increasingly evident these days. 
Scheduling can be used to optimize supply chains and production systems in this manner, which is one practical method for lowering costs and boosting productivity. This paper suggests a multi-objective scheduling model for hierarchical hub structures (HHS) with three levels of service. The factory and customers hub (second level) and central are on the first level in which the factory has a Flexible Flow Shop (FFS) environment. The noncentral hub (third level) is responsible for the delivery of products made in the factory to customers. Customer nodes and factories are connected separately to the second level, and the non-central hubs are connected to the third level. The model's objective is to minimize transportation and production costs and product arrival times. To validate and evaluate the model, small instances have been solved and analyzed in detail with the weighted sum and e-constraint methods. Consequently, based on the ideal mean distance (MID) metric, the two methods were compared for the designed instances. As NP-hardness causes the previously proposed methods to solve large-scale problems to be time-consuming, a meta-heuristic method was developed to solve the large-scale problem. ### Joint Acoustic Echo Cancellation and Blind Source Extraction based on Independent Vector Extraction We describe a joint acoustic echo cancellation (AEC) and blind source extraction (BSE) approach for multi-microphone acoustic frontends. The proposed algorithm blindly estimates AEC and beamforming filters by maximizing the statistical independence of a non-Gaussian source of interest and a stationary Gaussian background modeling interfering signals and residual echo. Double talk-robust and fast-converging parameter updates are derived from a global maximum-likelihood objective function resulting in a computationally efficient Newton-type update rule. 
Evaluation with simulated acoustic data confirms the benefit of the proposed joint AEC and beamforming filter estimation in comparison to updating both filters individually. ### A Survey of Left Atrial Appendage Segmentation and Analysis in 3D and 4D Medical Images Atrial fibrillation (AF) is a cardiovascular disease identified as one of the main risk factors for stroke. The majority of strokes due to AF are caused by clots originating in the left atrial appendage (LAA). LAA occlusion is an effective procedure for reducing stroke risk. Planning the procedure using pre-procedural imaging and analysis has shown benefits. The analysis is commonly done by manually segmenting the appendage on 2D slices. Automatic LAA segmentation methods could save an expert's time and provide insightful 3D visualizations and accurate automatic measurements to aid in medical procedures. Several semi- and fully-automatic methods for segmenting the appendage have been proposed. This paper provides a review of automatic LAA segmentation methods on 3D and 4D medical images, including CT, MRI, and echocardiogram images. We classify methods into heuristic and model-based methods, as well as into semi- and fully-automatic methods. We summarize and compare the proposed methods, evaluate their effectiveness, and present current challenges in the field and approaches to overcome them. ### Joint Power Allocation and Beamformer for mmW-NOMA Downlink Systems by Deep Reinforcement Learning The high demand for data rate in the next generation of wireless communication could be ensured by Non-Orthogonal Multiple Access (NOMA) approach in the millimetre-wave (mmW) frequency band. Joint power allocation and beamforming of mmW-NOMA systems is mandatory which could be met by optimization approaches. To this end, we have exploited Deep Reinforcement Learning (DRL) approach due to policy generation leading to an optimized sum-rate of users. 
Actor-critic methods are utilized to measure the immediate reward and provide the new action to maximize the overall Q-value of the network. The immediate reward has been defined based on the summation of the rates of the two users, with the minimum guaranteed rate for each user and the sum of consumed power as the constraints. The simulation results demonstrate the superiority of the proposed approach over Time-Division Multiple Access (TDMA) and another optimized NOMA strategy in terms of the sum-rate of users.

### Application of NOMA in Vehicular Visible Light Communication Systems

In the context of an increasing interest in reducing the number of traffic accidents and associated victims, communication-based vehicle safety applications have emerged as one of the best solutions to enhance road safety. In this area, visible light communications (VLC) have great potential for applications due to their relatively simple design for basic functioning, efficiency, and large geographical distribution. Vehicular Visible Light Communication (VVLC) is preferred as a vehicle-to-everything (V2X) communication scheme due to its highly secure, low-complexity, and radio frequency (RF) interference-free characteristics, exploiting the line-of-sight (LoS) propagation of visible light and the use of already existing vehicle light-emitting diodes (LEDs). This research addresses the application of the Non-Orthogonal Multiple Access (NOMA) technique in VLC-based Vehicle-to-Vehicle (V2V) communication. The proposed system is simulated in almost realistic conditions and the performance of the system is analyzed under different scenarios.

### Robust Deep Neural Object Detection and Segmentation for Automotive Driving Scenario with Compressed Image Data

Deep neural object detection or segmentation networks are commonly trained with pristine, uncompressed data.
However, in practical applications the input images are usually deteriorated by compression that is applied to efficiently transmit the data. Thus, we propose to add deteriorated images to the training process in order to increase the robustness of the two state-of-the-art networks Faster and Mask R-CNN. Throughout our paper, we investigate an autonomous driving scenario by evaluating the newly trained models on the Cityscapes dataset that has been compressed with the upcoming video coding standard Versatile Video Coding (VVC). When employing the models that have been trained with the proposed method, the weighted average precision of the R-CNNs can be increased by up to 3.68 percentage points for compressed input images, which corresponds to bitrate savings of nearly 48 %. ### Analysis of Neural Image Compression Networks for Machine-to-Machine Communication Video and image coding for machines (VCM) is an emerging field that aims to develop compression methods resulting in optimal bitstreams when the decoded frames are analyzed by a neural network. Several approaches already exist improving classic hybrid codecs for this task. However, neural compression networks (NCNs) have made an enormous progress in coding images over the last years. Thus, it is reasonable to consider such NCNs, when the information sink at the decoder side is a neural network as well. Therefore, we build-up an evaluation framework analyzing the performance of four state-of-the-art NCNs, when a Mask R-CNN is segmenting objects from the decoded image. The compression performance is measured by the weighted average precision for the Cityscapes dataset. Based on that analysis, we find that networks with leaky ReLU as non-linearity and training with SSIM as distortion criteria results in the highest coding gains for the VCM task. 
Furthermore, it is shown that the GAN-based NCN architecture achieves the best coding performance and even out-performs the recently standardized Versatile Video Coding (VVC) for the given scenario. ### Evaluation of Video Coding for Machines without Ground Truth In the emerging field of video coding for machines, video datasets with pristine video quality and high-quality annotations are required for a comprehensive evaluation. However, existing video datasets with detailed annotations are severely limited in size and video quality. Thus, current methods have to either evaluate their codecs on still images or on already compressed data. To mitigate this problem, we propose an evaluation method based on pseudo ground-truth data from the field of semantic segmentation to the evaluation of video coding for machines. Through extensive evaluation, this paper shows that the proposed ground-truth-agnostic evaluation method results in an acceptable absolute measurement error below 0.7 percentage points on the Bjontegaard Delta Rate compared to using the true ground truth for mid-range bitrates. We evaluate on the three tasks of semantic segmentation, instance segmentation, and object detection. Lastly, we utilize the ground-truth-agnostic method to measure the coding performances of the VVC compared against HEVC on the Cityscapes sequences. This reveals that the coding position has a significant influence on the task performance. ### Accelerometry-based classification of circulatory states during out-of-hospital cardiac arrest Objective: During cardiac arrest treatment, a reliable detection of spontaneous circulation, usually performed by manual pulse checks, is both vital for patient survival and practically challenging. Methods: We developed a machine learning algorithm to automatically predict the circulatory state during cardiac arrest treatment from 4-second-long snippets of accelerometry and electrocardiogram data from real-world defibrillator records. 
The algorithm was trained based on 917 cases from the German Resuscitation Registry, for which ground truth labels were created by a manual annotation of physicians. It uses a kernelized Support Vector Machine classifier based on 14 features, which partially reflect the correlation between accelerometry and electrocardiogram data. Results: On a test data set, the proposed algorithm exhibits an accuracy of 94.4 (93.6, 95.2)%, a sensitivity of 95.0 (93.9, 96.1)%, and a specificity of 93.9 (92.7, 95.1)%. Conclusion and significance: In application, the algorithm may be used to simplify retrospective annotation for quality management and, moreover, to support clinicians in assessing the circulatory state during cardiac arrest treatment.

### Distribution-Aware Graph Representation Learning for Transient Stability Assessment of Power System

The real-time transient stability assessment (TSA) plays a critical role in the secure operation of the power system. Although the classic numerical integration method, i.e., time-domain simulation (TDS), has been widely used in industry practice, it inevitably incurs high computational complexity due to the high-dimensional sophistication of the power system. In this work, a data-driven power system estimation method is proposed to quickly predict the stability of the power system before TDS reaches the end of the simulation time window, which can reduce the average simulation time of stability assessment without loss of accuracy. As the topology of the power system is in the form of a graph structure, graph neural network based representation learning is naturally suitable for learning the status of the power system. Motivated by observing the distribution information of crucial active power and reactive power on the power system's bus nodes, we thus propose a distribution-aware learning (DAL) module to explore an informative graph representation vector for describing the status of a power system.
Then, TSA is re-defined as a binary classification task, and the stability of the system is determined directly from the resulting graph representation without numerical integration. Finally, we apply our method to the online TSA task. The case studies on the IEEE 39-bus system and Polish 2383-bus system demonstrate the effectiveness of our proposed method. ### Event-Based Control for Synchronization of Stochastic Linear Systems with Application to Distributed Estimation This paper studies the synchronization of stochastic linear systems which are subject to a general class of noises, in the sense that the noises are bounded in covariance but might be correlated with the states of agents and among each other. We propose an event-based control protocol for achieving the synchronization among agents in the mean square sense and theoretically analyze the performance of it by using a stochastic Lyapunov function, where the stability of $c$-martingales is particularly developed to handle the challenges brought by the general model of noises and the event-triggering mechanism. The proposed event-based synchronization algorithm is then applied to solve the problem of distributed estimation in sensor network. Specifically, by losslessly decomposing the optimal Kalman filter, it is shown that the problem of distributed estimation can be resolved by using the algorithms designed for achieving the synchronization of stochastic linear systems. As such, an event-based distributed estimation algorithm is developed, where each sensor performs local filtering solely using its own measurement, together with the proposed event-based synchronization algorithm to fuse the local estimates of neighboring nodes. With the reduced communication frequency, the designed estimator is proved to be stable under the minimal requirements of network connectivity and collective system observability. 
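The communication saving in an event-triggering mechanism like the one above comes from transmitting only when the local state has drifted far enough from the last broadcast value. A minimal scalar illustration (generic sketch, not the paper's protocol or triggering function):

```python
# Generic event-triggered transmission: a node broadcasts its state only
# when it deviates from the last broadcast value by more than a threshold,
# trading a bounded estimation error for reduced communication frequency.

def event_triggered_broadcasts(states, threshold):
    """Return the time indices at which a broadcast is triggered."""
    broadcasts = [0]            # always broadcast the initial state
    last_sent = states[0]
    for k, x in enumerate(states[1:], start=1):
        if abs(x - last_sent) > threshold:
            broadcasts.append(k)
            last_sent = x
    return broadcasts

trajectory = [0.0, 0.2, 0.4, 0.9, 1.0, 1.1, 2.0]
events = event_triggered_broadcasts(trajectory, threshold=0.5)
assert events == [0, 3, 6]  # 3 broadcasts instead of 7 periodic ones
```

The paper's contribution lies in proving mean-square synchronization (and stable distributed estimation) under such a rule despite state-correlated noise; the sketch only shows the triggering idea itself.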
### Polarization Tracking in the Presence of PDL and Fast Temporal Drift

In this paper, we analyze the effectiveness of polarization tracking algorithms in optical transmission systems suffering from fast state of polarization (SOP) rotations and polarization-dependent loss (PDL). While most gradient descent (GD)-based algorithms in the literature may require step-size adjustment when the channel condition changes, we propose tracking algorithms that perform similarly or better without parameter tuning. Numerical simulation results show higher robustness of the proposed algorithms to SOP and PDL drift compared to GD-based algorithms, making them promising candidates for aerial fiber links, where the SOP can drift rapidly and therefore becomes challenging to track.

### VesNet-RL: Simulation-based Reinforcement Learning for Real-World US Probe Navigation

Ultrasound (US) is one of the most common medical imaging modalities since it is radiation-free, low-cost, and real-time. In freehand US examinations, sonographers often navigate a US probe to visualize standard examination planes with rich diagnostic information. However, the reproducibility and stability of the resulting images often suffer from intra- and inter-operator variation. Reinforcement learning (RL), as an interaction-based learning method, has demonstrated its effectiveness in visual navigation tasks; however, RL is limited in terms of generalization. To address this challenge, we propose a simulation-based RL framework for real-world navigation of US probes towards the standard longitudinal views of vessels. A UNet is used to provide binary masks from US images; thereby, the RL agent trained on simulated binary vessel images can be applied in real scenarios without further training. To accurately characterize actual states, a multi-modality state representation structure is introduced to facilitate the understanding of environments.
Moreover, considering the characteristics of vessels, a novel standard view recognition approach based on the minimum bounding rectangle is proposed to terminate the searching process. To evaluate the effectiveness of the proposed method, the trained policy is validated virtually on 3D volumes of a volunteer's in-vivo carotid artery, and physically on custom-designed gel phantoms using robotic US. The results demonstrate that the proposed approach can effectively and accurately navigate the probe towards the longitudinal view of vessels.

### STAR-RIS-Assisted Hybrid NOMA mmWave Communication: Optimization and Performance Analysis

The simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) has recently emerged as a prominent technology that exploits the transmissive property of RIS to mitigate the half-space coverage limitation of conventional RIS operating on millimeter-wave (mmWave). In this paper, we study a downlink STAR-RIS-based multi-user multiple-input single-output (MU-MISO) mmWave hybrid non-orthogonal multiple access (H-NOMA) wireless network, for which a sum-rate maximization problem is formulated. The design of active and passive beamforming vectors, together with time and power allocation for H-NOMA, is a highly coupled non-convex problem. To handle it, we propose an optimization framework based on alternating optimization (AO) that iteratively solves the active and passive beamforming sub-problems. Channel-correlation and channel-strength-based techniques are proposed for optimal clustering and decoding-order assignment, respectively, in the specific two-user case, for which analytical solutions to joint power and time allocation for H-NOMA are also derived.
Simulation results show that: 1) the proposed framework leveraging H-NOMA outperforms conventional OMA and NOMA in maximizing the achievable sum-rate; 2) using the proposed framework, the number of supported clusters under the given design constraints can be increased considerably; 3) with STAR-RIS, the number of elements can be significantly reduced compared to conventional RIS while ensuring a similar quality-of-service (QoS).

### Energy return on investment analysis of the 2035 Belgian energy system

Planning the defossilization of energy systems by facilitating high penetration of renewables while maintaining access to abundant and affordable primary energy resources is a nontrivial multi-objective problem encompassing economic, technical, environmental, and social aspects. However, so far, most long-term policies to decrease the carbon footprint of our societies take the cost of the system as the leading indicator in energy system models. To address this gap, we developed a new approach by adding the energy return on investment (EROI) to a whole-energy system model, and built a database of EROI values for all technologies and resources considered. This novel model is applied to the Belgian energy system in 2035 for several greenhouse gas emissions targets. However, moving away from fossil-based to carbon-neutral energy systems raises the issue of uncertainty in low-carbon technology and resource data. Thus, we conduct a global sensitivity analysis to identify the main parameters driving the variations in the EROI of the system.
In this case study, the main results are threefold: (i) the EROI of the system decreases from 8.9 to 3.9 when greenhouse gas emissions are reduced by 5; (ii) renewable fuels - mainly imported renewable gas - represent the largest share of the system's primary energy mix due to the lack of endogenous renewable resources such as wind and solar; (iii) in the sensitivity analysis, the renewable fuels drive 67% of the variation of the EROI of the system in low greenhouse gas emissions scenarios. The decrease in the EROI of the system raises the question of whether climate targets can be met without adverse socio-economic impact. Thus, accounting for other criteria in energy planning models that nuance the cost-based results is essential to guide policy-makers in addressing the challenges of the energy transition.

### Slimmable Video Codec

Neural video compression has emerged as a novel paradigm combining trainable multilayer neural networks and machine learning, achieving competitive rate-distortion (RD) performance but still remaining impractical due to heavy neural architectures with large memory and computational demands. In addition, models are usually optimized for a single RD tradeoff. Recent slimmable image codecs can dynamically adjust their model capacity to gracefully reduce memory and computation requirements without harming RD performance. In this paper we propose a slimmable video codec (SlimVC) by integrating a slimmable temporal entropy model in a slimmable autoencoder. Despite a significantly more complex architecture, we show that slimming remains a powerful mechanism to control rate, memory footprint, computational cost, and latency, all of which are important requirements for practical video compression.

### Controlled Mobility for C-V2X Road Safety Reception Optimization

The use case of C-V2X for road safety requires real-time network connection and information exchange between vehicles.
In order to improve the reliability and safety of the system, intelligent networked vehicles need to move cooperatively to achieve network optimization. In this paper, we use the C-V2X sidelink mode 4 abstraction and the regression results of C-V2X network-level simulation to formulate the optimization of packet reception rate (PRR) with fairness in the road safety scenario. Under this optimization framework, we design a controlled mobility algorithm for the transmission node to adaptively adjust its position to maximize the aggregated PRR using only one-hop information. Simulation results show that the algorithm converges and improves the aggregated PRR and fairness for C-V2X mode 4 broadcast messages.

### Prototype Development and Validation of a Beam-Divergence Control System for Free-Space Laser Communications

Being able to dynamically control the transmitted-beam divergence can bring important advantages in free-space optical communications. Specifically, this technique can help to optimize the overall communications performance when the optimum laser-beam divergence is not fixed or known. This is the case in most realistic space laser communication systems, since the optimum beam divergence depends on multiple factors that can vary with time, such as the link distance, or cannot be accurately known, such as the actual pointing accuracy. Dynamic beam-divergence control makes it possible to optimize the link performance for every platform, scenario, and condition. NICT is currently working towards the development of a series of versatile lasercom terminals that can fit a variety of conditions, for which adaptive control of the transmitted beam divergence is a key element. This manuscript presents a prototype of a beam-divergence control system designed and developed by NICT and Tamron to evaluate this technique and to be later integrated within the lasercom terminals.
The basic design of the prototype is introduced, as well as the first validation tests that demonstrate its performance.

### High-Frequency Tunable Resistorless Memcapacitor Emulator and Application

In this paper, a new design is proposed for the realization of high-frequency memcapacitor emulators built with three OTAs. The paper also proposes an application of the memcapacitor as an amplitude modulator. Furthermore, applications of the memcapacitor as a filter, oscillator, point attractor, and period doubler are shown. The proposed circuits can be configured in both incremental and decremental topologies, and are much simpler in design than existing realizations. The performance of all the proposed circuits has been verified in Cadence Virtuoso Spectre using a standard 180 nm CMOS process. Furthermore, post-layout simulations and their comparison have been carried out.

### Probabilistic Estimation of Chirp Instantaneous Frequency Using Gaussian Processes

We present a probabilistic approach for estimating a chirp signal and its instantaneous frequency function when the true forms of the chirp and instantaneous frequency are unknown. To do so, we represent them by jointly cascading Gaussian processes governed by a non-linear stochastic differential equation, and estimate their posterior distribution using stochastic filters and smoothers. The model parameters are determined via maximum likelihood estimation. Theoretical results show that the estimation method has a bounded mean squared error. Experiments show that the method outperforms a number of baseline methods on a synthetic model, and we also apply the method to analyse gravitational-wave data.
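As background on the chirp model the abstract refers to — a sinusoid whose instantaneous frequency is the derivative of its phase divided by $2\pi$ — here is a minimal sketch with a hypothetical linear frequency law (the paper's point is to estimate this law when it is unknown; none of the parameters below come from the paper):

```python
import numpy as np

# A chirp is sin(phi(t)) with instantaneous frequency f(t) = phi'(t) / (2*pi).
# Illustrative linear chirp sweeping from 5 Hz to 50 Hz over one second.
fs = 1000.0                                   # sample rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
f_inst = 5.0 + 45.0 * t                       # instantaneous frequency (Hz)
phase = 2.0 * np.pi * np.cumsum(f_inst) / fs  # phase = 2*pi * integral of f
chirp = np.sin(phase)

# The instantaneous frequency is recovered from the phase increments.
f_recovered = np.diff(phase) * fs / (2.0 * np.pi)
```

With the true phase in hand the recovery is exact; the hard problem the paper addresses is inferring `f_inst` from the noisy `chirp` samples alone.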
### Efficient Path Planning and Tracking for Multi-Modal Legged-Aerial Locomotion Using Integrated Probabilistic Road Maps (PRM) and Reference Governors (RG)

There have been several successful implementations of bio-inspired legged robots that can trot, walk, and hop robustly even in the presence of significant unplanned disturbances. Despite these accomplishments, practical control and high-level decision-making algorithms for multi-modal legged systems have been overlooked. In nature, animals such as birds impressively showcase multiple modes of mobility, including legged and aerial locomotion. They are capable of performing robust locomotion over large walls and through tight spaces, and can recover from unpredictable situations such as sudden gusts or slippery surfaces. Inspired by these animals' versatility and their ability to combine legged and aerial mobility to negotiate their environment, our main goal is to design and control legged robots that integrate two completely different forms of locomotion, ground and aerial mobility, in a single platform. Our robot, the Husky Carbon, is being developed to integrate aerial and legged locomotion and to transform between legged and aerial mobility. This work utilizes a Reference Governor (RG) based on low-level control of Husky's dynamical model to maintain the efficiency of legged locomotion, and uses Probabilistic Road Maps (PRM) and 3D A* algorithms to generate an optimal path based on the energetic cost of transport for legged and aerial mobility.

### Bang-Bang Control Of A Tail-less Morphing Wing Flight

Bats' dynamic morphing wings are known to be extremely high-dimensional, and they employ a combination of inertial dynamics and aerodynamic manipulation to showcase extremely agile maneuvers. Bats heavily rely on their highly flexible wings and are capable of dynamically morphing them to adjust the aerodynamic and inertial forces applied to the wings and perform sharp banking turns.
There are technical hardware and control challenges in replicating the morphing-wing flight capabilities of flying animals. This work focuses mainly on the modeling and control aspects of stable, tail-less, morphing-wing flight. A classical bang-bang control approach is proposed to stabilize a bio-inspired morphing-wing robot called Aerobat. A robot-environment interaction model based on horseshoe vortex shedding and Wagner functions is derived to realistically evaluate the feasibility of the bang-bang control, which is then implemented on the robot in experiments to demonstrate the first closed-loop stable flights of Aerobat.

### Optimal Order of Encoding for Gaussian MIMO Multi-Receiver Wiretap Channel

The Gaussian multiple-input multiple-output (MIMO) multi-receiver wiretap channel is studied in this paper. The base station broadcasts confidential messages to K intended users while keeping the messages secret from an eavesdropper. The capacity of this channel has already been characterized by applying dirty-paper coding and stochastic encoding. However, K factorial encoding orders may need to be enumerated for that, which makes the problem intractable. We prove that there exists one optimal encoding order, reducing the K factorial candidate orders to a single encoding. The optimal encoding order is established by forming a secrecy weighted sum rate (WSR) maximization problem. The optimal order is the same as that for the MIMO broadcast channel without secrecy constraints; that is, the weights of the users' rates in the WSR maximization problem determine the optimal encoding order. Numerical results verify the optimal encoding order.

### Blind Deconvolution with Non-smooth Regularization via Bregman Proximal DCAs

Blind deconvolution is a technique for recovering an original signal without knowing the convolving filter. Under certain assumptions, it is naturally formulated as the minimization of a quartic objective function.
Because its differentiable part does not have a Lipschitz continuous gradient, existing first-order methods are not theoretically supported. In this letter, we employ Bregman-based proximal methods, whose convergence is theoretically guaranteed under the $L$-smad property. We first reformulate the objective function as a difference of convex (DC) functions and apply the Bregman proximal DC algorithm (BPDCA); this DC decomposition satisfies the $L$-smad property. The method is extended to BPDCA with extrapolation (BPDCAe) for faster convergence. When the regularizer has a sufficiently simple structure, each iteration is solved in closed form, and thus our algorithms solve large-scale problems efficiently. We also provide a stability analysis of the equilibria and demonstrate the proposed methods through numerical experiments on image deblurring. The results show that BPDCAe successfully recovers the original image and outperforms other existing algorithms.

### Data-Driven Upper Bounds on Channel Capacity

We consider the problem of estimating an upper bound on the capacity of a memoryless channel with unknown channel law and continuous output alphabet. A novel data-driven algorithm is proposed that exploits the dual representation of capacity, in which the maximization over the input distribution is replaced by a minimization over a reference distribution on the channel output. To efficiently compute the required divergence maximization between the conditional channel and the reference distribution, we use a modified mutual information neural estimator that takes the channel input as an additional parameter. We evaluate our approach on different memoryless channels and show that the estimated upper bounds closely converge either to the channel capacity or to the best-known lower bounds.
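The dual representation of capacity used above — for any reference output distribution $Q$, capacity is upper-bounded by $\max_x D(P_{Y|X=x}\,\|\,Q)$ — can be illustrated on a toy discrete channel. This sketch uses a binary symmetric channel with a uniform reference distribution (where the bound happens to be tight), not the continuous-output neural-estimator setting of the paper:

```python
import math

def kl_bits(p, q):
    """KL divergence D(p||q) in bits between two discrete distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Dual bound: for ANY output reference distribution Q,
#   C <= max_x D(P(Y|X=x) || Q).
# For a BSC(p) with uniform Q, this bound equals the capacity 1 - H(p).
p = 0.11
cond = [[1 - p, p], [p, 1 - p]]   # channel rows P(Y | X=x)
q = [0.5, 0.5]                    # reference output distribution
upper = max(kl_bits(row, q) for row in cond)
capacity = 1 - (-p * math.log2(p) - (1 - p) * math.log2(1 - p))
print(upper, capacity)  # numerically equal: the bound is tight here
```

Choosing a worse reference $Q$ only loosens the bound, which is why the paper minimizes over $Q$ (parameterized by a neural estimator) to get the tightest data-driven upper bound.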
### An Information-theoretic Method for Collaborative Distributed Learning with Limited Communication

In this paper, we study the information transmission problem under the distributed learning framework, where each worker node is permitted to transmit only an $m$-dimensional statistic to improve the learning results of the target node. Specifically, we evaluate the corresponding expected population risk (EPR) in the regime of large sample sizes. We prove that performance can be enhanced since the transmitted statistics contribute to estimating the underlying distribution under the mean square error measured by the EPR norm matrix. Accordingly, the transmitted statistics correspond to eigenvectors of this matrix, and the desired transmission allocates these eigenvectors among the statistics such that the EPR is minimized. Moreover, we provide the analytical solution of the desired statistics for single-node and two-node transmission, with a geometrical interpretation of the eigenvector selection. For the general case, an efficient algorithm that outputs the allocation solution is developed based on node partitions.

### Unified Modeling of Multi-Domain Multi-Device ASR Systems

Modern Automatic Speech Recognition (ASR) systems often use a portfolio of domain-specific models in order to get high accuracy for distinct user utterance types across different devices. In this paper, we propose an innovative approach that integrates the different per-domain per-device models into a unified model, using a combination of domain embedding, domain experts, mixture of experts, and adversarial training. We run careful ablation studies to show the benefit of each of these innovations in contributing to the accuracy of the overall unified model.
Experiments show that our proposed unified modeling approach actually outperforms the carefully tuned per-domain models, giving relative gains of up to 10% over a baseline model with a negligible increase in the number of parameters.

### The ACM Multimedia 2022 Computational Paralinguistics Challenge: Vocalisations, Stuttering, Activity, & Mosquitoes

The ACM Multimedia 2022 Computational Paralinguistics Challenge addresses four different problems for the first time in a research competition under well-defined conditions: in the Vocalisations and Stuttering Sub-Challenges, a classification of human non-verbal vocalisations and speech has to be made; the Activity Sub-Challenge aims at beyond-audio human activity recognition from smartwatch sensor data; and in the Mosquitoes Sub-Challenge, mosquitoes need to be detected. We describe the Sub-Challenges, baseline feature extraction, and classifiers based on the usual ComParE and BoAW features, the auDeep toolkit, and deep feature extraction from pre-trained CNNs using the DeepSpectrum toolkit; in addition, we add end-to-end sequential modelling and a log-mel-128-BNN.
https://istopdeath.com/solve-graphically-9w2225/
# Solve Graphically 9w^2=225

Graph each side of the equation: y = 9w^2 and y = 225. The solutions are the w-values of the points of intersection, w = -5 and w = 5.
http://www.dummies.com/how-to/content/writing-a-matrix-in-augmented-form.navId-611287.html
An alternative to writing a system of equations as the product of a coefficient matrix and variable matrix equaling an answer matrix is what's known as augmented form; this is where the coefficient matrix and the answer matrix are written in the same matrix, separated in each row by colons. This setup makes using elementary row operations to solve a matrix much simpler because you only have one matrix on your plate at a time (as opposed to three!). Mathematicians are a very lazy bunch, and they like to write as little as possible. Using augmented form cuts down on the amount that you have to write. And when you're attempting to solve a system of equations that requires many steps, you'll be thankful to be writing less! Then you can use elementary row operations to get the solution to your system.
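As a hypothetical illustration (this small system is made up for the sketch, not taken from the original page), here is a 2x2 system written in augmented form and solved numerically:

```python
import numpy as np

# Hypothetical system:  2x + y = 5,  x - 3y = -1
A = np.array([[2.0, 1.0], [1.0, -3.0]])  # coefficient matrix
b = np.array([5.0, -1.0])                # answer matrix (as a column)

# Augmented form [A | b]: coefficients and answers in one matrix,
# so elementary row operations act on a single object.
aug = np.hstack([A, b.reshape(-1, 1)])

x = np.linalg.solve(A, b)
print(x)  # ≈ [2., 1.]
```

Row-reducing `aug` by hand (R2 -> 2*R2 - R1, then back-substitute) gives the same answer, x = 2, y = 1.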
https://physics.stackexchange.com/questions/275883/why-repulsive-term-in-lennard-jones-potential-is-not-related-with-spin-state
# Why is the repulsive term in the Lennard-Jones potential not related to spin state?

I have read Solid State Physics by Kittel. It says that the repulsive term is related to the Pauli exclusion principle and that we can describe it as r^-12. But I don't know why the two cases — for example, two hydrogen atoms whose electrons have the same spin or opposite spins — should have the same r^-12 dependence. I would think that if the spins are the same, the repulsive force should be stronger than in the opposite-spin case, because of the Pauli exclusion principle.

• You should think of the Lennard-Jones potential as a very crude approximation. There should be more dependencies, as you mention, but we are being lazy with how much computation we do. – AHusain Aug 23 '16 at 4:53
• Yes, I believe it is $r^{-12}$ just because it happens to be the square of the $r^{-6}$ asymptotic scaling of dipolar van der Waals fluctuations, and a square is faster to calculate than an arbitrary exponent. On paper it makes the equilibrium distance solvable with a quadratic equation, and on computer FPUs it is faster as well. – Mikael Kuisma Aug 23 '16 at 5:36
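To make the comment about $r^{-12}$ being the square of $r^{-6}$ concrete, here is a minimal numerical sketch of the 12-6 potential in reduced units (the parameter values are illustrative):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones potential: 4*eps*((sigma/r)**12 - (sigma/r)**6).

    The r**-12 repulsive term is literally the square of the r**-6
    attractive term, which is part of why it is computationally cheap.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The minimum sits at r = 2**(1/6) * sigma, with depth -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
print(lennard_jones(r_min))  # ≈ -1.0
```

Setting the derivative of `4*(u**2 - u)` with respect to `u = (sigma/r)**6` to zero gives `u = 1/2`, i.e. `r = 2**(1/6) * sigma`, confirming the quadratic-equation remark in the second comment.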
https://www.cheenta.com/integer-problem-isi-b-stat-objective-problem-156/
Integer Problem | ISI BStat | Objective Problem 156

Try this beautiful problem based on integers, useful for the ISI B.Stat Entrance.

Let n be any integer. Then $n(n + 1)(2n + 1)$

• (a) is a perfect square
• (b) is an odd number
• (c) is an integral multiple of 6
• (d) does not necessarily have any of the foregoing properties.

Key Concepts: integers, perfect squares, odd numbers

Answer: (c) is an integral multiple of 6

Source: TOMATO, Problem 156; Challenges and Thrills in Pre College Mathematics

Try with hints: $n(n + 1)$ is divisible by $2$, since $n$ and $n+1$ are consecutive integers. If $n \not\equiv 0 \pmod 3$, two cases arise. Case 1: if $n \equiv 1 \pmod 3$, then $2n + 1$ is divisible by $3$. Case 2: if $n \equiv 2 \pmod 3$, then $n + 1$ is divisible by $3$. Finally, if $n$ is itself divisible by $3$, the product contains that factor directly. In every case $n(n + 1)(2n + 1)$ is divisible by both $2$ and $3$, hence by $2 \cdot 3 = 6$, so option (c) is correct.
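The case analysis can be sanity-checked numerically, and small values also rule out the other options:

```python
# Numerical check: n(n+1)(2n+1) is a multiple of 6 for every integer n,
# while options (a) and (b) already fail at n = 1.
def f(n):
    return n * (n + 1) * (2 * n + 1)

assert all(f(n) % 6 == 0 for n in range(-1000, 1001))
print(f(1))  # 6: neither a perfect square nor odd, ruling out (a) and (b)
```

Incidentally, $f(n)$ is six times the sum of squares $1^2 + \cdots + n^2$ for positive $n$, which is another way to see the divisibility by 6.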
http://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/f2018/amm452_fb262/amm452_fb262/amm452_fb262/index.html
# Self-Balancing Robot

## Introduction

The inverted pendulum is an interesting case in the study of control systems because of its unstable nature. A pendulum is considered inverted when its center of mass is placed above its pivot point, meaning that its only equilibrium point is when its center of mass is directly above the pivot point. This is an unstable system because if it deviates any arbitrarily small amount from equilibrium, the forces acting on the system will cause it to move even further away. This problem is of particular interest in the study of control systems likely because it's a simple example of an unstable system, and it's very obvious when equilibrium is being maintained successfully. If we limit the system to a single degree of freedom by locking the pole to a single axis of rotation, the system is simple enough that we only need to monitor a single value in order to control the system: the angle between the pivot, center of mass, and the vertical axis. Recently, this topic has become relevant in many new practical applications, typically in the form of two-wheeled robots that are able to maintain balance and move around easily. Personal transportation machines such as the Segway PT and the self-balancing scooters often mislabelled as "hoverboards" both solve the problem of controlling an inverted pendulum well enough that they can handle supporting a human being on top of them. Our project for the course ECE 4760 was to build our own self-balancing two-wheeled robot. Our robot is controlled by a PIC32 microcontroller, uses an inertial measurement unit to keep track of its pitch value and two motors to drive the wheels at the bottom of the robot, and it attempts to maintain equilibrium by using a PID controller to counteract its pitch value by driving the wheels such that they move beneath the center of mass.
Working on this project allowed us to learn about the many challenges involved in controlling an unstable system, about how to efficiently debug a system by using a mixture of study and experimentation, and about the principles behind many good filtering systems that we used to filter data from our inertial measurement unit.

An image of the final version of our robot.

### F. Code Listing

• main.c - our main code file, contains all threads and ISRs, as well as the main function.
• i2c_lib.h - used to interface with the IMU over the I2C protocol, adapted from a previous project's I2C library.
• kalman_lib.h - our implementation of a Kalman filter. Ultimately not used in this project, but we included it here anyway.
• finalProject.X.zip - a zipped folder containing our full MPLabX project.

### G. Acknowledgements

We'd like to thank Bruce Land and his teaching assistants for their patience, support, and advice throughout this project.
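The PID loop described in the introduction can be sketched as follows. The gains, timestep, and structure here are illustrative only, not the project's actual PIC32 implementation:

```python
# Minimal discrete PID controller of the kind described: the error is the
# pitch angle reported by the IMU, and the output drives the wheel motors
# so the wheels move back under the center of mass.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """One control tick: return the actuation for the current error."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

Each control tick would read the filtered pitch from the IMU, call `update(pitch)`, and map the result to motor PWM; the derivative term is why the IMU data must be well filtered, since it amplifies sensor noise.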
https://deepai.org/publication/stochastic-bandits-with-delayed-composite-anonymous-feedback
# Stochastic Bandits with Delayed Composite Anonymous Feedback

We explore a novel setting of the Multi-Armed Bandit (MAB) problem, inspired by real-world applications, which we call bandits with "stochastic delayed composite anonymous feedback" (SDCAF). In SDCAF, the rewards for pulling arms are stochastic with respect to time but spread over a fixed number of time steps in the future after pulling the arm. The complexity of this problem stems from the anonymous feedback to the player and the stochastic generation of the reward. Due to the aggregated nature of the rewards, the player is unable to associate the reward with a particular time step from the past. We present two algorithms for this more complicated SDCAF setting, using phase-based extensions of the UCB algorithm. We perform regret analysis to show sub-linear theoretical guarantees for both algorithms.

## 1 Introduction

Multi-Armed Bandits (MAB) are a well-studied problem in machine learning theory, capturing the exploration-exploitation trade-off in online decision making. MAB has applications in domains like e-commerce, computational advertising, clinical trials, recommendation systems, etc. In most real-world applications, assumptions of the original theoretical MAB model, such as immediate rewards and non-stochasticity of the rewards, do not hold. A more natural setting is when the rewards for pulling bandit arms are delayed into the future, since the effects of actions are not always immediately observed. Pike-Burke et al. (2018) first explored this setting, assuming stochastic rewards for pulling an arm which are obtained at some specific time step in the future. This setting is called delayed, aggregated, anonymous feedback (DAAF). The complexity of this problem stems from anonymous feedback: the model is unable to distinguish which of the previous time steps the rewards obtained at a particular time originated from. Cesa-Bianchi et al. (2018) extended this work by relaxing the temporal specificity of observing the reward at one specific future time: the reward for pulling an arm can now be spread adversarially over multiple time steps in the future. However, they made an added assumption of non-stochasticity of the rewards from each arm, thereby observing the same total reward each time the same arm is pulled. This scenario of non-stochastic composite anonymous feedback (CAF) covers several applications, but it still does not cover the entire spectrum of applications.
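The composite anonymous feedback described above can be illustrated with a small simulation (a hypothetical sketch, not code from the paper: the function name, the Bernoulli rewards, and the uniform spreading over d steps are all our own illustrative assumptions). Each pull draws a stochastic reward and spreads it over the next d time steps; the player observes only the per-step aggregate, with no attribution to past pulls.

```python
import random

def simulate_sdcaf(means, d, horizon, policy, seed=0):
    """Simulate stochastic delayed composite anonymous feedback (SDCAF).

    Each pull of arm i draws a Bernoulli reward with mean means[i] and
    spreads it uniformly over the next d time steps; at each step the
    player observes only the aggregate of all components arriving then.
    """
    rng = random.Random(seed)
    pending = [0.0] * (horizon + d)   # reward components yet to arrive
    observed = []
    for t in range(horizon):
        arm = policy(t)
        reward = 1.0 if rng.random() < means[arm] else 0.0
        for s in range(d):            # spread the reward over d future steps
            pending[t + s] += reward / d
        observed.append(pending[t])   # anonymous aggregate at time t
    return observed

# Always pulling arm 0 (mean 0.5): the long-run average observation
# approaches 0.5, even though no single observation is attributable.
obs = simulate_sdcaf([0.5, 0.2], d=3, horizon=10000, policy=lambda t: 0)
print(sum(obs) / len(obs))
```

Note that apart from the d - 1 components that leak past the horizon, the total observed reward equals the total generated reward, which is why phase-based averaging (Section 3) can still recover arm means.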
Consider a clinical trial where the benefits of different medicines on improving patient health are observed. CAF offers a more natural fit for this scenario than DAAF, since the benefits from a medicine can be spread over multiple steps after taking it rather than arriving all at once at a single future time step. However, the effects of the medicine might differ from patient to patient, so assuming the same total health improvement on each use of a specific medicine is not very realistic. Inspired by this real-world setting, we suggest a more general bandit setting: CAF with the non-stochasticity assumption dropped. We study such a MAB setting with stochastic delayed composite anonymous feedback (SDCAF). In our model, a player interacts with an environment of K actions (or arms) in a sequential fashion. At each time step the player selects an action, which leads to a reward generated at random from an underlying reward distribution and spread over a fixed number of time steps after pulling the arm. More precisely, we assume that the reward for choosing an action at a given time is adversarially spread over at most d consecutive time steps in the future. At the end of each round, the player observes only the sum of all the reward components that arrive in that round. Crucially, the player does not know which of the past plays have contributed to this aggregated reward. Extending algorithms from the theoretical model of SDCAF to practical applications involves obtaining guarantees on the rewards obtained from them. The goal is to maximize the cumulative reward from plays of the bandit, or equivalently to minimize the regret (the total difference between the reward of the optimal action and the actions taken). We present two algorithms for solving this setting, which involve running a modified version of the UCB algorithm Auer et al.
(2002) in phases where the same arm is pulled multiple times in each phase. This is motivated by the aim to reduce the error in approximating the mean reward of a particular arm due to extra and missing reward components from adjacent arm pulls. We prove sub-linear regret bounds for both these algorithms.

### 1.1 Related Work

Online learning with delayed feedback has been studied in the non-bandit setting by Weinberger and Ordentlich (2006); Mesterharm (2005); Langford et al. (2009); Joulani et al. (2013); Quanrud and Khashabi (2015); Joulani et al. (2016); Garrabrant et al. (2016) and in the bandit setting by Neu et al. (2010); Joulani et al. (2013); Mandel et al. (2015); Cesa-Bianchi et al. (2016); Vernade et al. (2017); Pike-Burke et al. (2018). Dudík et al. (2011) consider stochastic contextual bandits with a constant delay and Desautels et al. (2014) consider Gaussian Process bandits with a bounded stochastic delay. The general observation that delay causes an additive regret penalty in stochastic bandits and a multiplicative one in adversarial bandits is made in Joulani et al. (2013). The delayed composite loss function of our setting generalizes the composite loss function setting of Dekel et al. (2014).

## 2 Problem Definition

There are $K$ actions or arms. Each arm $i$ is associated with a reward distribution $\nu_i$ supported in $[0,1]$, with mean $\mu_i$. Let $R_t(i)$ be a random variable standing for the total reward obtained on pulling arm $i$ at time $t$. $R_t(i)$ is drawn from the distribution $\nu_i$, and may be spread over a maximum of $d$ time steps in an adversarial manner: it is the sum of $d$ components, with the $s$-th component being the reward obtained at time $t+s-1$ on pulling arm $i$ at time $t$, for $s = 1, \ldots, d$. Let $i_t$ denote the action chosen by the player at the beginning of round $t$. The player then obtains the first reward component at time $t$, the second at time $t+1$, and so on until time $t+d-1$. The reward $X_t$ that the player observes at time $t$ is the combined reward obtained at time $t$,
which is the sum of all past reward contributions arriving at time $t$ (components from pulls at times $t-d+1, \ldots, t$).

## 3 Algorithms

We present two algorithms for this SDCAF setting, in Algorithm 1 and Algorithm 2 respectively. For Algorithm 2 we only specify the additional inputs and initialization over Algorithm 1. We first provide the intuition behind the algorithms and then a formal regret analysis.

Algorithm 1 is a modified version of the standard UCB algorithm, run in phases where the same arm is pulled multiple times while maintaining an upper confidence bound on the reward from each arm. More specifically, each phase consists of two steps. In Step 1, the arm with maximum upper confidence bound is selected. In Step 2, the selected arm is pulled $k$ times repeatedly. We track all time steps where arm $i$ is played up to phase $m$ in the set $S_i(m)$. The rewards obtained are used to update the running estimate of the arm mean $\hat{\mu}_i$. The intuition behind running the algorithm in phases is to gather sufficient rewards from a single arm so as to have a good estimate of the arm's mean reward. This helps us bound the error in our reward estimate due to extra rewards from the previous phase and missing rewards which seep into the next phase due to delay. In every phase of the algorithm, the selected arm is pulled a fixed number of times $k$; from our regret analysis, the choice of $k$ in (5) below achieves sub-linear regret.

Algorithm 2 is a modified version of the improved-UCB algorithm from Auer and Ortner (2010), run in phases where a set of active arms is maintained and pruned based on the arm mean estimates. Each phase consists of two steps. In Step 1, each active arm is played repeatedly for $n_m$ steps. We track all time steps where arm $j$ was played in the first $m$ phases in the set $S_j(m)$. In Step 2, a new estimate of the arm's mean reward is calculated as the average of the observations from the time steps in $S_j(m)$. An arm is eliminated if its calculated estimate falls below a threshold determined by the best active arm's estimate and the current tolerance $\tilde{\Delta}_m$.
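The phase-based UCB idea in Algorithm 1 can be sketched as follows. This is our own illustrative reconstruction, not the paper's exact pseudocode: the confidence radius, the initial round-robin over arms, and the toy reward function are assumptions made for the example.

```python
import math, random

def phased_ucb(pull_phase, n_arms, n_phases, k, delta=0.01):
    """Phase-based UCB: each phase pulls one arm k consecutive times.

    pull_phase(arm, k) must return the sum of the k (possibly delayed,
    aggregated) observations collected while the arm was played.
    """
    counts = [0] * n_arms          # total pulls per arm
    sums = [0.0] * n_arms          # total observed reward per arm
    for phase in range(n_phases):
        if phase < n_arms:
            arm = phase            # play every arm once first
        else:
            # Step 1: pick the arm maximizing the upper confidence bound
            arm = max(range(n_arms),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(1 / delta) / counts[i]))
        # Step 2: pull the selected arm k times and update its estimate
        sums[arm] += pull_phase(arm, k)
        counts[arm] += k
    return counts

# Toy check: arm 1 (mean 0.9) should end up played far more than arm 0.
rng = random.Random(1)
def pull(arm, k):
    mean = [0.1, 0.9][arm]
    return sum(1.0 for _ in range(k) if rng.random() < mean)

counts = phased_ucb(pull, n_arms=2, n_phases=200, k=10)
print(counts)
```

Pulling the arm k times per phase is what controls the delay error: at most d reward mass can leak across a phase boundary, so the per-phase estimation bias is at most d/k (Lemma 1).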
The number of times $n_m$ each arm is pulled depends on the phase and is chosen such that the confidence bounds on the estimation error of the arm means hold with a given probability. This algorithm is adapted from Pike-Burke et al. (2018), but we get rid of the bridge period from the original algorithm, as it does not impair the validity of the confidence bounds in our analysis. We now provide regret analysis for the algorithms and specify the choice of the parameters $k$ and $n_m$.

### 3.1 Regret Analysis for Algorithm 1

The regret analysis closely follows that of the UCB algorithm described in Lattimore and Szepesvári (2018). Without loss of generality we assume that the first arm is optimal, so $\mu_1 = \max_i \mu_i$, and define $\Delta_i = \mu_1 - \mu_i$. We assume that the algorithm runs for $n$ phases. Let $T_i(n)$ denote the number of times arm $i$ is played up to phase $n$. We bound $\mathbb{E}[T_i(n)]$ for each sub-optimal arm $i$. For this we show that the following "good event" holds with high probability:

$$G_i = \Big\{\mu_1 \le \min_{m \in [n]} B_1(m,\delta)\Big\} \cap \bigg\{\hat{\mu}_i(u_i) + \sqrt{\tfrac{2\log(1/\delta)}{T_i(u_i)}} \le \mu_1\bigg\}$$

Here, $G_i$ is the event that $\mu_1$ is never underestimated by the upper confidence bound of the first arm, while at the same time the upper confidence bound for the mean of arm $i$, after $u_i$ phases of observations are taken from this arm, is below the payoff of the optimal arm. We claim that if $G_i$ occurs, then $T_i(n) \le T_i(u_i)$. Since we always have $T_i(n) \le T$, the following holds:

$$\mathbb{E}[T_i(n)] = \mathbb{E}[\mathbb{1}[G_i]\, T_i(n)] + \mathbb{E}[\mathbb{1}[G_i^c]\, T_i(n)] \le T_i(u_i) + P(G_i^c)\, T$$

Next we bound the probability of the complement event $G_i^c$.

###### Lemma 1.

If $\bar{\mu}_i(m)$ is an unbiased estimator of $\mu_i$ for phase $m$, then the error of the estimated mean $\hat{\mu}_i(m)$ can be bounded as $|\hat{\mu}_i(m) - \bar{\mu}_i(m)| \le d/k$, where $S_i(m)$ is the set of time steps when arm $i$ was played and $T_i(m) = |S_i(m)|$.

The proof of Lemma 1 follows from the fact that in each phase the missing and extra reward components can be paired up, and the difference within each pair is at most one. We use Lemma 1 to bound $P(G_i^c)$ and obtain an upper bound on the number of times a sub-optimal arm is played.

###### Theorem 1.

For the choice of $k$ in (5), the regret of Algorithm 1 is bounded by $O\big(\sqrt{TK\log(T)} + dK\sqrt{T\log(T)}\big)$.
The proof of Theorem 1 proceeds by plugging the upper bound on $\mathbb{E}[T_i(n)]$ into the UCB regret analysis. We refer the reader to Appendix A for the detailed regret analysis of Algorithm 1 and the proofs of Lemma 1 and Theorem 1.

### 3.2 Regret Analysis for Algorithm 2

The regret analysis for this algorithm is adapted from Appendix F of Pike-Burke et al. (2018). We first present a lemma to bound the difference between estimators for the arm mean reward.

###### Lemma 2.

If $\tilde{\mu}_j(m)$ is an unbiased estimator for $\mu_j$ in phase $m$, then we can bound its difference from the estimator $\bar{X}_j(m)$ used in Algorithm 2 as $|\bar{X}_j(m) - \tilde{\mu}_j(m)| \le \frac{m(d-1)}{n_m}$, where each arm is pulled $n_m$ times up to phase $m$ and $S_j(m)$ is the set of time steps when arm $j$ was played.

The proof proceeds with a similar argument as for Lemma 1. Details can be found in Appendix B.

Choice of $n_m$: We use Algorithm 2 with $n_m$ of order $\log(T\tilde{\Delta}_m^2)/\tilde{\Delta}_m^2$, with a correction term for the delay $d$; the exact expression is given in Appendix B. We use Lemma 2 to bound the probability that a suboptimal arm still remains in the set of active arms.

###### Lemma 3.

For the above choice of $n_m$, the estimates satisfy the following property: for every arm $j$ and phase $m$, with high probability, $|\bar{X}_j(m) - \mu_j| \le \tilde{\Delta}_m/2$.

Now we present a theorem to upper bound the regret of Algorithm 2 using Lemma 3.

###### Theorem 2.

For the above choice of $n_m$, the regret of Algorithm 2 is sub-linear in $T$.

The proof of Theorem 2 closely follows the analysis of improved UCB from Auer and Ortner (2010), using Lemma 3. Each sub-optimal arm is eliminated by some phase, which contributes a regret term governed by $n_m$. We use our choice of $n_m$ and sum over all sub-optimal arms to get the stated sub-linear regret bound. We refer the reader to Appendix B for the detailed regret analysis of Algorithm 2 and the proofs of Lemmas 2 and 3 and Theorem 2.

## 4 Conclusion and Future Work

We have studied an extension of the multi-armed bandit problem to stochastic bandits with delayed, composite, anonymous feedback.
This setting is considerably more difficult, since the rewards are stochastically generated and spread adversarially over future time steps, which makes it hard to identify the optimal arm. We show that, surprisingly, it is possible to develop a simple phase-based extension of the standard UCB algorithm that performs comparably to algorithms for the simpler delayed feedback setting, where the assignment of rewards to arm plays is known. We suggest two possible directions for extending our work: first, the case where the delay parameter is not perfectly known, and second, extending the setting to contextual bandits.

## References

• P. Auer, N. Cesa-Bianchi, and P. Fischer (2002) Finite-time analysis of the multiarmed bandit problem. Machine Learning 47 (2-3), pp. 235–256.
• P. Auer and R. Ortner (2010) UCB revisited: improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica 61 (1), pp. 55–65.
• N. Cesa-Bianchi, C. Gentile, Y. Mansour, and A. Minora (2016) Delay and cooperation in nonstochastic bandits. In Proceedings of the 29th Conference on Learning Theory (COLT 2016), New York, USA, pp. 605–622.
• N. Cesa-Bianchi, C. Gentile, and Y. Mansour (2018) Nonstochastic bandits with composite anonymous feedback. In Proceedings of the 31st Conference on Learning Theory, PMLR Vol. 75, pp. 750–773.
• O. Dekel, J. Ding, T. Koren, and Y. Peres (2014) Online learning with composite loss functions. CoRR abs/1405.4471.
• T. Desautels, A. Krause, and J. W. Burdick (2014) Parallelizing exploration-exploitation tradeoffs in Gaussian process bandit optimization.
Journal of Machine Learning Research 15, pp. 4053–4103.
• M. Dudík, D. J. Hsu, S. Kale, N. Karampatziakis, J. Langford, L. Reyzin, and T. Zhang (2011) Efficient optimal learning for contextual bandits. CoRR abs/1106.2369.
• S. Garrabrant, N. Soares, and J. Taylor (2016) Asymptotic convergence in online learning with unbounded delays. CoRR abs/1604.05280.
• P. Joulani, A. György, and C. Szepesvári (2013) Online learning under delayed feedback. In Proceedings of the 30th International Conference on Machine Learning, PMLR Vol. 28, Atlanta, Georgia, USA, pp. 1453–1461.
• P. Joulani, A. György, and C. Szepesvári (2016) Delay-tolerant online convex optimization: unified analysis and adaptive-gradient algorithms. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence.
• J. Langford, A. J. Smola, and M. Zinkevich (2009) Slow learners are fast. In Proceedings of the 22nd International Conference on Neural Information Processing Systems (NIPS'09), pp. 2331–2339.
• T. Lattimore and C. Szepesvári (2018) Bandit Algorithms. Cambridge University Press.
• T. Mandel, Y. Liu, E. Brunskill, and Z. Popovic (2015) The queue method: handling delay, heuristics, prior data, and evaluation in bandits. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, Texas, USA, pp. 2849–2856.
• C. Mesterharm (2005) On-line learning with delayed label feedback. In ALT.
• G. Neu, A. György, C. Szepesvári, and A. Antos (2010) Online Markov decision processes under bandit feedback.
In Advances in Neural Information Processing Systems 23 (NIPS 2010), Vancouver, British Columbia, Canada, pp. 1804–1812.
• C. Pike-Burke, S. Agrawal, C. Szepesvari, and S. Grunewalder (2018) Bandits with delayed, aggregated anonymous feedback. In Proceedings of the 35th International Conference on Machine Learning, PMLR Vol. 80, Stockholm, Sweden, pp. 4105–4113.
• K. Quanrud and D. Khashabi (2015) Online learning with adversarial delays. In Advances in Neural Information Processing Systems 28, pp. 1270–1278.
• C. Vernade, O. Cappé, and V. Perchet (2017) Stochastic bandit models for delayed conversions. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence (UAI 2017), Sydney, Australia.
• M. J. Weinberger and E. Ordentlich (2006) On delayed prediction of individual sequences. IEEE Transactions on Information Theory 48 (7), pp. 1959–1976.

## Appendix A Regret Analysis for Algorithm 1

Let $\mu_1, \ldots, \mu_K$ represent the means of the reward distributions $\nu_1, \ldots, \nu_K$. Without loss of generality we assume that the first arm is optimal, so that $\mu_1 = \max_i \mu_i$. We define $\Delta_i = \mu_1 - \mu_i$. Algorithm 1 runs in phases of pulling the same arm for $k$ time steps, and thus the regret over $n$ phases can be written as

$$R_n = \sum_{i=1}^{K} \Delta_i\, \mathbb{E}[T_i(n)] \quad (1)$$

where $T_i(n)$ denotes the number of times arm $i$ was played in $n$ phases. We bound $\mathbb{E}[T_i(n)]$ for each sub-optimal arm $i$. Let $G_i$ be a good event for each arm $i$, defined as follows:

$$G_i = \Big\{\mu_1 \le \min_{m \in [n]} B_1(m,\delta)\Big\} \cap \bigg\{\hat{\mu}_i(u_i) + \sqrt{\tfrac{2\log(1/\delta)}{T_i(u_i)}} \le \mu_1\bigg\}$$

where $u_i$ is a constant to be chosen later.
So $G_i$ is the event that $\mu_1$ is never underestimated by the upper confidence bound of the first arm, while at the same time the upper confidence bound for the mean of arm $i$, after $u_i$ phases of observations are taken from this arm, is below the payoff of the optimal arm. Two things are shown:

• If $G_i$ occurs, then $T_i(n) \le T_i(u_i)$.
• The complement event $G_i^c$ occurs with low probability.

Since we always have $T_i(n) \le T$, the following holds:

$$\mathbb{E}[T_i(n)] = \mathbb{E}[\mathbb{1}[G_i]\, T_i(n)] + \mathbb{E}[\mathbb{1}[G_i^c]\, T_i(n)] \le T_i(u_i) + P(G_i^c)\, T \quad (2)$$

Next, we assume that $G_i$ holds and show that $T_i(n) \le T_i(u_i)$. Suppose $T_i(n) > T_i(u_i)$. Then arm $i$ was played more than $T_i(u_i)$ times over the $n$ phases, so there must exist a phase in which arm $i$ was selected after already having at least $T_i(u_i)$ plays. But by the definition of $G_i$, the upper confidence bound of arm $i$ at that point is at most $\mu_1$, which is at most the upper confidence bound of arm 1. Hence arm $i$ would not have been selected, which is a contradiction. Therefore if $G_i$ occurs, then $T_i(n) \le T_i(u_i)$.

Now we bound $P(G_i^c)$. The complement event is:

$$G_i^c = \underbrace{\Big\{\mu_1 \ge \min_{m \in [n]} B_1(m,\delta)\Big\}}_{I} \cup \underbrace{\bigg\{\hat{\mu}_i(u_i) + \sqrt{\tfrac{2\log(1/\delta)}{T_i(u_i)}} \ge \mu_1\bigg\}}_{II} \quad (3)$$

Using a union bound, the probability of term $I$ of $G_i^c$ can be upper bounded as

$$P(I) = P\Big(\mu_1 \ge \min_{m \in [n]} B_1(m,\delta)\Big) \le \sum_{m=1}^{n} P\bigg(\mu_1 \ge \hat{\mu}_1(m) + \sqrt{\tfrac{2\log(1/\delta)}{T_1(m)}}\bigg)$$

###### Proof of Lemma 1.

Consider the following estimator for the mean of the rewards generated from arm $i$ up to phase $m$:

$$\bar{\mu}_i(m) = \frac{\sum_{t \in S_i(m)} R_t(i)}{T_i(m)}$$

where $T_i(m) = |S_i(m)|$. It can be seen that $\mathbb{E}[\bar{\mu}_i(m)] = \mu_i$. If arm $i$ was played in phase $m$, then we have

$$\bigg|\sum_{t \in S_i(m) \setminus S_i(m-1)} \big(R_t(i) - X_t\big)\bigg| \le d$$

where $d$ is the delay parameter over which the rewards are distributed, because the missing and extra reward components can be paired up and the difference within each pair is at most one. After $m$ phases, suppose arm $i$ was played in $z$ of them. Then we can bound

$$|\hat{\mu}_i(m) - \bar{\mu}_i(m)| \le \frac{d \times z}{k \times z} \le \frac{d}{k} \quad (4)$$

since in each phase an arm is pulled $k$ times. This gives $\hat{\mu}_i(m) \ge \bar{\mu}_i(m) - d/k$ and $\hat{\mu}_i(m) \le \bar{\mu}_i(m) + d/k$. ∎

Plugging this into our bound for $P(I)$ gives

$$P(I) \le \sum_{m=1}^{n} P\bigg(\mu_1 \ge \bar{\mu}_1(m) - \frac{d}{k} + \sqrt{\tfrac{2\log(1/\delta)}{T_1(m)}}\bigg)$$

We then choose $k$ such that the following holds for all $m$:

$$-\frac{d}{k} + \sqrt{\tfrac{2\log(1/\delta)}{T_1(m)}} \ge a\sqrt{\tfrac{2\log(1/\delta)}{T_1(m)}}$$

Since $T_1(m) \le T$, $k$ is selected as

$$k = \frac{d}{1-a}\sqrt{\frac{T}{2\log(1/\delta)}} \quad (5)$$

Using this choice of $k$ and the fact that the rewards are obtained from distributions which are subgaussian, we bound $P(I)$ further as follows:

$$P(I) \le \sum_{m=1}^{n} P\bigg(\mu_1 \ge \bar{\mu}_1(m) + a\sqrt{\tfrac{2\log(1/\delta)}{T_1(m)}}\bigg) \le \sum_{m=1}^{n} \delta^{a^2} = n\,\delta^{a^2} \quad (6)$$

The next step is to bound the probability of term $II$ in (3).
Using (4) we get

$$P(II) = P\bigg(\hat{\mu}_i(u_i) + \sqrt{\tfrac{2\log(1/\delta)}{T_i(u_i)}} \ge \mu_1\bigg) = P\bigg(\hat{\mu}_i(u_i) + \sqrt{\tfrac{2\log(1/\delta)}{T_i(u_i)}} \ge \mu_i + \Delta_i\bigg) \le P\bigg(\bar{\mu}_i(u_i) + \frac{d}{k} + \sqrt{\tfrac{2\log(1/\delta)}{T_i(u_i)}} \ge \mu_i + \Delta_i\bigg)$$

so that

$$P(II) \le P\bigg(\bar{\mu}_i(u_i) - \mu_i \ge \Delta_i - \frac{d}{k} - \sqrt{\tfrac{2\log(1/\delta)}{T_i(u_i)}}\bigg)$$

Because of our choice of $k$ in (5) we have

$$\frac{d}{k} \le (1-a)\sqrt{\tfrac{2\log(1/\delta)}{T}} \le (1-a)\sqrt{\tfrac{2\log(1/\delta)}{T_i(u_i)}} \le \sqrt{\tfrac{2\log(1/\delta)}{T_i(u_i)}}$$

Now we show that $u_i$ can be chosen such that the following inequality holds:

$$\Delta_i - \frac{d}{k} - \sqrt{\tfrac{2\log(1/\delta)}{T_i(u_i)}} > c\,\Delta_i \;\Longleftarrow\; (1-c)\,\Delta_i > 2\sqrt{\tfrac{2\log(1/\delta)}{T_i(u_i)}} \;\Longleftrightarrow\; T_i(u_i) > \frac{8\log(1/\delta)}{(1-c)^2\Delta_i^2} \quad (7)$$

We assume that arm $i$ is played in $u_i$ phases, hence $T_i(u_i) = k u_i$, and $u_i$ can be chosen to satisfy (7). Using this choice and the sub-gaussian assumption we can bound

$$P(II) \le P\big(\bar{\mu}_i(u_i) - \mu_i > c\,\Delta_i\big) \le \exp\bigg(\!-\frac{c^2\Delta_i^2\, T_i(u_i)}{2}\bigg) \quad (8)$$

Taking (6) and (8) together, we have

$$P(G_i^c) \le n\,\delta^{a^2} + \exp\bigg(\!-\frac{c^2\Delta_i^2\, T_i(u_i)}{2}\bigg)$$

When substituted into (2) this gives

$$\mathbb{E}[T_i(n)] \le T_i(u_i) + T\bigg(n\,\delta^{a^2} + \exp\Big(\!-\frac{c^2\Delta_i^2\, T_i(u_i)}{2}\Big)\bigg) \quad (9)$$

With the choice $\delta = 1/T$ and the choice of $T_i(u_i)$ from (7), equation (9) leads to

$$\mathbb{E}[T_i(n)] \le \frac{16\log(T)}{a^2(1-c)^2\Delta_i^2} + k + 1 + T^{1 - \frac{16c^2}{(1-c)^2 a^2}} = \frac{16\log(T)}{a^2(1-c)^2\Delta_i^2} + \frac{d}{1-a}\sqrt{\frac{T}{2\log(1/\delta)}} + 1 + T^{1 - \frac{16c^2}{(1-c)^2 a^2}} \quad (10)$$

All that remains is to choose $a$ and $c$. We choose them, somewhat arbitrarily, so that the last term in (10) does not contribute a polynomial dependence on $T$. This leads to

$$\mathbb{E}[T_i(n)] \le \frac{289\log(T)}{4\Delta_i^2} + \frac{d}{2}\sqrt{T\log(T)} + 2 \quad (11)$$

###### Proof of Theorem 1.

From (11) we have that for each sub-optimal arm $i$ we can bound

$$\mathbb{E}[T_i(n)] \le \frac{289\log(T)}{4\Delta_i^2} + \frac{d}{2}\sqrt{T\log(T)} + 2$$

Therefore, using the basic regret decomposition again, we have

$$R_n = \sum_{i=1}^{K}\Delta_i\, \mathbb{E}[T_i(n)] = \sum_{i:\Delta_i<\Delta}\Delta_i\,\mathbb{E}[T_i(n)] + \sum_{i:\Delta_i\ge\Delta}\Delta_i\,\mathbb{E}[T_i(n)] \le T\Delta + \sum_{i:\Delta_i\ge\Delta}\bigg(\frac{289\log(T)}{4\Delta_i} + \frac{d\,\Delta_i}{2}\sqrt{T\log(T)} + 2\Delta_i\bigg) \le T\Delta + \frac{289K\log(T)}{4\Delta} + \Big(\frac{d}{2}\sqrt{T\log(T)} + 2\Big)\sum_i\Delta_i \le 17\sqrt{TK\log(T)} + \frac{d\sum_i\Delta_i}{2}\sqrt{T\log(T)} + 2\sum_i\Delta_i$$

where the first inequality follows because $\Delta_i \mathbb{E}[T_i(n)] \le \Delta\, T$ for arms with $\Delta_i < \Delta$, and the last by choosing $\Delta$ to balance the first two terms. The term $\sum_i \Delta_i$ can be upper bounded by $K$ since each $\Delta_i \le 1$. Thus we get the regret bound $O\big(\sqrt{TK\log(T)} + dK\sqrt{T\log(T)}\big)$. ∎

## Appendix B Regret Analysis for Algorithm 2

###### Proof of Lemma 2.
Since the rewards are spread over $d$ time steps in an adversarial way, in the worst case the first $d-1$ observations collected for arm $j$ in a phase contain components from previously played arms, and similarly the components of the last $d-1$ pulls seep into the next arm's observations. So, for each of the first $m$ phases, the discrepancy between generated and observed rewards for arm $j$ is at most $d-1$, because we can pair up the missing and extra reward components, and within each pair the difference is at most one. Summing over the $m$ phases and dividing by $n_m = |S_j(m)|$,

$$\frac{1}{n_m}\bigg|\sum_{t \in S_j(m)} R_t(j) - \sum_{t \in S_j(m)} X_t\bigg| \le \frac{m(d-1)}{n_m} \quad (13)$$

Defining $\tilde{\mu}_j(m)$ as the average of the generated rewards $R_t(j)$ over $S_j(m)$, and recalling that $\bar{X}_j(m)$ is the average of the observations $X_t$ over $S_j(m)$, (13) gives the claim. ∎

###### Proof of Lemma 3.

For any $a > 0$,

$$P\big(|\bar{X}_j(m) - \mu_j| > a\big) \le P\big(|\bar{X}_j(m) - \tilde{\mu}_j(m)| + |\tilde{\mu}_j(m) - \mu_j| > a\big) \le P\bigg(|\tilde{\mu}_j(m) - \mu_j| > a - \frac{m(d-1)}{n_m}\bigg) \le 2\exp\bigg\{\!-2 n_m \Big(a - \frac{m(d-1)}{n_m}\Big)^2\bigg\}$$

where the first inequality is the triangle inequality and the last is Hoeffding's inequality, since the $R_t(j)$ are independent samples from $\nu_j$, the reward distribution of arm $j$. In particular, choosing $a = \tilde{\Delta}_m/2$ and setting

$$n_m = \Bigg\lceil \frac{1}{2\tilde{\Delta}_m^2}\bigg(\sqrt{\log(T\tilde{\Delta}_m^2)} + \sqrt{\log(T\tilde{\Delta}_m^2) + 4\tilde{\Delta}_m\, m(d-1)}\bigg)^{\!2}\, \Bigg\rceil$$

ensures that the stated probability bound holds. ∎

###### Proof of Theorem 2.

Using Theorem 32 from Pike-Burke et al. (2018), which uses the analysis of improved UCB from Auer and Ortner (2010), we substitute the value of $n_m$ to obtain a bound on the regret. In particular, optimizing the resulting bound gives a worst-case regret which is sub-linear in $T$. ∎
http://mathhelpforum.com/advanced-statistics/50234-binomial-probability.html
# Math Help - binomial probability

1. ## binomial probability

Find the binomial probability: P(25 < B(45, 0.2) < 35)

I tried: =BINOMDIST(34,45,0.2,1)-BINOMDIST(26,45,0.2,1)

And it comes out as error. Can someone please tell me what I'm doing wrong.

__________________

In a new sheet, enter the numbers 0 through n = 30 in column A. Type 0 in A1, 1 in A2 and 2 in A3, then highlight the three cells, and drag the small square in the bottom right corner until cell A31, which should read 30. In cell B1 calculate P(X = 0), where X is a binomial random variable with n = 30 and p = 0.4. Use the command =BINOMDIST(A1,30,0.4,FALSE). Copy down to cell B31.

Can someone please explain to me what I do in cell B1? Like what exactly am I supposed to type?

2. Originally Posted by brumby_3

The fourth parameter in BINOMDIST should be TRUE for the cumulative distribution.

RonL

3. It still doesn't work. I get the same answer, which is 4.92803E-09?????? lol And same with the second half of my question - I keep getting a funny looking number with an E in it.

4. Originally Posted by brumby_3

Ahh..
you said it comes out as an error, not that you thought the answer was wrong! What makes you think that it is wrong? The mean number of successes in 45 trials with p = 0.2 is 9, and 25 is a long way above 9 - so far, in fact, that for most purposes the cumulative probability of 25 or fewer successes is indistinguishable from 1. (Note that BINOMDIST runs out of precision before you get to 34 in this case, and the cumulative value returned is greater than 1, but that should not matter.)

Also: P(25 < B(45, 0.2) < 35) = Q(34) - Q(25), where Q denotes the cumulative distribution, not:

P(25 < B(45, 0.2) < 35) = Q(34) - Q(26)

which is what you have.

RonL

5. So it's right? Even with the E- in it? O_O

6. Originally Posted by brumby_3

That is not an error, it is standard exponential notation (the same as most calculators and all numerical programming languages): 4.92803E-09 means $4.92803 \times 10^{-9}$.

RonL
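The same computation can be checked outside Excel. A small Python sketch (our addition, not part of the thread) sums the binomial pmf directly over the open interval, i.e. P(25 < X < 35) = P(26 ≤ X ≤ 34) = F(34) − F(25), confirming the answer is a tiny but valid probability rather than an error:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# X ~ B(45, 0.2); P(25 < X < 35) = sum of P(X = k) for k = 26..34
prob = sum(binom_pmf(k, 45, 0.2) for k in range(26, 35))
print(prob)  # a very small positive number, shown in exponential notation
```

The original formula's lower bound was off by one: subtracting the cumulative at 26 drops the k = 26 term, which is exactly the Q(34) − Q(26) vs Q(34) − Q(25) distinction RonL points out.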
https://www.nag.com/numeric/py/nagdoc_latest/_modules/naginterfaces/library/interp.html
# -*- coding: utf-8 -*-
r"""
Module Summary
--------------
Interfaces for the NAG Mark 28.5 interp Chapter.

interp - Interpolation

This module is concerned with the interpolation of a function of one or more
variables. When provided with the value of the function (and possibly one or
more of its lowest-order derivatives) at each of a number of values of the
variable(s), the NAG Library functions provide either an interpolating
function or an interpolated value. For some of the interpolating functions,
there are supporting NAG Library functions to evaluate, differentiate or
integrate them.

Functionality Index
-------------------

**Derivative** of interpolant

- from :meth:dim1_monotonic: :meth:dim1_monotonic_deriv
- from :meth:dim2_scat_shep: :meth:dim2_scat_shep_eval
- from :meth:dim3_scat_shep: :meth:dim3_scat_shep_eval
- from :meth:dim4_scat_shep: :meth:dim4_scat_shep_eval
- from :meth:dim5_scat_shep: :meth:dim5_scat_shep_eval
- from :meth:dimn_scat_shep: :meth:dimn_scat_shep_eval

**Evaluation** of interpolant

- from :meth:dim1_monotonic: :meth:dim1_monotonic_eval
- from :meth:dim1_ratnl: :meth:dim1_ratnl_eval
- from :meth:dim2_scat: :meth:dim2_scat_eval
- from :meth:dim2_scat_shep: :meth:dim2_scat_shep_eval
- from :meth:dim3_scat_shep: :meth:dim3_scat_shep_eval
- from :meth:dim4_scat_shep: :meth:dim4_scat_shep_eval
- from :meth:dim5_scat_shep: :meth:dim5_scat_shep_eval
- from :meth:dimn_scat_shep: :meth:dimn_scat_shep_eval
- from triangulation, from :meth:dim2_triangulate: :meth:dim2_triang_bary_eval
- using variables computed by :meth:dim1_monconv_disc: :meth:dim1_monconv_eval

**Extrapolation**, one variable

- monotonic convex: :meth:dim1_monconv_disc
- piecewise cubic: :meth:dim1_monotonic
- polynomial

  - data with or without derivatives: :meth:dim1_cheb
  - general data: :meth:dim1_aitken

- rational function: :meth:dim1_ratnl

**Integration (definite)** of interpolant

- from :meth:dim1_monotonic: :meth:dim1_monotonic_intg

**Interpolated values**

- :math:d variables

  - from interpolant from :meth:dimn_scat_shep: :meth:dimn_scat_shep_eval
  - modified Shepard method, Linear or Cubic: :meth:dimn_grid

- five variables, from interpolant from :meth:dim5_scat_shep: :meth:dim5_scat_shep_eval
- four variables, from interpolant from :meth:dim4_scat_shep: :meth:dim4_scat_shep_eval
- one variable

  - from interpolant from :meth:dim1_monotonic: :meth:dim1_monotonic_eval
  - from interpolant from :meth:dim1_monotonic (including derivative): :meth:dim1_monotonic_deriv
  - from polynomial

    - equally spaced data: :meth:dim1_everett
    - general data: :meth:dim1_aitken

  - from rational function: :meth:dim1_ratnl_eval
  - using variables computed by :meth:dim1_monconv_disc: :meth:dim1_monconv_eval

- three variables, from interpolant from :meth:dim3_scat_shep: :meth:dim3_scat_shep_eval
- two variables

  - barycentric, from triangulation from :meth:dim2_triangulate: :meth:dim2_triang_bary_eval
  - from interpolant from :meth:dim2_scat: :meth:dim2_scat_eval
  - from interpolant from :meth:dim2_scat_shep: :meth:dim2_scat_shep_eval

**Interpolating function**

- :math:d variables, modified Shepard method: :meth:dimn_scat_shep
- five variables, modified Shepard method: :meth:dim5_scat_shep
- four variables, modified Shepard method: :meth:dim4_scat_shep
- one variable

  - cubic spline: :meth:dim1_spline
  - monotonic convex piecewise polynomial: :meth:dim1_monconv_disc
  - other piecewise polynomial: :meth:dim1_monotonic
  - polynomial, data with or without derivatives: :meth:dim1_cheb
  - rational function: :meth:dim1_ratnl

- three variables, modified Shepard method: :meth:dim3_scat_shep
- two variables

  - bicubic spline: :meth:dim2_spline_grid
  - modified Shepard method: :meth:dim2_scat_shep
  - other piecewise polynomial: :meth:dim2_scat
  - triangulation: :meth:dim2_triangulate

For full information please refer to the NAG Library document

https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01intro.html
"""
# NAG Copyright 2017-2022.
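The first routine documented below, dim1_aitken, is based on Aitken's scheme of successive linear interpolations. As a pure-Python illustration of that classical algorithm (not the NAG implementation; the name ``aitken`` is purely illustrative):

```python
def aitken(xs, ys, x):
    """Interpolate at x through the points (xs[i], ys[i]) by Aitken's
    successive linear interpolations (a Neville-style table, updated in place)."""
    p = list(ys)  # p[i] holds the degree-k interpolant through xs[i..i+k]
    n = len(xs)
    for k in range(1, n):
        for i in range(n - k):
            # linear interpolation between two overlapping lower-degree interpolants
            p[i] = ((x - xs[i + k]) * p[i] - (x - xs[i]) * p[i + 1]) / (xs[i] - xs[i + k])
    return p[0]

# Extrapolating the quadratic through (0,0), (1,1), (2,4) to x = 3 recovers x**2 = 9.
value = aitken([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 3.0)
```

Because each stage is a linear interpolation between neighbouring estimates, the intermediate table entries (which dim1_aitken returns in c) give a ready estimate of the accuracy of the final value.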
[docs]def dim1_aitken(a, b, x): r""" dim1_aitken interpolates a function of one variable at a given point :math:x from a table of function values :math:y_i evaluated at equidistant or non-equidistant points :math:x_i, for :math:\textit{i} = 1,2,\ldots,n+1, using Aitken's technique of successive linear interpolations. .. _e01aa-py2-py-doc: For full information please refer to the NAG Library document for e01aa https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01aaf.html .. _e01aa-py2-py-parameters: **Parameters** **a** : float, array-like, shape :math:\left(n+1\right) :math:\mathrm{a}[\textit{i}-1] must contain the :math:x-component of the :math:\textit{i}\ th data point, :math:x_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,n+1. **b** : float, array-like, shape :math:\left(n+1\right) :math:\mathrm{b}[\textit{i}-1] must contain the :math:y-component (function value) of the :math:\textit{i}\ th data point, :math:y_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,n+1. **x** : float The point :math:x at which the interpolation is required. Note that :math:\mathrm{x} may lie outside the interval defined by the minimum and maximum values in :math:\mathrm{a}, in which case an extrapolated value will be computed; extrapolated results should be treated with considerable caution since there is no information on the behaviour of the function outside the defined interval. **Returns** **a** : float, ndarray, shape :math:\left(n+1\right) :math:\mathrm{a}[\textit{i}-1] contains the value :math:x_{\textit{i}}-x, for :math:\textit{i} = 1,2,\ldots,n+1. **b** : float, ndarray, shape :math:\left(n+1\right) The contents of :math:\mathrm{b} are unspecified. 
**c** : float, ndarray, shape :math:\left(n\times \left(n+1\right)/2\right) :math:\mathrm{c}[0],\ldots,\mathrm{c}[n-1] contain the first set of linear interpolations, :math:\mathrm{c}[n],\ldots,\mathrm{c}[2\times n-2] contain the second set of linear interpolations, :math:\mathrm{c}[2n-1],\ldots,\mathrm{c}[3\times n-4] contain the third set of linear interpolations, :math:\vdots :math:\mathrm{c}[{n\times \left(n+1\right)/2}-1] contains the interpolated function value at the point :math:x. .. _e01aa-py2-py-errors: **Raises** **NagValueError** (errno :math:6) On entry, error in parameter :math:\textit{n}. Constraint: :math:n > 0. .. _e01aa-py2-py-notes: **Notes** dim1_aitken interpolates a function of one variable at a given point :math:x from a table of values :math:x_i and :math:y_i, for :math:i = 1,2,\ldots,n+1 using Aitken's method (see Fröberg (1970)). The intermediate values of linear interpolations are stored to enable an estimate of the accuracy of the results to be made. .. _e01aa-py2-py-references: **References** Fröberg, C E, 1970, Introduction to Numerical Analysis, Addison--Wesley """ raise NotImplementedError [docs]def dim1_everett(p, a): r""" dim1_everett interpolates a function of one variable at a given point :math:x from a table of function values evaluated at equidistant points, using Everett's formula. .. _e01ab-py2-py-doc: For full information please refer to the NAG Library document for e01ab https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01abf.html .. _e01ab-py2-py-parameters: **Parameters** **p** : float The point :math:p at which the interpolated function value is required, i.e., :math:p = \left(x-x_0\right)/h with :math:{-1.0} < p < 1.0. **a** : float, array-like, shape :math:\left(2\times n\right) :math:\mathrm{a}[\textit{i}-1] must be set to the function value :math:y_{{\textit{i}-n}}, for :math:\textit{i} = 1,2,\ldots,2n. 
**Returns** **a** : float, ndarray, shape :math:\left(2\times n\right) The contents of :math:\mathrm{a} are unspecified. **g** : float, ndarray, shape :math:\left(2\times n+1\right) The array contains .. rst-class:: nag-rules-none nag-align-left +------------------------+------------------------------------------------------------------------------+ |:math:y_0 |in :math:\mathrm{g}[0] | +------------------------+------------------------------------------------------------------------------+ |:math:y_1 |in :math:\mathrm{g}[1] | +------------------------+------------------------------------------------------------------------------+ |:math:\delta^{{2r}}y_0|in :math:\mathrm{g}[2r] | +------------------------+------------------------------------------------------------------------------+ |:math:\delta^{{2r}}y_1|in :math:\mathrm{g}[2\textit{r}+1], for :math:\textit{r} = 1,2,\ldots,n-1.| +------------------------+------------------------------------------------------------------------------+ The interpolated function value :math:y_p is stored in :math:\mathrm{g}[2n]. .. _e01ab-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:\mathrm{p} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{p}\leq 1.0. (errno :math:1) On entry, :math:\mathrm{p} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{p}\geq {-1.0}. (errno :math:2) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n > 0. .. _e01ab-py2-py-notes: **Notes** dim1_everett interpolates a function of one variable at a given point .. math:: x = x_0+ph\text{,} where :math:-1\leq p\leq 1 and :math:h is the interval of differencing, from a table of values :math:x_m = x_0+mh and :math:y_m where :math:m = -\left(n-1\right),-\left(n-2\right),\ldots,-1,0,1,\ldots,n. The formula used is that of Fröberg (1970), neglecting the remainder term: .. 
math:: y_p = \sum_{{r = 0}}^{{n-1}}\left(\frac{{1-p+r}}{{2r+1}}\right)\delta^{{2r}}y_0+\sum_{{r = 0}}^{{n-1}}\left(\frac{{p+r}}{{2r+1}}\right)\delta^{{2r}}y_1\text{.} The values of :math:\delta^{{2r}}y_0 and :math:\delta^{{2r}}y_1 are stored on exit from the function in addition to the interpolated function value :math:y_p. .. _e01ab-py2-py-references: **References** Fröberg, C E, 1970, Introduction to Numerical Analysis, Addison--Wesley """ raise NotImplementedError [docs]def dim1_cheb(xmin, xmax, x, y, ip, lwrk, liwrk, itmin=0, itmax=0): r""" dim1_cheb constructs the Chebyshev series representation of a polynomial interpolant to a set of data which may contain derivative values. .. _e01ae-py2-py-doc: For full information please refer to the NAG Library document for e01ae https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01aef.html .. _e01ae-py2-py-parameters: **Parameters** **xmin** : float The lower and upper end points, respectively, of the interval :math:\left[x_{\mathrm{min}}, x_{\mathrm{max}}\right]. If they are not determined by your problem, it is recommended that they be set respectively to the smallest and largest values among the :math:x_i. **xmax** : float The lower and upper end points, respectively, of the interval :math:\left[x_{\mathrm{min}}, x_{\mathrm{max}}\right]. If they are not determined by your problem, it is recommended that they be set respectively to the smallest and largest values among the :math:x_i. **x** : float, array-like, shape :math:\left(m\right) The value of :math:x_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,m. The :math:\mathrm{x}[i-1] need not be ordered. **y** : float, array-like, shape :math:\left(n\right) The given values of the dependent variable, and derivatives, as follows: The first :math:p_1+1 elements contain :math:y_1,y_1^{\left(1\right)},\ldots,y_1^{\left(p_1\right)} in that order. The next :math:p_2+1 elements contain :math:y_2,y_2^{\left(1\right)},\ldots,y_2^{\left(p_2\right)} in that order. 
:math:\quad \text{ }\quad \vdots The last :math:p_m+1 elements contain :math:y_m,y_m^{\left(1\right)},\ldots,y_m^{\left(p_m\right)} in that order. **ip** : int, array-like, shape :math:\left(m\right) :math:p_{\textit{i}}, the order of the highest-order derivative whose value is given at :math:x_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,m. If the value of :math:y only is given for some :math:x_i then the corresponding value of :math:\mathrm{ip}[i-1] must be zero. **lwrk** : int The dimension of the array :math:\mathrm{wrk}. **liwrk** : int The dimension of the array :math:\mathrm{iwrk}. **itmin** : int, optional Respectively the minimum and maximum number of iterations to be performed by the function (for full details see Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01aef.html#fcomments4>__). Setting :math:\mathrm{itmin} and/or :math:\mathrm{itmax} negative or zero invokes default value(s) of :math:2 and/or :math:10, respectively. The default values will be satisfactory for most problems, but occasionally significant improvement will result from using higher values. **itmax** : int, optional Respectively the minimum and maximum number of iterations to be performed by the function (for full details see Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01aef.html#fcomments4>__). Setting :math:\mathrm{itmin} and/or :math:\mathrm{itmax} negative or zero invokes default value(s) of :math:2 and/or :math:10, respectively. The default values will be satisfactory for most problems, but occasionally significant improvement will result from using higher values. **Returns** **a** : float, ndarray, shape :math:\left(n\right) :math:\mathrm{a}[\textit{i}] contains the coefficient :math:a_{\textit{i}} in the Chebyshev series representation of :math:q\left(x\right), for :math:\textit{i} = 0,\ldots,n-1. 
**wrk** : float, ndarray, shape :math:\left(\mathrm{lwrk}\right) Used as workspace, but see also Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01aef.html#fcomments5>__. **iwrk** : int, ndarray, shape :math:\left(\mathrm{liwrk}\right) Used as workspace, but see also Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01aef.html#fcomments5>__. .. _e01ae-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:\mathrm{liwrk} is too small. :math:\mathrm{liwrk} = \langle\mathit{\boldsymbol{value}}\rangle. Minimum possible dimension: :math:\langle\mathit{\boldsymbol{value}}\rangle. (errno :math:1) On entry, :math:\mathrm{lwrk} is too small. :math:\mathrm{lwrk} = \langle\mathit{\boldsymbol{value}}\rangle. Minimum possible dimension: :math:\langle\mathit{\boldsymbol{value}}\rangle. (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle and :math:m+\mathrm{ip}[0]+\mathrm{ip}[1] + \cdots +\mathrm{ip}[m-1] = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n = m+\mathrm{ip}[0]+\mathrm{ip}[1] + \cdots +\mathrm{ip}[m-1]. (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 1. (errno :math:2) On entry, :math:\textit{I} = \langle\mathit{\boldsymbol{value}}\rangle and :math:\mathrm{ip}[\textit{I}-1] = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{ip}[\textit{I}-1]\geq 0. (errno :math:3) On entry, :math:\textit{I} = \langle\mathit{\boldsymbol{value}}\rangle, :math:\textit{J} = \langle\mathit{\boldsymbol{value}}\rangle and :math:\mathrm{x}[\textit{I}-1] = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{x}[\textit{I}-1]\neq \mathrm{x}[\textit{J}-1]. 
(errno :math:3) On entry, :math:\textit{I} = \langle\mathit{\boldsymbol{value}}\rangle, :math:\mathrm{x}[\textit{I}-1] = \langle\mathit{\boldsymbol{value}}\rangle, :math:\mathrm{xmin} = \langle\mathit{\boldsymbol{value}}\rangle and :math:\mathrm{xmax} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{xmin}\leq \mathrm{x}[\textit{I}-1]\leq \mathrm{xmax}. (errno :math:3) On entry, :math:\mathrm{xmin} = \langle\mathit{\boldsymbol{value}}\rangle and :math:\mathrm{xmax} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{xmin} < \mathrm{xmax}. (errno :math:5) The computation has been terminated because the iterative process appears to be diverging. **Warns** **NagAlgorithmicWarning** (errno :math:4) Not all the performance indices are less than eight times the machine precision, although :math:\mathrm{itmax} iterations have been performed. A more accurate solution may possibly be obtained by increasing :math:\mathrm{itmax} and recalling the function. .. _e01ae-py2-py-notes: **Notes** Let :math:m distinct values :math:x_{\textit{i}} of an independent variable :math:x be given, with :math:x_{\mathrm{min}}\leq x_{\textit{i}}\leq x_{\mathrm{max}}, for :math:\textit{i} = 1,2,\ldots,m. For each value :math:x_i, suppose that the value :math:y_i of the dependent variable :math:y together with the first :math:p_i derivatives of :math:y with respect to :math:x are given. Each :math:p_i must, therefore, be a non-negative integer, with the total number of interpolating conditions, :math:n, equal to :math:m+\sum_{{i = 1}}^mp_i. dim1_cheb calculates the unique polynomial :math:q\left(x\right) of degree :math:n-1 (or less) which is such that :math:q^{\left(\textit{k}\right)}\left(x_{\textit{i}}\right) = y_{\textit{i}}^{\left(\textit{k}\right)}, for :math:\textit{k} = 0,1,\ldots,p_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,m. Here :math:q^{\left(0\right)}\left(x_i\right) means :math:q\left(x_i\right). 
This polynomial is represented in Chebyshev series form in the normalized variable :math:\bar{x}, as follows: .. math:: q\left(x\right) = \frac{1}{2}a_0T_0\left(\bar{x}\right)+a_1T_1\left(\bar{x}\right)+ \cdots +a_{{n-1}}T_{{n-1}}\left(\bar{x}\right)\text{,} where .. math:: \bar{x} = \frac{{2x-x_{\mathrm{min}}-x_{\mathrm{max}}}}{{x_{\mathrm{max}}-x_{\mathrm{min}}}} so that :math:-1\leq \bar{x}\leq 1 for :math:x in the interval :math:x_{\mathrm{min}} to :math:x_{\mathrm{max}}, and where :math:T_i\left(\bar{x}\right) is the Chebyshev polynomial of the first kind of degree :math:i with argument :math:\bar{x}. (The polynomial interpolant can subsequently be evaluated for any value of :math:x in the given range by using :meth:fit.dim1_cheb_eval2 <naginterfaces.library.fit.dim1_cheb_eval2>. Chebyshev series representations of the derivative(s) and integral(s) of :math:q\left(x\right) may be obtained by (repeated) use of :meth:fit.dim1_cheb_deriv <naginterfaces.library.fit.dim1_cheb_deriv> and :meth:fit.dim1_cheb_integ <naginterfaces.library.fit.dim1_cheb_integ>.) The method used consists first of constructing a divided-difference table from the normalized :math:\bar{x} values and the given values of :math:y and its derivatives with respect to :math:\bar{x}. The Newton form of :math:q\left(x\right) is then obtained from this table, as described in Huddleston (1974) and Krogh (1970), with the modification described in Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01aef.html#fcomments2>__. The Newton form of the polynomial is then converted to Chebyshev series form as described in Conversion to Chebyshev Form <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01aef.html#fcomments3>__. Since the errors incurred by these stages can be considerable, a form of iterative refinement is used to improve the solution. This refinement is particularly useful when derivatives of rather high order are given in the data. 
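A Chebyshev series in this form, with its halved leading coefficient and normalized variable, is conveniently evaluated by Clenshaw's recurrence. The following pure-Python sketch illustrates the formula only (within the library itself, :meth:fit.dim1_cheb_eval2 should be used):

```python
def cheb_eval(a, x, xmin, xmax):
    """Evaluate q(x) = a[0]/2*T_0(xbar) + a[1]*T_1(xbar) + ... + a[n-1]*T_{n-1}(xbar)
    by Clenshaw's recurrence, with xbar the value of x mapped onto [-1, 1]."""
    xbar = (2.0 * x - xmin - xmax) / (xmax - xmin)
    b1 = b2 = 0.0
    for coeff in reversed(a[1:]):  # b_k = 2*xbar*b_{k+1} - b_{k+2} + a_k, for k = n-1..1
        b1, b2 = 2.0 * xbar * b1 - b2 + coeff, b1
    return xbar * b1 - b2 + 0.5 * a[0]

# a = [2, 0, 1] represents q = 1 + T_2(xbar) = 2*xbar**2, so q(0.5) on [-1, 1] is 0.5.
q = cheb_eval([2.0, 0.0, 1.0], 0.5, -1.0, 1.0)
```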
In reasonable examples, the refinement will usually terminate with a certain accuracy criterion satisfied by the polynomial (see Accuracy <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01aef.html#accuracy>__). In more difficult examples, the criterion may not be satisfied and refinement will continue until the maximum number of iterations (as specified by the input argument :math:\mathrm{itmax}) is reached. In extreme examples, the iterative process may diverge (even though the accuracy criterion is satisfied): if a certain divergence criterion is satisfied, the process terminates at once. In all cases the function returns the 'best' polynomial achieved before termination. For the definition of 'best' and details of iterative refinement and termination criteria, see Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01aef.html#fcomments4>__. .. _e01ae-py2-py-references: **References** Huddleston, R E, 1974, CDC 6600 routines for the interpolation of data and of data with derivatives, SLL-74-0214, Sandia Laboratories (Reprint) Krogh, F T, 1970, Efficient algorithms for polynomial interpolation and numerical differentiation, Math. Comput. (24), 185--190 """ raise NotImplementedError [docs]def dim1_spline(x, y): r""" dim1_spline determines a cubic spline interpolant to a given set of data. .. _e01ba-py2-py-doc: For full information please refer to the NAG Library document for e01ba https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01baf.html .. _e01ba-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(m\right) :math:\mathrm{x}[\textit{i}-1] must be set to :math:x_{\textit{i}}, the :math:\textit{i}\ th data value of the independent variable :math:x, for :math:\textit{i} = 1,2,\ldots,m. 
**y** : float, array-like, shape :math:\left(m\right) :math:\mathrm{y}[\textit{i}-1] must be set to :math:y_{\textit{i}}, the :math:\textit{i}\ th data value of the dependent variable :math:y, for :math:\textit{i} = 1,2,\ldots,m. **Returns** **lamda** : float, ndarray, shape :math:\left(m+4\right) The value of :math:\lambda_{\textit{i}}, the :math:\textit{i}\ th knot, for :math:\textit{i} = 1,2,\ldots,m+4. **c** : float, ndarray, shape :math:\left(m\right) The coefficient :math:c_{\textit{i}} of the B-spline :math:N_{\textit{i}}\left(x\right), for :math:\textit{i} = 1,2,\ldots,m. The remaining elements of the array are not used. .. _e01ba-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:\textit{lck} = \langle\mathit{\boldsymbol{value}}\rangle and :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\textit{lck}\geq m+4. (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 4. (errno :math:2) On entry, :math:\textit{I} = \langle\mathit{\boldsymbol{value}}\rangle, :math:\mathrm{x}[\textit{I}-1] = \langle\mathit{\boldsymbol{value}}\rangle and :math:\mathrm{x}[\textit{I}-2] = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{x}[\textit{I}-1] > \mathrm{x}[\textit{I}-2]. .. _e01ba-py2-py-notes: **Notes** In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility. dim1_spline determines a cubic spline :math:s\left(x\right), defined in the range :math:x_1\leq x\leq x_m, which interpolates (passes exactly through) the set of data points :math:\left(x_{\textit{i}}, y_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,m, where :math:m\geq 4 and :math:x_1 < x_2 < \cdots < x_m. Unlike some other spline interpolation algorithms, derivative end conditions are not imposed. 
The spline interpolant chosen has :math:m-4 interior knots :math:\lambda_5,\lambda_6,\ldots,\lambda_m, which are set to the values of :math:x_3,x_4,\ldots,x_{{m-2}} respectively. This spline is represented in its B-spline form (see Cox (1975)): .. math:: s\left(x\right) = \sum_{{i = 1}}^mc_iN_i\left(x\right)\text{,} where :math:N_i\left(x\right) denotes the normalized B-spline of degree :math:3, defined upon the knots :math:\lambda_i,\lambda_{{i+1}},\ldots,\lambda_{{i+4}}, and :math:c_i denotes its coefficient, whose value is to be determined by the function. The use of B-splines requires eight additional knots :math:\lambda_1, :math:\lambda_2, :math:\lambda_3, :math:\lambda_4, :math:\lambda_{{m+1}}, :math:\lambda_{{m+2}}, :math:\lambda_{{m+3}} and :math:\lambda_{{m+4}} to be specified; dim1_spline sets the first four of these to :math:x_1 and the last four to :math:x_m. The algorithm for determining the coefficients is as described in Cox (1975) except that :math:QR factorization is used instead of :math:LU decomposition. The implementation of the algorithm involves setting up appropriate information for the related function :meth:fit.dim1_spline_knots <naginterfaces.library.fit.dim1_spline_knots> followed by a call of that function. (See :meth:fit.dim1_spline_knots <naginterfaces.library.fit.dim1_spline_knots> for further details.) Values of the spline interpolant, or of its derivatives or definite integral, can subsequently be computed as detailed in Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01baf.html#fcomments>__. .. _e01ba-py2-py-references: **References** Cox, M G, 1975, An algorithm for spline interpolation, J. Inst. Math. Appl. 
(15), 95--108 Cox, M G, 1977, A survey of numerical methods for data and function approximation, The State of the Art in Numerical Analysis, (ed D A H Jacobs), 627--668, Academic Press """ raise NotImplementedError [docs]def dim1_monotonic(x, f): r""" dim1_monotonic computes a monotonicity-preserving piecewise cubic Hermite interpolant to a set of data points. .. _e01be-py2-py-doc: For full information please refer to the NAG Library document for e01be https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01bef.html .. _e01be-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(n\right) :math:\mathrm{x}[\textit{r}-1] must be set to :math:x_{\textit{r}}, the :math:\textit{r}\ th value of the independent variable (abscissa), for :math:\textit{r} = 1,2,\ldots,n. **f** : float, array-like, shape :math:\left(n\right) :math:\mathrm{f}[\textit{r}-1] must be set to :math:f_{\textit{r}}, the :math:\textit{r}\ th value of the dependent variable (ordinate), for :math:\textit{r} = 1,2,\ldots,n. **Returns** **d** : float, ndarray, shape :math:\left(n\right) Estimates of derivatives at the data points. :math:\mathrm{d}[r-1] contains the derivative at :math:\mathrm{x}[r-1]. .. _e01be-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n\geq 2. (errno :math:2) On entry, :math:r = \langle\mathit{\boldsymbol{value}}\rangle, :math:\mathrm{x}[r-2] = \langle\mathit{\boldsymbol{value}}\rangle and :math:\mathrm{x}[r-1] = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{x}[r-2] < \mathrm{x}[r-1] for all :math:r. .. _e01be-py2-py-notes: **Notes** In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility. 
dim1_monotonic estimates first derivatives at the set of data points :math:\left(x_{\textit{r}}, f_{\textit{r}}\right), for :math:\textit{r} = 1,2,\ldots,n, which determine a piecewise cubic Hermite interpolant to the data, that preserves monotonicity over ranges where the data points are monotonic. If the data points are only piecewise monotonic, the interpolant will have an extremum at each point where monotonicity switches direction. The estimates of the derivatives are computed by a formula due to Brodlie, which is described in Fritsch and Butland (1984), with suitable changes at the boundary points. The function is derived from function PCHIM in Fritsch (1982). Values of the computed interpolant, and of its first derivative and definite integral, can subsequently be computed by calling :meth:dim1_monotonic_eval, :meth:dim1_monotonic_deriv and :meth:dim1_monotonic_intg, as described in Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01bef.html#fcomments>__. .. _e01be-py2-py-references: **References** Fritsch, F N, 1982, PCHIP final specifications, Report UCID-30194, Lawrence Livermore National Laboratory Fritsch, F N and Butland, J, 1984, A method for constructing local monotone piecewise cubic interpolants, SIAM J. Sci. Statist. Comput. (5), 300--304 """ raise NotImplementedError [docs]def dim1_monotonic_eval(x, f, d, px): r""" dim1_monotonic_eval evaluates a piecewise cubic Hermite interpolant at a set of points. .. _e01bf-py2-py-doc: For full information please refer to the NAG Library document for e01bf https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01bff.html .. _e01bf-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(n\right) :math:\textit{n}, :math:\mathrm{x}, :math:\mathrm{f} and :math:\mathrm{d} must be unchanged from the previous call of :meth:dim1_monotonic. 
**f** : float, array-like, shape :math:\left(n\right) :math:\textit{n}, :math:\mathrm{x}, :math:\mathrm{f} and :math:\mathrm{d} must be unchanged from the previous call of :meth:dim1_monotonic. **d** : float, array-like, shape :math:\left(n\right) :math:\textit{n}, :math:\mathrm{x}, :math:\mathrm{f} and :math:\mathrm{d} must be unchanged from the previous call of :meth:dim1_monotonic. **px** : float, array-like, shape :math:\left(m\right) The :math:m values of :math:x at which the interpolant is to be evaluated. **Returns** **pf** : float, ndarray, shape :math:\left(m\right) :math:\mathrm{pf}[\textit{i}-1] contains the value of the interpolant evaluated at the point :math:\mathrm{px}[\textit{i}-1], for :math:\textit{i} = 1,2,\ldots,m. .. _e01bf-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n\geq 2. (errno :math:2) On entry, :math:r = \langle\mathit{\boldsymbol{value}}\rangle, :math:\mathrm{x}[r-2] = \langle\mathit{\boldsymbol{value}}\rangle and :math:\mathrm{x}[r-1] = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{x}[r-2] < \mathrm{x}[r-1] for all :math:r. (errno :math:3) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 1. **Warns** **NagAlgorithmicWarning** (errno :math:4) Warning -- some points in array :math:\mathrm{px} lie outside the range :math:\mathrm{x}[0] \cdots \mathrm{x}[n-1]. Values at these points are unreliable because computed by extrapolation. .. _e01bf-py2-py-notes: **Notes** In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility. dim1_monotonic_eval evaluates a piecewise cubic Hermite interpolant, as computed by :meth:dim1_monotonic, at the points :math:\mathrm{px}[\textit{i}-1], for :math:\textit{i} = 1,2,\ldots,m. 
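On each interval :math:\left[\mathrm{x}[i-1], \mathrm{x}[i]\right] the interpolant is the unique cubic matching the endpoint values and slopes. A minimal pure-Python evaluation in the standard Hermite basis (an illustration only, not the PCHFE-derived NAG code) might look like the following; points beyond the data range are handled, as in the function, by extending the nearest end cubic:

```python
import bisect

def hermite_eval(x, f, d, px):
    """Evaluate the piecewise cubic Hermite with knots x, values f and slopes d
    at each point of px; points outside the knot range use the nearest end cubic."""
    pf = []
    last = len(x) - 2
    for p in px:
        i = min(max(bisect.bisect_right(x, p) - 1, 0), last)  # interval index, clamped
        h = x[i + 1] - x[i]
        t = (p - x[i]) / h
        # standard cubic Hermite basis functions on [0, 1]
        h00 = (1.0 + 2.0 * t) * (1.0 - t) ** 2
        h10 = t * (1.0 - t) ** 2
        h01 = t * t * (3.0 - 2.0 * t)
        h11 = t * t * (t - 1.0)
        pf.append(h00 * f[i] + h * h10 * d[i] + h01 * f[i + 1] + h * h11 * d[i + 1])
    return pf
```

Since a piecewise cubic Hermite is determined uniquely by its knot values and slopes, feeding this sketch the derivative estimates d from :meth:dim1_monotonic reproduces, in exact arithmetic, the same interpolant on each interval.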
If any point lies outside the interval from :math:\mathrm{x}[0] to :math:\mathrm{x}[n-1], a value is extrapolated from the nearest extreme cubic, and a warning is returned. The function is derived from function PCHFE in Fritsch (1982). .. _e01bf-py2-py-references: **References** Fritsch, F N, 1982, PCHIP final specifications, Report UCID-30194, Lawrence Livermore National Laboratory """ raise NotImplementedError [docs]def dim1_monotonic_deriv(x, f, d, px): r""" dim1_monotonic_deriv evaluates a piecewise cubic Hermite interpolant and its first derivative at a set of points. .. _e01bg-py2-py-doc: For full information please refer to the NAG Library document for e01bg https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01bgf.html .. _e01bg-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(n\right) :math:\textit{n}, :math:\mathrm{x}, :math:\mathrm{f} and :math:\mathrm{d} must be unchanged from the previous call of :meth:dim1_monotonic. **f** : float, array-like, shape :math:\left(n\right) :math:\textit{n}, :math:\mathrm{x}, :math:\mathrm{f} and :math:\mathrm{d} must be unchanged from the previous call of :meth:dim1_monotonic. **d** : float, array-like, shape :math:\left(n\right) :math:\textit{n}, :math:\mathrm{x}, :math:\mathrm{f} and :math:\mathrm{d} must be unchanged from the previous call of :meth:dim1_monotonic. **px** : float, array-like, shape :math:\left(m\right) The :math:m values of :math:x at which the interpolant is to be evaluated. **Returns** **pf** : float, ndarray, shape :math:\left(m\right) :math:\mathrm{pf}[\textit{i}-1] contains the value of the interpolant evaluated at the point :math:\mathrm{px}[\textit{i}-1], for :math:\textit{i} = 1,2,\ldots,m. **pd** : float, ndarray, shape :math:\left(m\right) :math:\mathrm{pd}[\textit{i}-1] contains the first derivative of the interpolant evaluated at the point :math:\mathrm{px}[\textit{i}-1], for :math:\textit{i} = 1,2,\ldots,m. .. 
_e01bg-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n\geq 2. (errno :math:2) On entry, :math:r = \langle\mathit{\boldsymbol{value}}\rangle, :math:\mathrm{x}[r-2] = \langle\mathit{\boldsymbol{value}}\rangle and :math:\mathrm{x}[r-1] = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{x}[r-2] < \mathrm{x}[r-1] for all :math:r. (errno :math:3) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 1. **Warns** **NagAlgorithmicWarning** (errno :math:4) Warning -- some points in array :math:\mathrm{px} lie outside the range :math:\mathrm{x}[0] \cdots \mathrm{x}[n-1]. Values at these points are unreliable because computed by extrapolation. .. _e01bg-py2-py-notes: **Notes** In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility. dim1_monotonic_deriv evaluates a piecewise cubic Hermite interpolant, as computed by :meth:dim1_monotonic, at the points :math:\mathrm{px}[\textit{i}-1], for :math:\textit{i} = 1,2,\ldots,m. The first derivatives at the points are also computed. If any point lies outside the interval from :math:\mathrm{x}[0] to :math:\mathrm{x}[n-1], values of the interpolant and its derivative are extrapolated from the nearest extreme cubic, and a warning is returned. If values of the interpolant only, and not of its derivative, are required, :meth:dim1_monotonic_eval should be used. The function is derived from function PCHFD in Fritsch (1982). .. _e01bg-py2-py-references: **References** Fritsch, F N, 1982, PCHIP final specifications, Report UCID-30194, Lawrence Livermore National Laboratory """ raise NotImplementedError [docs]def dim1_monotonic_intg(x, f, d, a, b): r""" dim1_monotonic_intg evaluates the definite integral of a piecewise cubic Hermite interpolant over the interval :math:\left[a, b\right]. .. 
_e01bh-py2-py-doc: For full information please refer to the NAG Library document for e01bh https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01bhf.html .. _e01bh-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(n\right) :math:\textit{n}, :math:\mathrm{x}, :math:\mathrm{f} and :math:\mathrm{d} must be unchanged from the previous call of :meth:dim1_monotonic. **f** : float, array-like, shape :math:\left(n\right) :math:\textit{n}, :math:\mathrm{x}, :math:\mathrm{f} and :math:\mathrm{d} must be unchanged from the previous call of :meth:dim1_monotonic. **d** : float, array-like, shape :math:\left(n\right) :math:\textit{n}, :math:\mathrm{x}, :math:\mathrm{f} and :math:\mathrm{d} must be unchanged from the previous call of :meth:dim1_monotonic. **a** : float The interval :math:\left[a, b\right] over which integration is to be performed. **b** : float The interval :math:\left[a, b\right] over which integration is to be performed. **Returns** **pint** : float The value of the definite integral of the interpolant over the interval :math:\left[a, b\right]. .. _e01bh-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n\geq 2. (errno :math:2) On entry, :math:r = \langle\mathit{\boldsymbol{value}}\rangle, :math:\mathrm{x}[r-2] = \langle\mathit{\boldsymbol{value}}\rangle and :math:\mathrm{x}[r-1] = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{x}[r-2] < \mathrm{x}[r-1] for all :math:r. **Warns** **NagAlgorithmicWarning** (errno :math:3) Warning -- either :math:\mathrm{a} or :math:\mathrm{b} is outside the range :math:\mathrm{x}[0] \cdots \mathrm{x}[n-1]. The result has been computed by extrapolation and is unreliable. :math:\mathrm{a} = \langle\mathit{\boldsymbol{value}}\rangle :math:\mathrm{b} = \langle\mathit{\boldsymbol{value}}\rangle. .. 
_e01bh-py2-py-notes: **Notes** In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility. dim1_monotonic_intg evaluates the definite integral of a piecewise cubic Hermite interpolant, as computed by :meth:dim1_monotonic, over the interval :math:\left[a, b\right]. If either :math:a or :math:b lies outside the interval from :math:\mathrm{x}[0] to :math:\mathrm{x}[n-1] computation of the integral involves extrapolation and a warning is returned. The function is derived from function PCHIA in Fritsch (1982). .. _e01bh-py2-py-references: **References** Fritsch, F N, 1982, PCHIP final specifications, Report UCID-30194, Lawrence Livermore National Laboratory """ raise NotImplementedError [docs]def dim1_monconv_disc(negfor, yfor, x, y, lam=0.2): r""" dim1_monconv_disc computes, for a given set of data points, the forward values and other values required for monotone convex interpolation as defined in Hagan and West (2008). This form of interpolation is particularly suited to the construction of yield curves in Financial Mathematics but can be applied to any data where it is desirable to preserve both monotonicity and convexity. .. _e01ce-py2-py-doc: For full information please refer to the NAG Library document for e01ce https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01cef.html .. _e01ce-py2-py-parameters: **Parameters** **negfor** : bool Determines whether or not to allow negative forward rates. :math:\mathrm{negfor} = \mathbf{True} Negative forward rates are permitted. :math:\mathrm{negfor} = \mathbf{False} Forward rates calculated must be non-negative. **yfor** : bool Determines whether the array :math:\mathrm{y} contains values, :math:y, or discrete forward rates :math:f^d. :math:\mathrm{yfor} = \mathbf{True} :math:\mathrm{y} contains the discrete forward rates :math:f_i^d, for :math:\textit{i} = 1,2,\ldots,n. 
:math:\mathrm{yfor} = \mathbf{False} :math:\mathrm{y} contains the values :math:y_i, for :math:\textit{i} = 1,2,\ldots,n. **x** : float, array-like, shape :math:\left(n\right) :math:x, the (possibly unordered) set of data points. **y** : float, array-like, shape :math:\left(n\right) If :math:\mathrm{yfor} = \mathbf{True}, the discrete forward rates :math:f_i^d corresponding to the data points :math:x_i, for :math:\textit{i} = 1,2,\ldots,n. If :math:\mathrm{yfor} = \mathbf{False}, the data values :math:y_i corresponding to the data points :math:x_i, for :math:\textit{i} = 1,2,\ldots,n. **lam** : float, optional :math:\lambda, the amelioration (smoothing) parameter. Forward rates are first computed using (2) <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01cef.html#eqn2>__ and then, if :math:\lambda > 0, a limiting filter is applied which depends on neighbouring discrete forward values. This filter has a smoothing effect on the curve that increases with :math:\lambda. **Returns** **comm** : dict, communication object Communication structure. .. _e01ce-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n\geq 2. (errno :math:2) On entry, :math:\mathrm{lam} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:0.0\leq \mathrm{lam}\leq 1.0. (errno :math:3) On entry, :math:\mathrm{x} contains duplicate data points. .. _e01ce-py2-py-notes: **Notes** dim1_monconv_disc computes, for a set of data points, :math:\left(x_i, y_i\right), for :math:\textit{i} = 1,2,\ldots,n, the discrete forward rates, :math:f_i^d, and the instantaneous forward rates, :math:f_i, which are used in a monotone convex interpolation method that attempts to preserve both the monotonicity and the convexity of the original data. The monotone convex interpolation method is due to Hagan and West and is described in Hagan and West (2006), Hagan and West (2008) and West (2011). 
The discrete forward rates are defined simply, for ordered data, by .. math:: \begin{array}{l} f_1^d = y_1\text{;} \\ f_i^d = \frac{{x_iy_i-x_{{i-1}}y_{{i-1}}}}{{x_i-x_{{i-1}}}} \text{, for } i = 2,3,\ldots,n\text{.}\end{array} The discrete forward rates, if pre-computed, may be supplied instead of :math:y, in which case the original values :math:y are computed using the inverse of (1) <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01cef.html#eqn1>__. The data points :math:x_i need not be ordered on input (though :math:y_i must correspond to :math:x_i); a set of ordered and scaled values :math:\xi_i are calculated from :math:x_i and stored. In its simplest form, the instantaneous forward rates, :math:f_i, at the data points are computed as linear interpolations of the :math:f_i^d: .. math:: \begin{array}{l} f_i = \frac{{x_i-x_{{i-1}}}}{{x_{{i+1}}-x_{{i-1}}}} f_{{i+1}}^d + \frac{{x_{{i+1}}-x_i}}{{x_{{i+1}}-x_{{i-1}}}} f_i^d \text{, for } i = 2,3,\ldots,{n-1} \\f_1 = f_2^d - \frac{1}{2} \left(f_2-f_2^d\right) \\ f_n = f_n^d - \frac{1}{2} \left(f_{{n-1}}-f_n^d\right)\text{.} \end{array} If it is required, as a constraint, that these values should never be negative then a limiting filter is applied to :math:f as described in Section 3.6 of West (2011). An ameliorated (smoothed) form of this linear interpolation for the forward rates is implemented using the amelioration (smoothing) parameter :math:\lambda. For :math:\lambda \equiv 0, equation (2) <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01cef.html#eqn2>__ is used (with possible post-process filtering); for :math:0 < \lambda \leq 1, the ameliorated method described fully in West (2011) is used. 
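The unameliorated construction can be sketched in a few lines of plain Python. The helper names are illustrative only and are not part of the library; the :math:\lambda > 0 smoothing and the negative-rate filter are omitted, and distinct, ordered data points with :math:n\geq 3 are assumed:

```python
def discrete_forwards(x, y):
    # Equation (1): f^d_1 = y_1;
    # f^d_i = (x_i*y_i - x_{i-1}*y_{i-1}) / (x_i - x_{i-1}), i = 2,...,n.
    # Assumes the x_i are distinct and in ascending order.
    fd = [y[0]]
    for i in range(1, len(x)):
        fd.append((x[i] * y[i] - x[i - 1] * y[i - 1]) / (x[i] - x[i - 1]))
    return fd

def instantaneous_forwards(x, fd):
    # Equation (2): linear interpolation of the discrete forwards at the
    # interior nodes, with the stated end conditions.  No limiting filter
    # is applied.  Assumes n >= 3.
    n = len(x)
    f = [0.0] * n
    for i in range(1, n - 1):
        h = x[i + 1] - x[i - 1]
        f[i] = (x[i] - x[i - 1]) / h * fd[i + 1] + (x[i + 1] - x[i]) / h * fd[i]
    f[0] = fd[1] - 0.5 * (f[1] - fd[1])
    f[n - 1] = fd[n - 1] - 0.5 * (f[n - 2] - fd[n - 1])
    return f
```

For equally spaced data the interior forward rates reduce to simple averages of the two adjacent discrete forwards.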
The values computed by dim1_monconv_disc are used by :meth:dim1_monconv_eval to compute, for a given value :math:\hat{x}, the monotone convex interpolated (or extrapolated) value :math:\hat{y}\left(\hat{x}\right) and the corresponding instantaneous forward rate :math:f; the curve gradient at :math:\hat{x} can be derived as :math:y^{\prime } = \left(f-\hat{y}\right)/\hat{x} for :math:\hat{x}\neq 0. .. _e01ce-py2-py-references: **References** Hagan, P S and West, G, 2006, Interpolation methods for curve construction, Applied Mathematical Finance (13(2)), 89--129 Hagan, P S and West, G, 2008, Methods for constructing a yield curve, WILLMOTT Magazine (May), 70--81 West, G, 2011, The monotone convex method of interpolation, Financial Modelling Agency """ raise NotImplementedError [docs]def dim1_monconv_eval(x, comm): r""" dim1_monconv_eval evaluates a monotonic convex interpolant at a set of points. .. _e01cf-py2-py-doc: For full information please refer to the NAG Library document for e01cf https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01cff.html .. _e01cf-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(m\right) :math:x, the points at which the interpolant is to be evaluated. **comm** : dict, communication object, modified in place Communication structure. This argument must have been initialized by a prior call to :meth:dim1_monconv_disc. **Returns** **val** : float, ndarray, shape :math:\left(m\right) The values of the interpolant at :math:x. **fwd** : float, ndarray, shape :math:\left(m\right) The values of the forward rates at :math:x. .. _e01cf-py2-py-errors: **Raises** **NagValueError** (errno :math:1) Either :meth:dim1_monconv_disc was not called first or the communication array has become corrupted. .. _e01cf-py2-py-notes: **Notes** dim1_monconv_eval evaluates a monotonic convex interpolant, as setup by :meth:dim1_monconv_disc, at the points :math:x. 
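The gradient relation quoted above follows from differentiating :math:\hat{x}\hat{y} = \int_0^{\hat{x}}f\left(t\right)dt, which gives :math:\hat{y}+\hat{x}y^{\prime } = f. A minimal sketch of recovering the gradient from the value and forward rate returned at an evaluation point (the helper name is hypothetical, not a library routine):

```python
def curve_gradient(x, val, fwd):
    # y' = (f - yhat) / xhat for xhat != 0, where val is the interpolated
    # value yhat and fwd is the instantaneous forward rate f at x.
    return (fwd - val) / x
```

For example, if the forward function were :math:f\left(t\right) = 2t, the curve value at :math:\hat{x} = 2 would be :math:2 and the forward rate :math:4, giving gradient :math:1; a flat curve (forward equal to value) gives gradient :math:0.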
The function is derived from the work of Hagan and West and is described in Hagan and West (2006), Hagan and West (2008) and West (2011). .. _e01cf-py2-py-references: **References** Hagan, P S and West, G, 2006, Interpolation methods for curve construction, Applied Mathematical Finance (13(2)), 89--129 Hagan, P S and West, G, 2008, Methods for constructing a yield curve, WILLMOTT Magazine (May), 70--81 West, G, 2011, The monotone convex method of interpolation, Financial Modelling Agency """ raise NotImplementedError [docs]def dim2_spline_grid(x, y, f): r""" dim2_spline_grid computes a bicubic spline interpolating surface through a set of data values, given on a rectangular grid in the :math:x-:math:y plane. .. _e01da-py2-py-doc: For full information please refer to the NAG Library document for e01da https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01daf.html .. _e01da-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(\textit{mx}\right) :math:\mathrm{x}[\textit{q}-1] and :math:\mathrm{y}[\textit{r}-1] must contain :math:x_{\textit{q}}, for :math:\textit{q} = 1,2,\ldots,m_x, and :math:y_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m_y, respectively. **y** : float, array-like, shape :math:\left(\textit{my}\right) :math:\mathrm{x}[\textit{q}-1] and :math:\mathrm{y}[\textit{r}-1] must contain :math:x_{\textit{q}}, for :math:\textit{q} = 1,2,\ldots,m_x, and :math:y_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m_y, respectively. **f** : float, array-like, shape :math:\left(\textit{mx}\times \textit{my}\right) :math:\mathrm{f}[m_y\times \left(\textit{q}-1\right)+\textit{r}-1] must contain :math:f_{{\textit{q},\textit{r}}}, for :math:\textit{r} = 1,2,\ldots,m_y, for :math:\textit{q} = 1,2,\ldots,m_x. **Returns** **px** : int :math:\mathrm{px} and :math:\mathrm{py} contain :math:m_x+4 and :math:m_y+4, the total number of knots of the computed spline with respect to the :math:x and :math:y variables, respectively. 
**py** : int :math:\mathrm{px} and :math:\mathrm{py} contain :math:m_x+4 and :math:m_y+4, the total number of knots of the computed spline with respect to the :math:x and :math:y variables, respectively. **lamda** : float, ndarray, shape :math:\left(\textit{mx}+4\right) :math:\mathrm{lamda} contains the complete set of knots :math:\lambda_i associated with the :math:x variable **mu** : float, ndarray, shape :math:\left(\textit{my}+4\right) :math:\mathrm{mu} contains the complete set of knots :math:\mu_i associated with the :math:y variable **c** : float, ndarray, shape :math:\left(\textit{mx}\times \textit{my}\right) The coefficients of the spline interpolant. :math:\mathrm{c}[m_y\times \left(i-1\right)+j-1] contains the coefficient :math:c_{{ij}} described in :ref:Notes <e01da-py2-py-notes>. .. _e01da-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:\textit{my} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\textit{my}\geq 4. (errno :math:1) On entry, :math:\textit{mx} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\textit{mx}\geq 4. (errno :math:2) On entry, the :math:\mathrm{x} or the :math:\mathrm{y} mesh points are not in strictly ascending order. (errno :math:3) An intermediate set of linear equations is singular -- the data is too ill-conditioned to compute :math:B-spline coefficients. .. _e01da-py2-py-notes: **Notes** In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility. dim2_spline_grid determines a bicubic spline interpolant to the set of data points :math:\left(x_{\textit{q}}, y_{\textit{r}}, f_{{\textit{q},\textit{r}}}\right), for :math:\textit{r} = 1,2,\ldots,m_y, for :math:\textit{q} = 1,2,\ldots,m_x. The spline is given in the B-spline representation .. math:: s\left(x, y\right) = \sum_{{i = 1}}^{m_x}\sum_{{j = 1}}^{m_y}c_{{ij}}M_i\left(x\right)N_j\left(y\right)\text{,} such that .. 
math:: s\left(x_q, y_r\right) = f_{{q,r}}\text{,} where :math:M_i\left(x\right) and :math:N_j\left(y\right) denote normalized cubic B-splines, the former defined on the knots :math:\lambda_i to :math:\lambda_{{i+4}} and the latter on the knots :math:\mu_j to :math:\mu_{{j+4}}, and the :math:c_{{ij}} are the spline coefficients. These knots, as well as the coefficients, are determined by the function, which is derived from the function B2IRE in Anthony et al. (1982). The method used is described in Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01daf.html#fcomments2>__. For further information on splines, see Hayes and Halliday (1974) for bicubic splines and de Boor (1972) for normalized B-splines. Values and derivatives of the computed spline can subsequently be computed by calling :meth:fit.dim2_spline_evalv <naginterfaces.library.fit.dim2_spline_evalv>, :meth:fit.dim2_spline_evalm <naginterfaces.library.fit.dim2_spline_evalm> or :meth:fit.dim2_spline_derivm <naginterfaces.library.fit.dim2_spline_derivm> as described in Evaluation of Computed Spline <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01daf.html#fcomments3>__. .. _e01da-py2-py-references: **References** Anthony, G T, Cox, M G and Hayes, J G, 1982, DASL -- Data Approximation Subroutine Library, National Physical Laboratory Cox, M G, 1975, An algorithm for spline interpolation, J. Inst. Math. Appl. (15), 95--108 de Boor, C, 1972, On calculating with B-splines, J. Approx. Theory (6), 50--62 Hayes, J G and Halliday, J, 1974, The least squares fitting of cubic spline surfaces to general data sets, J. Inst. Math. Appl. (14), 89--103 """ raise NotImplementedError [docs]def dim2_triangulate(x, y): r""" dim2_triangulate generates a triangulation for a given set of two-dimensional points using the method of Renka and Cline. .. 
_e01ea-py2-py-doc: For full information please refer to the NAG Library document for e01ea https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01eaf.html .. _e01ea-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(n\right) The :math:x coordinates of the :math:n data points. **y** : float, array-like, shape :math:\left(n\right) The :math:y coordinates of the :math:n data points. **Returns** **triang** : int, ndarray, shape :math:\left(7\times n\right) A data structure defining the computed triangulation, in a form suitable for passing to :meth:dim2_triang_bary_eval. Details of how the triangulation is encoded in :math:\mathrm{triang} are given in Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01eaf.html#fcomments>__. These details are most likely to be of use when plotting the computed triangulation. .. _e01ea-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n\geq 3. (errno :math:2) On entry, all the :math:\left(x, y\right) pairs are collinear. .. _e01ea-py2-py-notes: **Notes** dim2_triangulate creates a Thiessen triangulation with a given set of two-dimensional data points as nodes. This triangulation will be as equiangular as possible (Cline and Renka (1984)). See Renka and Cline (1984) for more detailed information on the algorithm, a development of that by Lawson (1977). The code is derived from Renka (1984). The computed triangulation is returned in a form suitable for passing to :meth:dim2_triang_bary_eval which, for a set of nodal function values, computes interpolated values at a set of points. .. _e01ea-py2-py-references: **References** Cline, A K and Renka, R L, 1984, A storage-efficient method for construction of a Thiessen triangulation, Rocky Mountain J. Math. 
(14), 119--139 Lawson, C L, 1977, Software for :math:C^1 surface interpolation, Mathematical Software III, (ed J R Rice), 161--194, Academic Press Renka, R L, 1984, Algorithm 624: triangulation and interpolation of arbitrarily distributed points in the plane, ACM Trans. Math. Software (10), 440--442 Renka, R L and Cline, A K, 1984, A triangle-based :math:C^1 interpolation method, Rocky Mountain J. Math. (14), 223--237 """ raise NotImplementedError [docs]def dim2_triang_bary_eval(x, y, f, triang, px, py): r""" dim2_triang_bary_eval performs barycentric interpolation, at a given set of points, using a set of function values on a scattered grid and a triangulation of that grid computed by :meth:dim2_triangulate. .. _e01eb-py2-py-doc: For full information please refer to the NAG Library document for e01eb https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01ebf.html .. _e01eb-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(n\right) The coordinates of the :math:\textit{r}\ th data point, :math:\left(x_r, y_r\right), for :math:\textit{r} = 1,2,\ldots,n. :math:\mathrm{x} and :math:\mathrm{y} must be unchanged from the previous call of :meth:dim2_triangulate. **y** : float, array-like, shape :math:\left(n\right) The coordinates of the :math:\textit{r}\ th data point, :math:\left(x_r, y_r\right), for :math:\textit{r} = 1,2,\ldots,n. :math:\mathrm{x} and :math:\mathrm{y} must be unchanged from the previous call of :meth:dim2_triangulate. **f** : float, array-like, shape :math:\left(n\right) The function values :math:f_{\textit{r}} at :math:\left(x_{\textit{r}}, y_{\textit{r}}\right), for :math:\textit{r} = 1,2,\ldots,n. **triang** : int, array-like, shape :math:\left(7\times n\right) The triangulation computed by the previous call of :meth:dim2_triangulate. See Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01eaf.html#fcomments>__ for details of how the triangulation used is encoded in :math:\mathrm{triang}.
**px** : float, array-like, shape :math:\left(m\right) The coordinates :math:\left(\textit{px}_{\textit{i}}, \textit{py}_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,m, at which interpolated function values are sought. **py** : float, array-like, shape :math:\left(m\right) The coordinates :math:\left(\textit{px}_{\textit{i}}, \textit{py}_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,m, at which interpolated function values are sought. **Returns** **pf** : float, ndarray, shape :math:\left(m\right) The interpolated values :math:F\left(\textit{px}_{\textit{i}}, \textit{py}_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,m. .. _e01eb-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n\geq 3. (errno :math:2) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 1. (errno :math:3) On entry, the triangulation information held in the array :math:\mathrm{triang} does not specify a valid triangulation of the data points. :math:\mathrm{triang} has been corrupted since the call to :meth:dim2_triangulate. **Warns** **NagAlgorithmicWarning** (errno :math:4) At least one evaluation point lies outside the nodal triangulation. For each such point the value returned in :math:\mathrm{pf} is that corresponding to a node on the closest boundary line segment. .. _e01eb-py2-py-notes: **Notes** dim2_triang_bary_eval takes as input a set of scattered data points :math:\left(x_{\textit{r}}, y_{\textit{r}}, f_{\textit{r}}\right), for :math:\textit{r} = 1,2,\ldots,n, and a Thiessen triangulation of the :math:\left(x_r, y_r\right) computed by :meth:dim2_triangulate, and interpolates at a set of points :math:\left(\textit{px}_i, \textit{py}_i\right), for :math:\textit{i} = 1,2,\ldots,m. 
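The barycentric transformation for a single triangle can be sketched as follows. This is a hypothetical standalone helper, not the library routine, which additionally locates the containing triangle and handles points outside the triangulation:

```python
def barycentric_interp(tri, f, p):
    # tri: three (x, y) vertices; f: function values at those vertices;
    # p: query point inside the triangle.
    # Solves for the barycentric coordinates (l1, l2, l3) of p, then
    # returns the linear interpolant l1*f1 + l2*f2 + l3*f3.
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    l2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    l3 = 1.0 - l1 - l2
    return l1 * f[0] + l2 * f[1] + l3 * f[2]
```

At a vertex the corresponding barycentric coordinate is one and the others zero, so the interpolant reproduces the nodal value there, consistent with the behaviour described above.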
If the :math:i\ th interpolation point :math:\left(\textit{px}_i, \textit{py}_i\right) is equal to :math:\left(x_r, y_r\right) for some value of :math:r, the returned value will be equal to :math:f_r; otherwise a barycentric transformation will be used to calculate the interpolant. For each point :math:\left(\textit{px}_i, \textit{py}_i\right), a triangle is sought which contains the point; the vertices of the triangle and :math:f_r values at the vertices are then used to compute the value :math:F\left(\textit{px}_i, \textit{py}_i\right). If any interpolation point lies outside the triangulation defined by the input arguments, the returned value is the value provided, :math:f_s, at the closest node :math:\left(x_s, y_s\right). dim2_triang_bary_eval must only be called after a call to :meth:dim2_triangulate. .. _e01eb-py2-py-references: **References** Cline, A K and Renka, R L, 1984, A storage-efficient method for construction of a Thiessen triangulation, Rocky Mountain J. Math. (14), 119--139 Lawson, C L, 1977, Software for :math:C^1 surface interpolation, Mathematical Software III, (ed J R Rice), 161--194, Academic Press Renka, R L, 1984, Algorithm 624: triangulation and interpolation of arbitrarily distributed points in the plane, ACM Trans. Math. Software (10), 440--442 Renka, R L and Cline, A K, 1984, A triangle-based :math:C^1 interpolation method, Rocky Mountain J. Math. (14), 223--237 """ raise NotImplementedError [docs]def dim1_ratnl(x, f): r""" dim1_ratnl produces, from a set of function values and corresponding abscissae, the coefficients of an interpolating rational function expressed in continued fraction form. .. _e01ra-py2-py-doc: For full information please refer to the NAG Library document for e01ra https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01raf.html .. 
_e01ra-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(n\right) :math:\mathrm{x}[\textit{i}-1] must be set to the value of the :math:\textit{i}\ th data abscissa, :math:x_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,n. **f** : float, array-like, shape :math:\left(n\right) :math:\mathrm{f}[\textit{i}-1] must be set to the value of the data ordinate, :math:f_{\textit{i}}, corresponding to :math:x_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,n. **Returns** **m** : int :math:m, the number of terms in the continued fraction representation of :math:R\left(x\right). **a** : float, ndarray, shape :math:\left(n\right) :math:\mathrm{a}[\textit{j}-1] contains the value of the parameter :math:a_{\textit{j}} in :math:R\left(x\right), for :math:\textit{j} = 1,2,\ldots,m. The remaining elements of :math:\mathrm{a}, if any, are set to zero. **u** : float, ndarray, shape :math:\left(n\right) :math:\mathrm{u}[\textit{j}-1] contains the value of the parameter :math:u_{\textit{j}} in :math:R\left(x\right), for :math:\textit{j} = 1,2,\ldots,m-1. The :math:u_j are a permuted subset of the elements of :math:\mathrm{x}. The remaining :math:n-m+1 locations contain a permutation of the remaining :math:x_i, which can be ignored. .. _e01ra-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n > 0. (errno :math:2) On entry, :math:\mathrm{x}[\textit{I}-1] is very close to :math:\mathrm{x}[\textit{J}-1]: :math:\textit{I} = \langle\mathit{\boldsymbol{value}}\rangle, :math:\mathrm{x}[\textit{I}-1] = \langle\mathit{\boldsymbol{value}}\rangle, :math:\textit{J} = \langle\mathit{\boldsymbol{value}}\rangle and :math:\mathrm{x}[\textit{J}-1] = \langle\mathit{\boldsymbol{value}}\rangle. (errno :math:3) A continued fraction of the required form does not exist. .. 
_e01ra-py2-py-notes: **Notes** dim1_ratnl produces the parameters of a rational function :math:R\left(x\right) which assumes prescribed values :math:f_i at prescribed values :math:x_i of the independent variable :math:x, for :math:\textit{i} = 1,2,\ldots,n. More specifically, dim1_ratnl determines the parameters :math:a_j, for :math:\textit{j} = 1,2,\ldots,m and :math:u_j, for :math:\textit{j} = 1,2,\ldots,m-1, in the continued fraction .. math:: R\left(x\right) = a_1+R_m\left(x\right) where .. math:: R_i\left(x\right) = \frac{{a_{{m-i+2}}\left(x-u_{{m-i+1}}\right)}}{{1+R_{{i-1}}\left(x\right)}}\text{, for }i = m,m-1,\ldots,2\text{,} and .. math:: R_1\left(x\right) = 0\text{,} such that :math:R\left(x_i\right) = f_i, for :math:\textit{i} = 1,2,\ldots,n. The value of :math:m in (1) <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01raf.html#eqn1>__ is determined by the function; normally :math:m = n. The values of :math:u_j form a reordered subset of the values of :math:x_i and their ordering is designed to ensure that a representation of the form (1) <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01raf.html#eqn1>__ is determined whenever one exists. The subsequent evaluation of (1) <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01raf.html#eqn1>__ for given values of :math:x can be carried out using :meth:dim1_ratnl_eval. The computational method employed in dim1_ratnl is the modification of the Thacher--Tukey algorithm described in Graves--Morris and Hopkins (1981). .. _e01ra-py2-py-references: **References** Graves--Morris, P R and Hopkins, T R, 1981, Reliable rational interpolation, Numer. Math. (36), 111--128 """ raise NotImplementedError [docs]def dim1_ratnl_eval(a, u, x): r""" dim1_ratnl_eval evaluates continued fractions of the form produced by :meth:dim1_ratnl. .. _e01rb-py2-py-doc: For full information please refer to the NAG Library document for e01rb https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01rbf.html .. 
_e01rb-py2-py-parameters: **Parameters** **a** : float, array-like, shape :math:\left(m\right) :math:\mathrm{a}[\textit{j}-1] must be set to the value of the parameter :math:a_{\textit{j}} in the continued fraction, for :math:\textit{j} = 1,2,\ldots,m. **u** : float, array-like, shape :math:\left(m\right) :math:\mathrm{u}[\textit{j}-1] must be set to the value of the parameter :math:u_{\textit{j}} in the continued fraction, for :math:\textit{j} = 1,2,\ldots,m-1. (The element :math:\mathrm{u}[m-1] is not used). **x** : float The value of :math:x at which the continued fraction is to be evaluated. **Returns** **f** : float The value of the continued fraction corresponding to the value of :math:x. .. _e01rb-py2-py-errors: **Raises** **NagValueError** (errno :math:1) :math:\mathrm{x} corresponds to a pole of :math:R\left(x\right), or is very close. :math:\mathrm{x} = \langle\mathit{\boldsymbol{value}}\rangle. .. _e01rb-py2-py-notes: **Notes** dim1_ratnl_eval evaluates the continued fraction .. math:: R\left(x\right) = a_1+R_m\left(x\right) where .. math:: R_i\left(x\right) = \frac{{a_{{m-i+2}}\left(x-u_{{m-i+1}}\right)}}{{1+R_{{i-1}}\left(x\right)}}\text{, for }i = m,m-1,\ldots,2\text{,} and .. math:: R_1\left(x\right) = 0 for a prescribed value of :math:x. dim1_ratnl_eval is intended to be used to evaluate the continued fraction representation (of an interpolatory rational function) produced by :meth:dim1_ratnl. .. _e01rb-py2-py-references: **References** Graves--Morris, P R and Hopkins, T R, 1981, Reliable rational interpolation, Numer. Math. (36), 111--128 """ raise NotImplementedError [docs]def dim2_scat(x, y, f): r""" dim2_scat generates a two-dimensional surface interpolating a set of scattered data points, using the method of Renka and Cline. .. _e01sa-py2-py-doc: For full information please refer to the NAG Library document for e01sa https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01saf.html ..
_e01sa-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(m\right) The coordinates of the :math:\textit{r}\ th data point, for :math:\textit{r} = 1,2,\ldots,m. The data points are accepted in any order, but see Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01saf.html#fcomments>__. **y** : float, array-like, shape :math:\left(m\right) The coordinates of the :math:\textit{r}\ th data point, for :math:\textit{r} = 1,2,\ldots,m. The data points are accepted in any order, but see Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01saf.html#fcomments>__. **f** : float, array-like, shape :math:\left(m\right) The coordinates of the :math:\textit{r}\ th data point, for :math:\textit{r} = 1,2,\ldots,m. The data points are accepted in any order, but see Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01saf.html#fcomments>__. **Returns** **triang** : int, ndarray, shape :math:\left(7\times m\right) A data structure defining the computed triangulation, in a form suitable for passing to :meth:dim2_scat_eval. **grads** : float, ndarray, shape :math:\left(2, m\right) The estimated partial derivatives at the nodes, in a form suitable for passing to :meth:dim2_scat_eval. The derivatives at node :math:\textit{r} with respect to :math:x and :math:y are contained in :math:\mathrm{grads}[0,\textit{r}-1] and :math:\mathrm{grads}[1,\textit{r}-1] respectively, for :math:\textit{r} = 1,2,\ldots,m. .. _e01sa-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 3. (errno :math:2) All nodes are collinear. There is no unique solution. 
(errno :math:3) On entry, :math:\left({\mathrm{x}[\textit{I}-1]}, {\mathrm{y}[\textit{I}-1]}\right) = \left({\mathrm{x}[\textit{J}-1]}, {\mathrm{y}[\textit{J}-1]}\right), for :math:\textit{I},\textit{J} = \langle\mathit{\boldsymbol{value}}\rangle, \langle\mathit{\boldsymbol{value}}\rangle, :math:\mathrm{x}[\textit{I}-1], :math:\mathrm{y}[\textit{I}-1] = \langle\mathit{\boldsymbol{value}}\rangle, \langle\mathit{\boldsymbol{value}}\rangle. .. _e01sa-py2-py-notes: **Notes** In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility. dim2_scat constructs an interpolating surface :math:F\left(x, y\right) through a set of :math:m scattered data points :math:\left(x_{\textit{r}}, y_{\textit{r}}, f_{\textit{r}}\right), for :math:\textit{r} = 1,2,\ldots,m, using a method due to Renka and Cline. In the :math:\left(x, y\right) plane, the data points must be distinct. The constructed surface is continuous and has continuous first derivatives. The method involves firstly creating a triangulation with all the :math:\left(x, y\right) data points as nodes, the triangulation being as nearly equiangular as possible (see Cline and Renka (1984)). Then gradients in the :math:x- and :math:y-directions are estimated at node :math:\textit{r}, for :math:\textit{r} = 1,2,\ldots,m, as the partial derivatives of a quadratic function of :math:x and :math:y which interpolates the data value :math:f_r, and which fits the data values at nearby nodes (those within a certain distance chosen by the algorithm) in a weighted least squares sense. The weights are chosen such that closer nodes have more influence than more distant nodes on derivative estimates at node :math:r. The computed partial derivatives, with the :math:f_r values, at the three nodes of each triangle define a piecewise polynomial surface of a certain form which is the interpolant on that triangle.
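The weighted least squares gradient estimation described above can be sketched with NumPy. Neighbour selection and weights, which the library chooses internally, are here taken as given, and the helper name is hypothetical:

```python
import numpy as np

def gradient_estimate(xr, yr, fr, xn, yn, fn, w):
    # Fit q(x, y) = fr + a*(x-xr) + b*(y-yr)
    #                  + c*(x-xr)**2 + d*(x-xr)*(y-yr) + e*(y-yr)**2
    # to neighbour values fn in a weighted least squares sense; the
    # quadratic interpolates fr at the node by construction, and the
    # gradient estimate at the node is (a, b).
    dx = np.asarray(xn, dtype=float) - xr
    dy = np.asarray(yn, dtype=float) - yr
    A = np.column_stack([dx, dy, dx**2, dx * dy, dy**2])
    rhs = np.asarray(fn, dtype=float) - fr
    sw = np.sqrt(np.asarray(w, dtype=float))
    coef, *_ = np.linalg.lstsq(A * sw[:, None], rhs * sw, rcond=None)
    return coef[0], coef[1]
```

Applied to data sampled from a linear function, the fit is exact and the estimated gradient recovers the true partial derivatives.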
See Renka and Cline (1984) for more detailed information on the algorithm, a development of that by Lawson (1977). The code is derived from Renka (1984). The interpolant :math:F\left(x, y\right) can subsequently be evaluated at any point :math:\left(x, y\right) inside or outside the domain of the data by a call to :meth:dim2_scat_eval. Points outside the domain are evaluated by extrapolation. .. _e01sa-py2-py-references: **References** Cline, A K and Renka, R L, 1984, A storage-efficient method for construction of a Thiessen triangulation, Rocky Mountain J. Math. (14), 119--139 Lawson, C L, 1977, Software for :math:C^1 surface interpolation, Mathematical Software III, (ed J R Rice), 161--194, Academic Press Renka, R L, 1984, Algorithm 624: triangulation and interpolation of arbitrarily distributed points in the plane, ACM Trans. Math. Software (10), 440--442 Renka, R L and Cline, A K, 1984, A triangle-based :math:C^1 interpolation method, Rocky Mountain J. Math. (14), 223--237 """ raise NotImplementedError [docs]def dim2_scat_eval(x, y, f, triang, grads, px, py, ist=1): r""" dim2_scat_eval evaluates at a given point the two-dimensional interpolant function computed by :meth:dim2_scat. .. _e01sb-py2-py-doc: For full information please refer to the NAG Library document for e01sb https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01sbf.html .. 
_e01sb-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(m\right) :math:\mathrm{x} must be unchanged from the previous call of :meth:dim2_scat **y** : float, array-like, shape :math:\left(m\right) :math:\mathrm{y} must be unchanged from the previous call of :meth:dim2_scat **f** : float, array-like, shape :math:\left(m\right) :math:\mathrm{f} must be unchanged from the previous call of :meth:dim2_scat **triang** : int, array-like, shape :math:\left(7\times m\right) :math:\mathrm{triang} must be unchanged from the previous call of :meth:dim2_scat **grads** : float, array-like, shape :math:\left(2, m\right) :math:\mathrm{grads} must be unchanged from the previous call of :meth:dim2_scat **px** : float The point :math:\left({px}, {py}\right) at which the interpolant is to be evaluated. **py** : float The point :math:\left({px}, {py}\right) at which the interpolant is to be evaluated. **ist** : int, optional The index of the starting node in the search for a triangle containing the point :math:\left({px}, {py}\right). On the first call to dim2_scat_eval, :math:\mathrm{ist} must be set to :math:1. For efficiency on subsequent calls to dim2_scat_eval an updated value of :math:\mathrm{ist} as returned by dim2_scat_eval may be supplied instead. An input value outside the range :math:1\leq \mathrm{ist}\leq m will be treated as :math:1. **Returns** **ist** : int The index of one of the vertices of the triangle containing the point :math:\left({px}, {py}\right). **pf** : float The value of the interpolant evaluated at the point :math:\left({px}, {py}\right). .. _e01sb-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 3. (errno :math:2) On entry, :math:\mathrm{triang} does not contain a valid data point triangulation; :math:\mathrm{triang} may have been corrupted since the call to :meth:dim2_scat. 
**Warns** **NagAlgorithmicWarning** (errno :math:3) Warning -- the evaluation point :math:\left(\langle\mathit{\boldsymbol{value}}\rangle, \langle\mathit{\boldsymbol{value}}\rangle\right) lies outside the triangulation boundary. The returned value was computed by extrapolation. .. _e01sb-py2-py-notes: **Notes** In the NAG Library the traditional C interface for this routine uses a different algorithmic base. Please contact NAG if you have any questions about compatibility. dim2_scat_eval takes as input the arguments defining the interpolant :math:F\left(x, y\right) of a set of scattered data points :math:\left(x_r, y_r, f_r\right), for :math:\textit{r} = 1,2,\ldots,m, as computed by :meth:dim2_scat, and evaluates the interpolant at the point :math:\left({px}, {py}\right). If :math:\left({px}, {py}\right) is equal to :math:\left(x_r, y_r\right) for some value of :math:r, the returned value will be equal to :math:f_r. If :math:\left({px}, {py}\right) is not equal to :math:\left(x_r, y_r\right) for any :math:r, the derivatives in :math:\mathrm{grads} will be used to compute the interpolant. A triangle is sought which contains the point :math:\left({px}, {py}\right), and the vertices of the triangle along with the partial derivatives and :math:f_r values at the vertices are used to compute the value :math:F\left({px}, {py}\right). If the point :math:\left({px}, {py}\right) lies outside the triangulation defined by the input arguments, the returned value is obtained by extrapolation. In this case, the interpolating function :math:F\left(x, y\right) is extended linearly beyond the triangulation boundary. The method is described in more detail in Renka and Cline (1984) and the code is derived from Renka (1984). dim2_scat_eval must only be called after a call to :meth:dim2_scat. .. _e01sb-py2-py-references: **References** Renka, R L, 1984, Algorithm 624: triangulation and interpolation of arbitrarily distributed points in the plane, ACM Trans. Math.
Software (10), 440--442 Renka, R L and Cline, A K, 1984, A triangle-based :math:C^1 interpolation method, Rocky Mountain J. Math. (14), 223--237 """ raise NotImplementedError [docs]def dim2_scat_shep(x, y, f, nw, nq): r""" dim2_scat_shep generates a two-dimensional interpolant to a set of scattered data points, using a modified Shepard method. .. _e01sg-py2-py-doc: For full information please refer to the NAG Library document for e01sg https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01sgf.html .. _e01sg-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(m\right) The Cartesian coordinates of the data points :math:\left(x_{\textit{r}}, y_{\textit{r}}\right), for :math:\textit{r} = 1,2,\ldots,m. **y** : float, array-like, shape :math:\left(m\right) The Cartesian coordinates of the data points :math:\left(x_{\textit{r}}, y_{\textit{r}}\right), for :math:\textit{r} = 1,2,\ldots,m. **f** : float, array-like, shape :math:\left(m\right) :math:\mathrm{f}[\textit{r}-1] must be set to the data value :math:f_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m. **nw** : int The number :math:N_w of data points that determines each radius of influence :math:R_w, appearing in the definition of each of the weights :math:w_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m (see :ref:Notes <e01sg-py2-py-notes>). Note that :math:R_w is different for each weight. If :math:\mathrm{nw}\leq 0 the default value :math:\mathrm{nw} = \mathrm{min}\left(19, {m-1}\right) is used instead. **nq** : int The number :math:N_q of data points to be used in the least squares fit for coefficients defining the nodal functions :math:q_r\left(x, y\right) (see :ref:Notes <e01sg-py2-py-notes>). If :math:\mathrm{nq}\leq 0 the default value :math:\mathrm{nq} = \mathrm{min}\left(13, {m-1}\right) is used instead. **Returns** **iq** : int, ndarray, shape :math:\left(2\times m+1\right) Integer data defining the interpolant :math:Q\left(x, y\right). 
**rq** : float, ndarray, shape :math:\left(6\times m+5\right) Real data defining the interpolant :math:Q\left(x, y\right). .. _e01sg-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:\textit{lrq} is too small: :math:\textit{lrq} = \langle\mathit{\boldsymbol{value}}\rangle. (errno :math:1) On entry, :math:\textit{liq} is too small: :math:\textit{liq} = \langle\mathit{\boldsymbol{value}}\rangle. (errno :math:1) On entry, :math:\mathrm{nw} = \langle\mathit{\boldsymbol{value}}\rangle and :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nw}\leq \mathrm{min}\left(40, {m-1}\right). (errno :math:1) On entry, :math:\mathrm{nq} = \langle\mathit{\boldsymbol{value}}\rangle and :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nq}\leq \mathrm{min}\left(40, {m-1}\right). (errno :math:1) On entry, :math:\mathrm{nq} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nq} \leq 0 or :math:\mathrm{nq}\geq 5. (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 6. (errno :math:2) There are duplicate nodes in the dataset. :math:\left({\mathrm{x}[\textit{I}-1]}, {\mathrm{y}[\textit{I}-1]}\right) = \left({\mathrm{x}[\textit{J}-1]}, {\mathrm{y}[\textit{J}-1]}\right), for :math:\textit{I} = \langle\mathit{\boldsymbol{value}}\rangle and :math:\textit{J} = \langle\mathit{\boldsymbol{value}}\rangle. The interpolant cannot be derived. (errno :math:3) All nodes are collinear. There is no unique solution. .. _e01sg-py2-py-notes: **Notes** dim2_scat_shep constructs a smooth function :math:Q\left(x, y\right) which interpolates a set of :math:m scattered data points :math:\left(x_r, y_r, f_r\right), for :math:r = 1,2,\ldots,m, using a modification of Shepard's method. The surface is continuous and has continuous first partial derivatives. The basic Shepard (1968) method interpolates the input data with the weighted mean .. 
math:: Q\left(x, y\right) = \frac{{\sum_{{r = 1}}^mw_r\left(x, y\right)q_r}}{{\sum_{{r = 1}}^mw_r\left(x, y\right)}}\text{,} where :math:q_r = f_r, :math:w_r\left(x, y\right) = \frac{1}{{d_r^2}} and :math:d_r^2 = \left(x-x_r\right)^2+\left(y-y_r\right)^2. The basic method is global in that the interpolated value at any point depends on all the data, but this function uses a modification (see Franke and Nielson (1980) and Renka (1988a)), whereby the method becomes local by adjusting each :math:w_r\left(x, y\right) to be zero outside a circle with centre :math:\left(x_r, y_r\right) and some radius :math:R_w. Also, to improve the performance of the basic method, each :math:q_r above is replaced by a function :math:q_r\left(x, y\right), which is a quadratic fitted by weighted least squares to data local to :math:\left(x_r, y_r\right) and forced to interpolate :math:\left(x_r, y_r, f_r\right). In this context, a point :math:\left(x, y\right) is defined to be local to another point if it lies within some distance :math:R_q of it. Computation of these quadratics constitutes the main work done by this function. The efficiency of the function is further enhanced by using a cell method for nearest neighbour searching due to Bentley and Friedman (1979). The radii :math:R_w and :math:R_q are chosen to be just large enough to include :math:N_w and :math:N_q data points, respectively, for user-supplied constants :math:N_w and :math:N_q. Default values of these arguments are provided by the function, and advice on alternatives is given in Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01sgf.html#fcomments2>__. This function is derived from the function QSHEP2 described by Renka (1988b). Values of the interpolant :math:Q\left(x, y\right) generated by this function, and its first partial derivatives, can subsequently be evaluated for points in the domain of the data by a call to :meth:dim2_scat_shep_eval. .. 
_e01sg-py2-py-references: **References** Bentley, J L and Friedman, J H, 1979, Data structures for range searching, ACM Comput. Surv. (11), 397--409 Franke, R and Nielson, G, 1980, Smooth interpolation of large sets of scattered data, Internat. J. Num. Methods Engrg. (15), 1691--1704 Renka, R J, 1988a, Multivariate interpolation of large sets of scattered data, ACM Trans. Math. Software (14), 139--148 Renka, R J, 1988b, Algorithm 660: QSHEP2D: Quadratic Shepard method for bivariate interpolation of scattered data, ACM Trans. Math. Software (14), 149--150 Shepard, D, 1968, A two-dimensional interpolation function for irregularly spaced data, Proc. 23rd Nat. Conf. ACM, 517--523, Brandon/Systems Press Inc., Princeton """ raise NotImplementedError [docs]def dim2_scat_shep_eval(x, y, f, iq, rq, u, v): r""" dim2_scat_shep_eval evaluates the two-dimensional interpolating function generated by :meth:dim2_scat_shep and its first partial derivatives. .. _e01sh-py2-py-doc: For full information please refer to the NAG Library document for e01sh https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01shf.html .. _e01sh-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(m\right) :math:\textit{m}, :math:\mathrm{x}, :math:\mathrm{y} and :math:\mathrm{f} must be the same values as were supplied in the preceding call to :meth:dim2_scat_shep. **y** : float, array-like, shape :math:\left(m\right) :math:\textit{m}, :math:\mathrm{x}, :math:\mathrm{y} and :math:\mathrm{f} must be the same values as were supplied in the preceding call to :meth:dim2_scat_shep. **f** : float, array-like, shape :math:\left(m\right) :math:\textit{m}, :math:\mathrm{x}, :math:\mathrm{y} and :math:\mathrm{f} must be the same values as were supplied in the preceding call to :meth:dim2_scat_shep. **iq** : int, array-like, shape :math:\left(2\times m+1\right) Must be unchanged from the value returned from a previous call to :meth:dim2_scat_shep.
**rq** : float, array-like, shape :math:\left(6\times m+5\right) Must be unchanged from the value returned from a previous call to :meth:dim2_scat_shep. **u** : float, array-like, shape :math:\left(n\right) The evaluation points :math:\left(u_{\textit{i}}, v_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,n. **v** : float, array-like, shape :math:\left(n\right) The evaluation points :math:\left(u_{\textit{i}}, v_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,n. **Returns** **q** : float, ndarray, shape :math:\left(n\right) The values of the interpolant at :math:\left(u_{\textit{i}}, v_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,n. If any of these evaluation points lie outside the region of definition of the interpolant the corresponding entries in :math:\mathrm{q} are set to an extrapolated approximation, and dim2_scat_shep_eval returns with :math:\mathrm{errno} = 3. **qx** : float, ndarray, shape :math:\left(n\right) The values of the partial derivatives of the interpolant :math:Q\left(x, y\right) at :math:\left(u_{\textit{i}}, v_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,n. If any of these evaluation points lie outside the region of definition of the interpolant, the corresponding entries in :math:\mathrm{qx} and :math:\mathrm{qy} are set to extrapolated approximations to the partial derivatives, and dim2_scat_shep_eval returns with :math:\mathrm{errno} = 3. **qy** : float, ndarray, shape :math:\left(n\right) The values of the partial derivatives of the interpolant :math:Q\left(x, y\right) at :math:\left(u_{\textit{i}}, v_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,n. If any of these evaluation points lie outside the region of definition of the interpolant, the corresponding entries in :math:\mathrm{qx} and :math:\mathrm{qy} are set to extrapolated approximations to the partial derivatives, and dim2_scat_shep_eval returns with :math:\mathrm{errno} = 3. .. 
_e01sh-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:\textit{lrq} is too small: :math:\textit{lrq} = \langle\mathit{\boldsymbol{value}}\rangle. (errno :math:1) On entry, :math:\textit{liq} is too small: :math:\textit{liq} = \langle\mathit{\boldsymbol{value}}\rangle. (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n\geq 1. (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 6. (errno :math:2) On entry, values in :math:\mathrm{rq} appear to be invalid. Check that :math:\mathrm{rq} has not been corrupted between calls to :meth:dim2_scat_shep and dim2_scat_shep_eval. (errno :math:2) On entry, values in :math:\mathrm{iq} appear to be invalid. Check that :math:\mathrm{iq} has not been corrupted between calls to :meth:dim2_scat_shep and dim2_scat_shep_eval. **Warns** **NagAlgorithmicWarning** (errno :math:3) On entry, at least one evaluation point lies outside the region of definition of the interpolant. At such points the corresponding values in :math:\mathrm{q} and :math:\mathrm{qx} contain extrapolated approximations. Points should be evaluated one by one to identify extrapolated values. .. _e01sh-py2-py-notes: **Notes** dim2_scat_shep_eval takes as input the interpolant :math:Q\left(x, y\right) of a set of scattered data points :math:\left(x_r, y_r, f_r\right), for :math:\textit{r} = 1,2,\ldots,m, as computed by :meth:dim2_scat_shep, and evaluates the interpolant and its first partial derivatives at the set of points :math:\left(u_i, v_i\right), for :math:\textit{i} = 1,2,\ldots,n. dim2_scat_shep_eval must only be called after a call to :meth:dim2_scat_shep. This function is derived from the function QS2GRD described by Renka (1988). .. _e01sh-py2-py-references: **References** Renka, R J, 1988, Algorithm 660: QSHEP2D: Quadratic Shepard method for bivariate interpolation of scattered data, ACM Trans. Math. 
Software (14), 149--150 """ raise NotImplementedError [docs]def dim3_scat_shep(x, y, z, f, nw, nq): r""" dim3_scat_shep generates a three-dimensional interpolant to a set of scattered data points, using a modified Shepard method. .. _e01tg-py2-py-doc: For full information please refer to the NAG Library document for e01tg https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01tgf.html .. _e01tg-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(m\right) :math:\mathrm{x}[\textit{r}-1], :math:\mathrm{y}[\textit{r}-1], :math:\mathrm{z}[\textit{r}-1] must be set to the Cartesian coordinates of the data point :math:\left(x_{\textit{r}}, y_{\textit{r}}, z_{\textit{r}}\right), for :math:\textit{r} = 1,2,\ldots,m. **y** : float, array-like, shape :math:\left(m\right) :math:\mathrm{x}[\textit{r}-1], :math:\mathrm{y}[\textit{r}-1], :math:\mathrm{z}[\textit{r}-1] must be set to the Cartesian coordinates of the data point :math:\left(x_{\textit{r}}, y_{\textit{r}}, z_{\textit{r}}\right), for :math:\textit{r} = 1,2,\ldots,m. **z** : float, array-like, shape :math:\left(m\right) :math:\mathrm{x}[\textit{r}-1], :math:\mathrm{y}[\textit{r}-1], :math:\mathrm{z}[\textit{r}-1] must be set to the Cartesian coordinates of the data point :math:\left(x_{\textit{r}}, y_{\textit{r}}, z_{\textit{r}}\right), for :math:\textit{r} = 1,2,\ldots,m. **f** : float, array-like, shape :math:\left(m\right) :math:\mathrm{f}[\textit{r}-1] must be set to the data value :math:f_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m. **nw** : int The number :math:N_w of data points that determines each radius of influence :math:R_w, appearing in the definition of each of the weights :math:w_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m (see :ref:Notes <e01tg-py2-py-notes>). Note that :math:R_w is different for each weight. If :math:\mathrm{nw}\leq 0 the default value :math:\mathrm{nw} = \mathrm{min}\left(32, {m-1}\right) is used instead. 
**nq** : int The number :math:N_q of data points to be used in the least squares fit for coefficients defining the nodal functions :math:q_r\left(x, y, z\right) (see :ref:Notes <e01tg-py2-py-notes>). If :math:\mathrm{nq}\leq 0 the default value :math:\mathrm{nq} = \mathrm{min}\left(17, {m-1}\right) is used instead. **Returns** **iq** : int, ndarray, shape :math:\left(2\times m+1\right) Integer data defining the interpolant :math:Q\left(x, y, z\right). **rq** : float, ndarray, shape :math:\left(10\times m+7\right) Real data defining the interpolant :math:Q\left(x, y, z\right). .. _e01tg-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:\textit{lrq} is too small: :math:\textit{lrq} = \langle\mathit{\boldsymbol{value}}\rangle. (errno :math:1) On entry, :math:\textit{liq} is too small: :math:\textit{liq} = \langle\mathit{\boldsymbol{value}}\rangle. (errno :math:1) On entry, :math:\mathrm{nw} = \langle\mathit{\boldsymbol{value}}\rangle and :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nw}\leq \mathrm{min}\left(40, {m-1}\right). (errno :math:1) On entry, :math:\mathrm{nq} = \langle\mathit{\boldsymbol{value}}\rangle and :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nq}\leq \mathrm{min}\left(40, {m-1}\right). (errno :math:1) On entry, :math:\mathrm{nq} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nq} \leq 0 or :math:\mathrm{nq} \geq 9. (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 10. (errno :math:2) There are duplicate nodes in the dataset. :math:\left({\mathrm{x}[\textit{I}-1]}, {\mathrm{y}[\textit{I}-1]}, {\mathrm{z}[\textit{I}-1]}\right) = \left({\mathrm{x}[\textit{J}-1]}, {\mathrm{y}[\textit{J}-1]}, {\mathrm{z}[\textit{J}-1]}\right) for: :math:\textit{I} = \langle\mathit{\boldsymbol{value}}\rangle and :math:\textit{J} = \langle\mathit{\boldsymbol{value}}\rangle. The interpolant cannot be derived. 
(errno :math:3) All nodes are coplanar. There is no unique solution. .. _e01tg-py2-py-notes: **Notes** dim3_scat_shep constructs a smooth function :math:Q\left(x, y, z\right) which interpolates a set of :math:m scattered data points :math:\left(x_r, y_r, z_r, f_r\right), for :math:r = 1,2,\ldots,m, using a modification of Shepard's method. The surface is continuous and has continuous first partial derivatives. The basic Shepard method, which is a generalization of the two-dimensional method described in Shepard (1968), interpolates the input data with the weighted mean .. math:: Q\left(x, y, z\right) = \frac{{\sum_{{r = 1}}^mw_r\left(x, y, z\right)q_r}}{{\sum_{{r = 1}}^mw_r\left(x, y, z\right)}}\text{,} where .. math:: q_r = f_r\text{ and }w_r\left(x, y, z\right) = \frac{1}{{d_r^2}}\text{ and }d_r^2 = \left(x-x_r\right)^2+\left(y-y_r\right)^2+\left(z-z_r\right)^2\text{.} The basic method is global in that the interpolated value at any point depends on all the data, but this function uses a modification (see Franke and Nielson (1980) and Renka (1988a)), whereby the method becomes local by adjusting each :math:w_r\left(x, y, z\right) to be zero outside a sphere with centre :math:\left(x_r, y_r, z_r\right) and some radius :math:R_w. Also, to improve the performance of the basic method, each :math:q_r above is replaced by a function :math:q_r\left(x, y, z\right), which is a quadratic fitted by weighted least squares to data local to :math:\left(x_r, y_r, z_r\right) and forced to interpolate :math:\left(x_r, y_r, z_r, f_r\right). In this context, a point :math:\left(x, y, z\right) is defined to be local to another point if it lies within some distance :math:R_q of it. Computation of these quadratics constitutes the main work done by this function. The efficiency of the function is further enhanced by using a cell method for nearest neighbour searching due to Bentley and Friedman (1979). 
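To make the weighted mean above concrete, the following is a minimal NumPy sketch of the *basic* (global, unmodified) Shepard method in three dimensions. It is illustrative only and not part of the NAG interface; the function name shepard_basic and its argument names are hypothetical.

```python
import numpy as np

def shepard_basic(xs, ys, zs, fs, x, y, z):
    """Basic Shepard weighted mean Q(x,y,z) = sum(w_r * f_r) / sum(w_r),
    with w_r = 1/d_r^2 and d_r^2 = (x-x_r)^2 + (y-y_r)^2 + (z-z_r)^2."""
    d2 = (x - xs) ** 2 + (y - ys) ** 2 + (z - zs) ** 2
    on_node = d2 == 0.0
    if on_node.any():
        # the weight diverges at a data point, so Q interpolates there exactly
        return float(fs[on_node][0])
    w = 1.0 / d2
    return float(np.sum(w * fs) / np.sum(w))
```

Because every weight is positive, the value returned by this basic method always lies between the minimum and maximum data values; replacing each :math:f_r by a locally fitted quadratic :math:q_r\left(x, y, z\right), as described above, is what removes this limitation.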
The radii :math:R_w and :math:R_q are chosen to be just large enough to include :math:N_w and :math:N_q data points, respectively, for user-supplied constants :math:N_w and :math:N_q. Default values of these arguments are provided by the function, and advice on alternatives is given in Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01tgf.html#fcomments2>__. This function is derived from the function QSHEP3 described by Renka (1988b). Values of the interpolant :math:Q\left(x, y, z\right) generated by this function, and its first partial derivatives, can subsequently be evaluated for points in the domain of the data by a call to :meth:dim3_scat_shep_eval. .. _e01tg-py2-py-references: **References** Bentley, J L and Friedman, J H, 1979, Data structures for range searching, ACM Comput. Surv. (11), 397--409 Franke, R and Nielson, G, 1980, Smooth interpolation of large sets of scattered data, Internat. J. Num. Methods Engrg. (15), 1691--1704 Renka, R J, 1988a, Multivariate interpolation of large sets of scattered data, ACM Trans. Math. Software (14), 139--148 Renka, R J, 1988b, Algorithm 661: QSHEP3D: Quadratic Shepard method for trivariate interpolation of scattered data, ACM Trans. Math. Software (14), 151--152 Shepard, D, 1968, A two-dimensional interpolation function for irregularly spaced data, Proc. 23rd Nat. Conf. ACM, 517--523, Brandon/Systems Press Inc., Princeton """ raise NotImplementedError [docs]def dim3_scat_shep_eval(x, y, z, f, iq, rq, u, v, w): r""" dim3_scat_shep_eval evaluates the three-dimensional interpolating function generated by :meth:dim3_scat_shep and its first partial derivatives. .. _e01th-py2-py-doc: For full information please refer to the NAG Library document for e01th https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01thf.html ..
_e01th-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(m\right) :math:\textit{m}, :math:\mathrm{x}, :math:\mathrm{y}, :math:\mathrm{z} and :math:\mathrm{f} must be the same values as were supplied in the preceding call to :meth:dim3_scat_shep. **y** : float, array-like, shape :math:\left(m\right) :math:\textit{m}, :math:\mathrm{x}, :math:\mathrm{y}, :math:\mathrm{z} and :math:\mathrm{f} must be the same values as were supplied in the preceding call to :meth:dim3_scat_shep. **z** : float, array-like, shape :math:\left(m\right) :math:\textit{m}, :math:\mathrm{x}, :math:\mathrm{y}, :math:\mathrm{z} and :math:\mathrm{f} must be the same values as were supplied in the preceding call to :meth:dim3_scat_shep. **f** : float, array-like, shape :math:\left(m\right) :math:\textit{m}, :math:\mathrm{x}, :math:\mathrm{y}, :math:\mathrm{z} and :math:\mathrm{f} must be the same values as were supplied in the preceding call to :meth:dim3_scat_shep. **iq** : int, array-like, shape :math:\left(2\times m+1\right) Must be unchanged from the value returned from a previous call to :meth:dim3_scat_shep. **rq** : float, array-like, shape :math:\left(10\times m+7\right) Must be unchanged from the value returned from a previous call to :meth:dim3_scat_shep. **u** : float, array-like, shape :math:\left(n\right) :math:\mathrm{u}[\textit{i}-1], :math:\mathrm{v}[\textit{i}-1], :math:\mathrm{w}[\textit{i}-1] must be set to the evaluation point :math:\left(u_{\textit{i}}, v_{\textit{i}}, w_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,n. **v** : float, array-like, shape :math:\left(n\right) :math:\mathrm{u}[\textit{i}-1], :math:\mathrm{v}[\textit{i}-1], :math:\mathrm{w}[\textit{i}-1] must be set to the evaluation point :math:\left(u_{\textit{i}}, v_{\textit{i}}, w_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,n. 
**w** : float, array-like, shape :math:\left(n\right) :math:\mathrm{u}[\textit{i}-1], :math:\mathrm{v}[\textit{i}-1], :math:\mathrm{w}[\textit{i}-1] must be set to the evaluation point :math:\left(u_{\textit{i}}, v_{\textit{i}}, w_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,n. **Returns** **q** : float, ndarray, shape :math:\left(n\right) :math:\mathrm{q}[\textit{i}-1] contains the value of the interpolant, at :math:\left(u_{\textit{i}}, v_{\textit{i}}, w_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,n. If any of these evaluation points lie outside the region of definition of the interpolant the corresponding entries in :math:\mathrm{q} are set to an extrapolated approximation, and dim3_scat_shep_eval returns with :math:\mathrm{errno} = 3. **qx** : float, ndarray, shape :math:\left(n\right) :math:\mathrm{qx}[\textit{i}-1], :math:\mathrm{qy}[\textit{i}-1], :math:\mathrm{qz}[\textit{i}-1] contains the value of the partial derivatives of the interpolant :math:Q\left(x, y, z\right) at :math:\left(u_{\textit{i}}, v_{\textit{i}}, w_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,n. If any of these evaluation points lie outside the region of definition of the interpolant, the corresponding entries in :math:\mathrm{qx}, :math:\mathrm{qy} and :math:\mathrm{qz} are set to extrapolated approximations to the partial derivatives, and dim3_scat_shep_eval returns with :math:\mathrm{errno} = 3. **qy** : float, ndarray, shape :math:\left(n\right) :math:\mathrm{qx}[\textit{i}-1], :math:\mathrm{qy}[\textit{i}-1], :math:\mathrm{qz}[\textit{i}-1] contains the value of the partial derivatives of the interpolant :math:Q\left(x, y, z\right) at :math:\left(u_{\textit{i}}, v_{\textit{i}}, w_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,n. 
If any of these evaluation points lie outside the region of definition of the interpolant, the corresponding entries in :math:\mathrm{qx}, :math:\mathrm{qy} and :math:\mathrm{qz} are set to extrapolated approximations to the partial derivatives, and dim3_scat_shep_eval returns with :math:\mathrm{errno} = 3. **qz** : float, ndarray, shape :math:\left(n\right) :math:\mathrm{qx}[\textit{i}-1], :math:\mathrm{qy}[\textit{i}-1], :math:\mathrm{qz}[\textit{i}-1] contains the value of the partial derivatives of the interpolant :math:Q\left(x, y, z\right) at :math:\left(u_{\textit{i}}, v_{\textit{i}}, w_{\textit{i}}\right), for :math:\textit{i} = 1,2,\ldots,n. If any of these evaluation points lie outside the region of definition of the interpolant, the corresponding entries in :math:\mathrm{qx}, :math:\mathrm{qy} and :math:\mathrm{qz} are set to extrapolated approximations to the partial derivatives, and dim3_scat_shep_eval returns with :math:\mathrm{errno} = 3. .. _e01th-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:\textit{lrq} is too small: :math:\textit{lrq} = \langle\mathit{\boldsymbol{value}}\rangle. (errno :math:1) On entry, :math:\textit{liq} is too small: :math:\textit{liq} = \langle\mathit{\boldsymbol{value}}\rangle. (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n\geq 1. (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 10. (errno :math:2) On entry, values in :math:\mathrm{rq} appear to be invalid. Check that :math:\mathrm{rq} has not been corrupted between calls to :meth:dim3_scat_shep and dim3_scat_shep_eval. (errno :math:2) On entry, values in :math:\mathrm{iq} appear to be invalid. Check that :math:\mathrm{iq} has not been corrupted between calls to :meth:dim3_scat_shep and dim3_scat_shep_eval. 
**Warns** **NagAlgorithmicWarning** (errno :math:3) On entry, at least one evaluation point lies outside the region of definition of the interpolant. At such points the corresponding values in :math:\mathrm{q} and :math:\mathrm{qx} contain extrapolated approximations. Points should be evaluated one by one to identify extrapolated values. .. _e01th-py2-py-notes: **Notes** dim3_scat_shep_eval takes as input the interpolant :math:Q\left(x, y, z\right) of a set of scattered data points :math:\left(x_r, y_r, z_r, f_r\right), for :math:\textit{r} = 1,2,\ldots,m, as computed by :meth:dim3_scat_shep, and evaluates the interpolant and its first partial derivatives at the set of points :math:\left(u_i, v_i, w_i\right), for :math:\textit{i} = 1,2,\ldots,n. dim3_scat_shep_eval must only be called after a call to :meth:dim3_scat_shep. This function is derived from the function QS3GRD described by Renka (1988). .. _e01th-py2-py-references: **References** Renka, R J, 1988, Algorithm 661: QSHEP3D: Quadratic Shepard method for trivariate interpolation of scattered data, ACM Trans. Math. Software (14), 151--152 """ raise NotImplementedError [docs]def dim4_scat_shep(x, f, nw, nq): r""" dim4_scat_shep generates a four-dimensional interpolant to a set of scattered data points, using a modified Shepard method. .. _e01tk-py2-py-doc: For full information please refer to the NAG Library document for e01tk https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01tkf.html .. _e01tk-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(4, m\right) :math:\mathrm{x}[0,\textit{r}-1],\ldots,\mathrm{x}[3,\textit{r}-1] must be set to the Cartesian coordinates of the data point :math:\mathbf{x}_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m. **f** : float, array-like, shape :math:\left(m\right) :math:\mathrm{f}[\textit{r}-1] must be set to the data value :math:f_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m. 
**nw** : int The number :math:N_w of data points that determines each radius of influence :math:R_w, appearing in the definition of each of the weights :math:w_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m (see :ref:Notes <e01tk-py2-py-notes>). Note that :math:R_w is different for each weight. If :math:\mathrm{nw}\leq 0 the default value :math:\mathrm{nw} = \mathrm{min}\left(32, {m-1}\right) is used instead. **nq** : int The number :math:N_q of data points to be used in the least squares fit for coefficients defining the quadratic functions :math:q_r\left(\mathbf{x}\right) (see :ref:Notes <e01tk-py2-py-notes>). If :math:\mathrm{nq}\leq 0 the default value :math:\mathrm{nq} = \mathrm{min}\left(38, {m-1}\right) is used instead. **Returns** **iq** : int, ndarray, shape :math:\left(2\times m+1\right) Integer data defining the interpolant :math:Q\left(\mathbf{x}\right). **rq** : float, ndarray, shape :math:\left(15\times m+9\right) Real data defining the interpolant :math:Q\left(\mathbf{x}\right). .. _e01tk-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:\mathrm{nw} = \langle\mathit{\boldsymbol{value}}\rangle and :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nw}\leq \mathrm{min}\left(50, {m-1}\right). (errno :math:1) On entry, :math:\mathrm{nq} = \langle\mathit{\boldsymbol{value}}\rangle and :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nq}\leq \mathrm{min}\left(50, {m-1}\right). (errno :math:1) On entry, :math:\mathrm{nq} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nq} \leq 0 or :math:\mathrm{nq} \geq 14. (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 16. (errno :math:2) There are duplicate nodes in the dataset. 
:math:{\mathrm{x}[i-1,k-1]} = {\mathrm{x}[j-1,k-1]}, for :math:i = \langle\mathit{\boldsymbol{value}}\rangle, :math:j = \langle\mathit{\boldsymbol{value}}\rangle and :math:k = 1,2,\ldots,4. The interpolant cannot be derived. (errno :math:3) On entry, all the data points lie on the same three-dimensional hypersurface. No unique solution exists. .. _e01tk-py2-py-notes: **Notes** dim4_scat_shep constructs a smooth function :math:Q\left(\mathbf{x}\right), :math:\mathbf{x} \in \mathbb{R}^4 which interpolates a set of :math:m scattered data points :math:\left(\mathbf{x}_r, f_r\right), for :math:r = 1,2,\ldots,m, using a modification of Shepard's method. The surface is continuous and has continuous first partial derivatives. The basic Shepard method, which is a generalization of the two-dimensional method described in Shepard (1968), interpolates the input data with the weighted mean .. math:: Q\left(\mathbf{x}\right) = \frac{{\sum_{{r = 1}}^mw_r\left(\mathbf{x}\right)q_r}}{{\sum_{{r = 1}}^mw_r\left(\mathbf{x}\right)}}\text{,} where :math:q_r = f_r, :math:w_r\left(\mathbf{x}\right) = \frac{1}{{d_r^2}} and :math:d_r^2 = \left\lVert \mathbf{x}-\mathbf{x}_r\right\rVert_2^2. The basic method is global in that the interpolated value at any point depends on all the data, but dim4_scat_shep uses a modification (see Franke and Nielson (1980) and Renka (1988a)), whereby the method becomes local by adjusting each :math:w_r\left(\mathbf{x}\right) to be zero outside a hypersphere with centre :math:\mathbf{x}_r and some radius :math:R_w. Also, to improve the performance of the basic method, each :math:q_r above is replaced by a function :math:q_r\left(\mathbf{x}\right), which is a quadratic fitted by weighted least squares to data local to :math:\mathbf{x}_r and forced to interpolate :math:\left(\mathbf{x}_r, f_r\right). In this context, a point :math:\mathbf{x} is defined to be local to another point if it lies within some distance :math:R_q of it. 
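As an illustration of how a localizing radius such as :math:R_w can be chosen — just large enough that the hypersphere around each node contains a prescribed number of other data points — here is a brute-force NumPy sketch. It stands in for the much faster cell-based nearest-neighbour search that the NAG function actually uses; the name influence_radii is hypothetical.

```python
import numpy as np

def influence_radii(pts, n_w):
    """For each node, return the distance to its n_w-th nearest *other* node,
    i.e. the smallest radius whose hypersphere holds n_w neighbours.
    pts has shape (m, dim); an O(m^2) search is fine for a sketch."""
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    d.sort(axis=1)      # column 0 is each node's zero self-distance
    return d[:, n_w]    # distance to the n_w-th nearest neighbour
```

Note that, as the surrounding text says of :math:R_w, the radius is generally different for every node: interior nodes in dense regions get small radii, while isolated nodes get large ones.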
The efficiency of dim4_scat_shep is enhanced by using a cell method for nearest neighbour searching due to Bentley and Friedman (1979) with a cell density of :math:3. The radii :math:R_w and :math:R_q are chosen to be just large enough to include :math:N_w and :math:N_q data points, respectively, for user-supplied constants :math:N_w and :math:N_q. Default values of these arguments are provided by the function, and advice on alternatives is given in Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01tkf.html#fcomments2>__. dim4_scat_shep is derived from the new implementation of QSHEP3 described by Renka (1988b). It uses the modification for high-dimensional interpolation described by Berry and Minser (1999). Values of the interpolant :math:Q\left(\mathbf{x}\right) generated by dim4_scat_shep, and its first partial derivatives, can subsequently be evaluated for points in the domain of the data by a call to :meth:dim4_scat_shep_eval. .. _e01tk-py2-py-references: **References** Bentley, J L and Friedman, J H, 1979, Data structures for range searching, ACM Comput. Surv. (11), 397--409 Berry, M W, Minser, K S, 1999, Algorithm 798: high-dimensional interpolation using the modified Shepard method, ACM Trans. Math. Software (25), 353--366 Franke, R and Nielson, G, 1980, Smooth interpolation of large sets of scattered data, Internat. J. Num. Methods Engrg. (15), 1691--1704 Renka, R J, 1988, Multivariate interpolation of large sets of scattered data, ACM Trans. Math. Software (14), 139--148 Renka, R J, 1988, Algorithm 661: QSHEP3D: Quadratic Shepard method for trivariate interpolation of scattered data, ACM Trans. Math. Software (14), 151--152 Shepard, D, 1968, A two-dimensional interpolation function for irregularly spaced data, Proc. 23rd Nat. Conf. 
ACM, 517--523, Brandon/Systems Press Inc., Princeton """ raise NotImplementedError [docs]def dim4_scat_shep_eval(x, f, iq, rq, xe): r""" dim4_scat_shep_eval evaluates the four-dimensional interpolating function generated by :meth:dim4_scat_shep and its first partial derivatives. .. _e01tl-py2-py-doc: For full information please refer to the NAG Library document for e01tl https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01tlf.html .. _e01tl-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(4, m\right) Note: the coordinates of :math:x_r are stored in :math:\mathrm{x}[0,r-1]\ldots \mathrm{x}[3,r-1]. **Must** be the same array supplied as argument :math:\textit{x} in the preceding call to :meth:dim4_scat_shep. It **must** remain unchanged between calls. **f** : float, array-like, shape :math:\left(m\right) **Must** be the same array supplied as argument :math:\textit{f} in the preceding call to :meth:dim4_scat_shep. It **must** remain unchanged between calls. **iq** : int, array-like, shape :math:\left(2\times m+1\right) **Must** be the same array returned as argument :math:\textit{iq} in the preceding call to :meth:dim4_scat_shep. It **must** remain unchanged between calls. **rq** : float, array-like, shape :math:\left(15\times m+9\right) **Must** be the same array returned as argument :math:\textit{rq} in the preceding call to :meth:dim4_scat_shep. It **must** remain unchanged between calls. **xe** : float, array-like, shape :math:\left(4, n\right) :math:\mathrm{xe}[0,\textit{i}-1],\ldots,\mathrm{xe}[3,\textit{i}-1] must be set to the evaluation point :math:\mathbf{x}_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,n. **Returns** **q** : float, ndarray, shape :math:\left(n\right) :math:\mathrm{q}[\textit{i}-1] contains the value of the interpolant, at :math:\mathbf{x}_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,n. 
If any of these evaluation points lie outside the region of definition of the interpolant the corresponding entries in :math:\mathrm{q} are set to an extrapolated approximation, and dim4_scat_shep_eval returns with :math:\mathrm{errno} = 3. **qx** : float, ndarray, shape :math:\left(4, n\right) :math:\mathrm{qx}[j-1,i-1] contains the value of the partial derivatives with respect to :math:\mathbf{x}_j of the interpolant :math:Q\left(\mathbf{x}\right) at :math:\mathbf{x}_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,n, and for each of the four partial derivatives :math:j = 1,2,3,4. If any of these evaluation points lie outside the region of definition of the interpolant, the corresponding entries in :math:\mathrm{qx} are set to extrapolated approximations to the partial derivatives, and dim4_scat_shep_eval returns with :math:\mathrm{errno} = 3. .. _e01tl-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n\geq 1. (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 16. (errno :math:2) On entry, values in :math:\mathrm{rq} appear to be invalid. Check that :math:\mathrm{rq} has not been corrupted between calls to :meth:dim4_scat_shep and dim4_scat_shep_eval. (errno :math:2) On entry, values in :math:\mathrm{iq} appear to be invalid. Check that :math:\mathrm{iq} has not been corrupted between calls to :meth:dim4_scat_shep and dim4_scat_shep_eval. **Warns** **NagAlgorithmicWarning** (errno :math:3) On entry, at least one evaluation point lies outside the region of definition of the interpolant. At such points the corresponding values in :math:\mathrm{q} and :math:\mathrm{qx} contain extrapolated approximations. Points should be evaluated one by one to identify extrapolated values. .. 
_e01tl-py2-py-notes: **Notes** dim4_scat_shep_eval takes as input the interpolant :math:Q\left(\mathbf{x}\right), :math:x \in \mathbb{R}^4 of a set of scattered data points :math:\left(\mathbf{x}_{\textit{r}}, f_{\textit{r}}\right), for :math:\textit{r} = 1,2,\ldots,m, as computed by :meth:dim4_scat_shep, and evaluates the interpolant and its first partial derivatives at the set of points :math:\mathbf{x}_i, for :math:\textit{i} = 1,2,\ldots,n. dim4_scat_shep_eval must only be called after a call to :meth:dim4_scat_shep. dim4_scat_shep_eval is derived from the new implementation of QS3GRD described by Renka (1988). It uses the modification for high-dimensional interpolation described by Berry and Minser (1999). .. _e01tl-py2-py-references: **References** Berry, M W, Minser, K S, 1999, Algorithm 798: high-dimensional interpolation using the modified Shepard method, ACM Trans. Math. Software (25), 353--366 Renka, R J, 1988, Algorithm 661: QSHEP3D: Quadratic Shepard method for trivariate interpolation of scattered data, ACM Trans. Math. Software (14), 151--152 """ raise NotImplementedError [docs]def dim5_scat_shep(x, f, nw, nq): r""" dim5_scat_shep generates a five-dimensional interpolant to a set of scattered data points, using a modified Shepard method. .. _e01tm-py2-py-doc: For full information please refer to the NAG Library document for e01tm https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01tmf.html .. _e01tm-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(5, m\right) :math:\mathrm{x}[0,\textit{r}-1],\ldots,\mathrm{x}[4,\textit{r}-1] must be set to the Cartesian coordinates of the data point :math:\mathbf{x}_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m. **f** : float, array-like, shape :math:\left(m\right) :math:\mathrm{f}[\textit{r}-1] must be set to the data value :math:f_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m. 
**nw** : int The number :math:N_w of data points that determines each radius of influence :math:R_w, appearing in the definition of each of the weights :math:w_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m (see :ref:Notes <e01tm-py2-py-notes>). Note that :math:R_w is different for each weight. If :math:\mathrm{nw}\leq 0 the default value :math:\mathrm{nw} = \mathrm{min}\left(32, {m-1}\right) is used instead. **nq** : int The number :math:N_q of data points to be used in the least squares fit for coefficients defining the quadratic functions :math:q_r\left(\mathbf{x}\right) (see :ref:Notes <e01tm-py2-py-notes>). If :math:\mathrm{nq}\leq 0 the default value :math:\mathrm{nq} = \mathrm{min}\left(50, {m-1}\right) is used instead. **Returns** **iq** : int, ndarray, shape :math:\left(2\times m+1\right) Integer data defining the interpolant :math:Q\left(\mathbf{x}\right). **rq** : float, ndarray, shape :math:\left(21\times m+11\right) Real data defining the interpolant :math:Q\left(\mathbf{x}\right). .. _e01tm-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:\mathrm{nw} = \langle\mathit{\boldsymbol{value}}\rangle and :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nw}\leq \mathrm{min}\left(50, {m-1}\right). (errno :math:1) On entry, :math:\mathrm{nq} = \langle\mathit{\boldsymbol{value}}\rangle and :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nq}\leq \mathrm{min}\left(70, {m-1}\right). (errno :math:1) On entry, :math:\mathrm{nq} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nq} \leq 0 or :math:\mathrm{nq} \geq 20. (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 23. (errno :math:2) There are duplicate nodes in the dataset. 
:math:{\mathrm{x}[i-1,k-1]} = {\mathrm{x}[j-1,k-1]}, for :math:i = \langle\mathit{\boldsymbol{value}}\rangle, :math:j = \langle\mathit{\boldsymbol{value}}\rangle and :math:k = 1,2,\ldots,5. The interpolant cannot be derived. (errno :math:3) On entry, all the data points lie on the same four-dimensional hypersurface. No unique solution exists. .. _e01tm-py2-py-notes: **Notes** dim5_scat_shep constructs a smooth function :math:Q\left(\mathbf{x}\right), :math:\mathbf{x} \in \mathbb{R}^5 which interpolates a set of :math:m scattered data points :math:\left(\mathbf{x}_r, f_r\right), for :math:r = 1,2,\ldots,m, using a modification of Shepard's method. The surface is continuous and has continuous first partial derivatives. The basic Shepard method, which is a generalization of the two-dimensional method described in Shepard (1968), interpolates the input data with the weighted mean .. math:: Q\left(\mathbf{x}\right) = \frac{{\sum_{{r = 1}}^mw_r\left(\mathbf{x}\right)q_r}}{{\sum_{{r = 1}}^mw_r\left(\mathbf{x}\right)}}\text{,} where :math:q_r = f_r, :math:w_r\left(\mathbf{x}\right) = \frac{1}{{d_r^2}} and :math:d_r^2 = \left\lVert \mathbf{x}-\mathbf{x}_r\right\rVert_2^2. The basic method is global in that the interpolated value at any point depends on all the data, but dim5_scat_shep uses a modification (see Franke and Nielson (1980) and Renka (1988a)), whereby the method becomes local by adjusting each :math:w_r\left(\mathbf{x}\right) to be zero outside a hypersphere with centre :math:\mathbf{x}_r and some radius :math:R_w. Also, to improve the performance of the basic method, each :math:q_r above is replaced by a function :math:q_r\left(\mathbf{x}\right), which is a quadratic fitted by weighted least squares to data local to :math:\mathbf{x}_r and forced to interpolate :math:\left(\mathbf{x}_r, f_r\right). In this context, a point :math:\mathbf{x} is defined to be local to another point if it lies within some distance :math:R_q of it. 
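To make the radius of influence concrete, the sketch below computes :math:R_w for one data point as the distance to its :math:N_w-th nearest neighbour, then applies a weight that decays smoothly to zero on the boundary of the hypersphere. The particular truncated form :math:\left(\left(R_w-d\right)/\left(R_wd\right)\right)^2 follows Renka (1988a) and is assumed here for illustration; it is not necessarily the library's exact formula, and the helper names are hypothetical.

```python
# Sketch: choose R_w as the distance to the N_w-th nearest neighbour of x_r,
# then use a weight that vanishes outside that hypersphere.
# The weight form ((R_w - d)_+ / (R_w d))^2 follows Renka's modified Shepard
# method and is an assumption made for illustration only.
import math

def radius_of_influence(points, r, nw):
    """R_w for data point r: just large enough to include nw other points."""
    xr = points[r]
    dists = sorted(math.dist(xr, xs) for s, xs in enumerate(points) if s != r)
    return dists[nw - 1]

def local_weight(x, xr, rw):
    """Localized weight; zero for points at distance rw or more from xr."""
    d = math.dist(x, xr)
    if d >= rw:
        return 0.0                      # outside the hypersphere: no influence
    # Singular at d == 0, like 1/d^2; the data point itself is interpolated
    # exactly and would be handled separately.
    return ((rw - d) / (rw * d)) ** 2
```

With this choice, each :math:w_r\left(\mathbf{x}\right) contributes only locally, so evaluating the interpolant at a point involves only the data inside the surrounding hyperspheres rather than all :math:m points.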
The efficiency of dim5_scat_shep is enhanced by using a cell method for nearest neighbour searching due to Bentley and Friedman (1979) with a cell density of :math:3. The radii :math:R_w and :math:R_q are chosen to be just large enough to include :math:N_w and :math:N_q data points, respectively, for user-supplied constants :math:N_w and :math:N_q. Default values of these arguments are provided, and advice on alternatives is given in Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01tmf.html#fcomments2>__. dim5_scat_shep is derived from the new implementation of QSHEP3 described by Renka (1988b). It uses the modification for five-dimensional interpolation described by Berry and Minser (1999). Values of the interpolant :math:Q\left(\mathbf{x}\right) generated by dim5_scat_shep, and its first partial derivatives, can subsequently be evaluated for points in the domain of the data by a call to :meth:dim5_scat_shep_eval. .. _e01tm-py2-py-references: **References** Bentley, J L and Friedman, J H, 1979, Data structures for range searching, ACM Comput. Surv. (11), 397--409 Berry, M W, Minser, K S, 1999, Algorithm 798: high-dimensional interpolation using the modified Shepard method, ACM Trans. Math. Software (25), 353--366 Franke, R and Nielson, G, 1980, Smooth interpolation of large sets of scattered data, Internat. J. Num. Methods Engrg. (15), 1691--1704 Renka, R J, 1988, Multivariate interpolation of large sets of scattered data, ACM Trans. Math. Software (14), 139--148 Renka, R J, 1988, Algorithm 661: QSHEP3D: Quadratic Shepard method for trivariate interpolation of scattered data, ACM Trans. Math. Software (14), 151--152 Shepard, D, 1968, A two-dimensional interpolation function for irregularly spaced data, Proc. 23rd Nat. Conf. 
ACM, 517--523, Brandon/Systems Press Inc., Princeton """ raise NotImplementedError [docs]def dim5_scat_shep_eval(x, f, iq, rq, xe): r""" dim5_scat_shep_eval evaluates the five-dimensional interpolating function generated by :meth:dim5_scat_shep and its first partial derivatives. .. _e01tn-py2-py-doc: For full information please refer to the NAG Library document for e01tn https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01tnf.html .. _e01tn-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(5, m\right) **Must** be the same array supplied as argument :math:\textit{x} in the preceding call to :meth:dim5_scat_shep. It **must** remain unchanged between calls. **f** : float, array-like, shape :math:\left(m\right) **Must** be the same array supplied as argument :math:\textit{f} in the preceding call to :meth:dim5_scat_shep. It **must** remain unchanged between calls. **iq** : int, array-like, shape :math:\left(2\times m+1\right) **Must** be the same array returned as argument :math:\textit{iq} in the preceding call to :meth:dim5_scat_shep. It **must** remain unchanged between calls. **rq** : float, array-like, shape :math:\left(21\times m+11\right) **Must** be the same array returned as argument :math:\textit{rq} in the preceding call to :meth:dim5_scat_shep. It **must** remain unchanged between calls. **xe** : float, array-like, shape :math:\left(5, n\right) :math:\mathrm{xe}[0,\textit{i}-1],\ldots,\mathrm{xe}[4,\textit{i}-1] must be set to the evaluation point :math:\mathbf{x}_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,n. **Returns** **q** : float, ndarray, shape :math:\left(n\right) :math:\mathrm{q}[\textit{i}-1] contains the value of the interpolant, at :math:\mathbf{x}_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,n. 
If any of these evaluation points lie outside the region of definition of the interpolant the corresponding entries in :math:\mathrm{q} are set to an extrapolated approximation, and dim5_scat_shep_eval returns with :math:\mathrm{errno} = 3. **qx** : float, ndarray, shape :math:\left(5, n\right) :math:\mathrm{qx}[j-1,i-1] contains the value of the partial derivatives with respect to :math:\mathbf{x}_j of the interpolant :math:Q\left(\mathbf{x}\right) at :math:\mathbf{x}_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,n, and for each of the five partial derivatives :math:j = 1,2,3,4,5. If any of these evaluation points lie outside the region of definition of the interpolant, the corresponding entries in :math:\mathrm{qx} are set to extrapolated approximations to the partial derivatives, and dim5_scat_shep_eval returns with :math:\mathrm{errno} = 3. .. _e01tn-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:n = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:n\geq 1. (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq 23. (errno :math:2) On entry, values in :math:\mathrm{rq} appear to be invalid. Check that :math:\mathrm{rq} has not been corrupted between calls to :meth:dim5_scat_shep and dim5_scat_shep_eval. (errno :math:2) On entry, values in :math:\mathrm{iq} appear to be invalid. Check that :math:\mathrm{iq} has not been corrupted between calls to :meth:dim5_scat_shep and dim5_scat_shep_eval. **Warns** **NagAlgorithmicWarning** (errno :math:3) On entry, at least one evaluation point lies outside the region of definition of the interpolant. At such points the corresponding values in :math:\mathrm{q} and :math:\mathrm{qx} contain extrapolated approximations. Points should be evaluated one by one to identify extrapolated values. .. 
_e01tn-py2-py-notes: **Notes** dim5_scat_shep_eval takes as input the interpolant :math:Q\left(\mathbf{x}\right), :math:\mathbf{x} \in \mathbb{R}^5 of a set of scattered data points :math:\left(\mathbf{x}_{\textit{r}}, f_{\textit{r}}\right), for :math:\textit{r} = 1,2,\ldots,m, as computed by :meth:dim5_scat_shep, and evaluates the interpolant and its first partial derivatives at the set of points :math:\mathbf{x}_{\textit{i}}, for :math:\textit{i} = 1,2,\ldots,n. dim5_scat_shep_eval must only be called after a call to :meth:dim5_scat_shep. dim5_scat_shep_eval is derived from the new implementation of QS3GRD described by Renka (1988). It uses the modification for five-dimensional interpolation described by Berry and Minser (1999). .. _e01tn-py2-py-references: **References** Berry, M W, Minser, K S, 1999, Algorithm 798: high-dimensional interpolation using the modified Shepard method, ACM Trans. Math. Software (25), 353--366 Renka, R J, 1988, Algorithm 661: QSHEP3D: Quadratic Shepard method for trivariate interpolation of scattered data, ACM Trans. Math. Software (14), 151--152 """ raise NotImplementedError [docs]def dimn_grid(narr, uniform, axis, v, point, method, k=1, wf=0.0): r""" dimn_grid interpolates data at a point in :math:n-dimensional space, that is defined by a set of gridded data points. It offers three methods to interpolate the data: Linear Interpolation, Cubic Interpolation and Weighted Average. .. _e01za-py2-py-doc: For full information please refer to the NAG Library document for e01za https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01zaf.html .. _e01za-py2-py-parameters: **Parameters** **narr** : int, array-like, shape :math:\left(d\right) The number of data ordinates in each dimension, with :math:\mathrm{narr}[\textit{i}-1] = n_i, for :math:\textit{i} = 1,2,\ldots,d. **uniform** : bool States whether the data points are uniformly spaced. :math:\mathrm{uniform} = \mathbf{True} The data points are uniformly spaced. 
:math:\mathrm{uniform} = \mathbf{False} The data points are not uniformly spaced. **axis** : float, array-like, shape :math:\left(\textit{lx}\right) Defines the axis. If the data points are uniformly spaced (see argument :math:\mathrm{uniform}) :math:\mathrm{axis} should contain the start and end of each dimension :math:\left(x_{{1 1}}, x_{{1 n_1}}, \ldots, x_{{d 1}}, x_{{d n_d}}\right). If the data points are not uniformly spaced, :math:\mathrm{axis} should contain all the data ordinates for each dimension :math:\left(x_{{1 1}}, x_{{1 2}}, \ldots, x_{{1 n_1}}, \ldots, x_{{d 1}}, x_{{d 2}}, \ldots, x_{{d n_d}}\right). **v** : float, array-like, shape :math:\left(\mathrm{prod}\left(\mathrm{narr}\right)\right) Holds the values of the data points in such an order that the index of a data value with coordinates :math:\left(z_1, z_2, \ldots, z_d\right) is .. math:: \sum_{{i = 1}}^{d}{z_{\textit{i}}\prod_{{n \in \mathbf{S}_i}}n}\text{,} where :math:\mathbf{S}_i = \left\{\mathrm{narr}[l-1]:l = \left. 1, \ldots, {i-1}\right. \right\} e.g., :math:\left(\left(x_{{11}}, x_{{21}}, \ldots, x_{{d1}}\right), \left(x_{{12}}, x_{{21}}, \ldots, x_{{d1}}\right), \ldots, \left(x_{{1n_1}}, x_{{21}}, \ldots, x_{{d1}}\right), \left(x_{{11}}, x_{{22}}, \ldots, x_{{d1}}\right), \left(x_{{12}}, x_{{22}}, \ldots, x_{{d1}}\right), \ldots, \left(x_{{1n_1}}, x_{{2n_2}}, \ldots, x_{{dn_d}}\right)\right). **point** : float, array-like, shape :math:\left(d\right) :math:\mathbf{x}, the point at which the data value is to be interpolated. **method** : int The method to be used. :math:\mathrm{method} = 1 Weighted Average. :math:\mathrm{method} = 2 Linear Interpolation. :math:\mathrm{method} = 3 Cubic Interpolation. **k** : int, optional If :math:\mathrm{method} = 1, :math:\mathrm{k} controls the number of data points used in the Weighted Average method, with :math:\mathrm{k} points used in each dimension, either side of the interpolation point. 
The total number of data points used for the interpolation will, therefore, be :math:\left(2\mathrm{k}\right)^d. If :math:\mathrm{method} \neq 1, then :math:\mathrm{k} is not referenced and need not be set. **wf** : float, optional The power used for the weighted average such that a high power will cause closer points to be more heavily weighted. **Returns** **ans** : float Holds the result of the interpolation. .. _e01za-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:d = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:d\geq 2. (errno :math:2) On entry, :math:\mathrm{narr}[\langle\mathit{\boldsymbol{value}}\rangle] = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{narr}[\textit{i}-1]\geq 2. (errno :math:4) On entry, :math:\mathrm{axis} decreases in dimension :math:\langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{axis} definition must be strictly increasing. (errno :math:5) On entry, :math:\textit{lx} = \langle\mathit{\boldsymbol{value}}\rangle, sum of :math:\mathrm{narr}:math:= \langle\mathit{\boldsymbol{value}}\rangle. Constraint: if :math:\mathrm{uniform} = \mathbf{False}, :math:\textit{lx} = sum of :math:\mathrm{narr}. (errno :math:5) On entry, :math:\textit{lx} = \langle\mathit{\boldsymbol{value}}\rangle, :math:d = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: if :math:\mathrm{uniform} = \mathbf{True}, :math:\textit{lx} = 2d. (errno :math:7) On entry, :math:\mathrm{k} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: if :math:\mathrm{method} = 1, :math:\mathrm{k}\geq 1. (errno :math:8) On entry, :math:\mathrm{point}[\langle\mathit{\boldsymbol{value}}\rangle] = \langle\mathit{\boldsymbol{value}}\rangle and data range :math:= \left[\langle\mathit{\boldsymbol{value}}\rangle, \langle\mathit{\boldsymbol{value}}\rangle\right]. Constraint: :math:\mathrm{point} must be within the data range. 
(errno :math:9) On entry, :math:\mathrm{method} = 3 and :math:\mathrm{uniform} = \mathbf{False}. Constraint: if :math:\mathrm{method} = 3, :math:\mathrm{uniform} must be :math:\mathbf{True}. (errno :math:9) On entry, :math:\mathrm{method} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{method} = 1, :math:2 or :math:3. (errno :math:10) On entry, :math:\mathrm{wf} = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: if :math:\mathrm{method} = 1, :math:1.0\leq \mathrm{wf}\leq 15.0. (errno :math:101) Cubic Interpolation method does not have enough data surrounding :math:\mathrm{point}; interpolation not possible. **Warns** **NagAlgorithmicWarning** (errno :math:201) Warning: the size of :math:\mathrm{k} has been reduced, due to too few data points around :math:\mathrm{point}. .. _e01za-py2-py-notes: **Notes** dimn_grid interpolates an :math:n-dimensional point within a set of gridded data points, :math:\mathbf{Z} = \left\{z_{{1j_1}}, z_{{2j_2}}, \ldots, z_{{dj_d}}\right\}, with corresponding data values :math:\mathbf{F} = \left\{f_{{1j_1}}, f_{{2j_2}}, \ldots, f_{{dj_d}}\right\} where, for the :math:i\ th dimension, :math:j_i = 1,\ldots,n_i and :math:n_i is the number of ordinates in the :math:i\ th dimension. A hypercube of :math:\left(2k\right)^d data points :math:\left[\mathbf{h}_1, \mathbf{h}_2, \ldots, \mathbf{h}_{{\left(2k\right)^d}}\right]⊂\mathbf{Z}, where :math:\mathbf{h}_i = \left(h_{{i1}}, h_{{i2}}, \ldots, h_{{id}}\right) and the corresponding data values are :math:f\left(\mathbf{h}_i\right) \in \mathbf{F}, around the given point, :math:\mathbf{x} = \left(x_1, x_2, \ldots, x_d\right), is found and then used to interpolate using one of the following three methods. (i) Weighted Average, that is a modification of Shepard's method (Shepard (1968)) as used for scattered data in :meth:dimn_scat_shep. This method interpolates the data with the weighted mean .. 
math:: Q\left(\mathbf{x}\right) = \frac{{\sum_{{r = 1}}^{\left(2k\right)^d}w_r\left(\mathbf{x}\right)f_r}}{{\sum_{{r = 1}}^{\left(2k\right)^d}w_r\left(\mathbf{x}\right)}}\text{,} where :math:f_r = f\left(\mathbf{h}_r\right), :math:w_r\left(\mathbf{x}\right) = \frac{1}{{D\left(\left\lvert \mathbf{x}-\mathbf{h}_r\right\rvert \right)}} and :math:D\left(\mathbf{y}\right) = \left. y_1^{\rho }+y_2^{\rho } + \cdots +y_d^{\rho }\right., for a given value of :math:\rho. (#) Linear Interpolation, which takes :math:2^d surrounding data points (:math:k = 1) and performs two one-dimensional linear interpolations in each dimension on data points :math:\mathbf{h}_a and :math:\mathbf{h}_b, reducing the dimension every iteration until it has reached an answer. The formula for linear interpolation in dimension :math:i is simply .. math:: f = f_a+\left(x_i-h_{{ai}}\right)\frac{{f_b-f_a}}{{h_{{bi}}-h_{{ai}}}}\text{,} where :math:f_r = f\left(\mathbf{h}_r\right) and :math:h_{{ai}} < x_i < h_{{bi}}. (#) Cubic Interpolation, based on cubic convolution (Keys (1981)). In a similar way to the Linear Interpolation method, it performs the interpolations in one dimension reducing it each time; however, it requires four surrounding data points in each dimension (:math:k = 2), two in each direction :math:\left(\mathbf{h}_{-1}, \mathbf{h}_0, \mathbf{h}_1, \mathbf{h}_2\right). The following is used to calculate the one-dimensional interpolant in dimension :math:i .. math:: f = \frac{1}{2}\begin{pmatrix}1&t&t^2&t^3\end{pmatrix}\begin{pmatrix}0&2&0&0\\-1&0&1&0\\2&-5&4&-1\\-1&3&-3&1\end{pmatrix}\begin{pmatrix}f_{-1}\\f_0\\f_1\\f_2\end{pmatrix} where :math:t = x_i-h_{{0i}} and :math:f_r = f\left(\mathbf{h}_r\right). .. _e01za-py2-py-references: **References** Keys, R, 1981, Cubic Convolution Interpolation for Digital Image Processing, IEEE Transactions on Acoustics, Speech, and Signal Processing (Vol ASSP-29 No. 
6), 1153--1160, http://hmi.stanford.edu/doc/Tech_Notes/HMI-TN-2004-004-Interpolation/Keys_cubic_interp.pdf Shepard, D, 1968, A two-dimensional interpolation function for irregularly spaced data, Proc. 23rd Nat. Conf. ACM, 517--523, Brandon/Systems Press Inc., Princeton """ raise NotImplementedError [docs]def dimn_scat_shep(x, f, nw=-1, nq=-1): r""" dimn_scat_shep generates a multidimensional interpolant to a set of scattered data points, using a modified Shepard method. When the number of dimensions is no more than five, there are corresponding functions in submodule interp which are specific to the given dimensionality. :meth:dim2_scat_shep generates the two-dimensional interpolant, while :meth:dim3_scat_shep, :meth:dim4_scat_shep and :meth:dim5_scat_shep generate the three-, four- and five-dimensional interpolants respectively. .. _e01zm-py2-py-doc: For full information please refer to the NAG Library document for e01zm https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01zmf.html .. _e01zm-py2-py-parameters: **Parameters** **x** : float, array-like, shape :math:\left(d, m\right) The :math:\textit{d} components of the first data point must be stored in elements :math:0,1,\ldots,d-1 of :math:\mathrm{x}. The second data point must be stored in elements :math:d,d+1,\ldots,2\times d-1 of :math:\mathrm{x}, and so on. In general, the :math:\textit{m} data points must be stored in :math:\mathrm{x}[\textit{j},\textit{i}], for :math:\textit{j} = 0,1,\ldots,d-1, for :math:\textit{i} = 0,1,\ldots,m-1. **f** : float, array-like, shape :math:\left(m\right) :math:\mathrm{f}[r-1] must be set to the data value :math:f_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m. **nw** : int, optional The number :math:N_w of data points that determines each radius of influence :math:R_w, appearing in the definition of each of the weights :math:w_{\textit{r}}, for :math:\textit{r} = 1,2,\ldots,m (see :ref:Notes <e01zm-py2-py-notes>). Note that :math:R_w is different for each weight. 
If :math:\mathrm{nw}\leq 0 the default value :math:\mathrm{nw} = \mathrm{min}\left({2\times \left(d+1\right)\times \left(d+2\right)}, {m-1}\right) is used instead. **nq** : int, optional The number :math:N_q of data points to be used in the least squares fit for coefficients defining the quadratic functions :math:q_r\left(\mathbf{x}\right) (see :ref:Notes <e01zm-py2-py-notes>). If :math:\mathrm{nq}\leq 0 the default value :math:\mathrm{nq} = \mathrm{min}\left({\left(d+1\right)\times \left(d+2\right)\times 6/5}, {m-1}\right) is used instead. **Returns** **iq** : int, ndarray, shape :math:\left(2\times m+1\right) Integer data defining the interpolant :math:Q\left(\mathbf{x}\right). **rq** : float, ndarray, shape :math:\left(\textit{lrq}\right) Real data defining the interpolant :math:Q\left(\mathbf{x}\right). .. _e01zm-py2-py-errors: **Raises** **NagValueError** (errno :math:1) On entry, :math:\mathrm{nw} = \langle\mathit{\boldsymbol{value}}\rangle and :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nw}\leq m-1. (errno :math:1) On entry, :math:\mathrm{nq} = \langle\mathit{\boldsymbol{value}}\rangle and :math:m = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nq}\leq m-1. (errno :math:1) On entry, :math:\mathrm{nq} = \langle\mathit{\boldsymbol{value}}\rangle and :math:d = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:\mathrm{nq}\leq 0 or :math:\mathrm{nq}\geq \left(d+1\right)\times \left(d+2\right)/2-1. (errno :math:1) On entry, :math:m = \langle\mathit{\boldsymbol{value}}\rangle and :math:d = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:m\geq \left(d+1\right)\times \left(d+2\right)/2+2. (errno :math:1) On entry, :math:\left(\left(d+1\right)\times \left(d+2\right)/2\right)\times m+2\times d+1 exceeds the largest machine integer. :math:d = \langle\mathit{\boldsymbol{value}}\rangle and :math:m = \langle\mathit{\boldsymbol{value}}\rangle. 
(errno :math:1) On entry, :math:d = \langle\mathit{\boldsymbol{value}}\rangle. Constraint: :math:d\geq 2. (errno :math:2) There are duplicate nodes in the dataset. :math:{\mathrm{x}[k-1,i-1]} = {\mathrm{x}[k-1,j-1]}, for :math:i = \langle\mathit{\boldsymbol{value}}\rangle, :math:j = \langle\mathit{\boldsymbol{value}}\rangle and :math:k = 1,2,\ldots,d. The interpolant cannot be derived. (errno :math:3) On entry, all the data points lie on the same hypersurface. No unique solution exists. .. _e01zm-py2-py-notes: **Notes** dimn_scat_shep constructs a smooth function :math:Q\left(\mathbf{x}\right), :math:\mathbf{x} \in \mathbb{R}^d which interpolates a set of :math:m scattered data points :math:\left(\mathbf{x}_r, f_r\right), for :math:r = 1,2,\ldots,m, using a modification of Shepard's method. The surface is continuous and has continuous first partial derivatives. The basic Shepard method, which is a generalization of the two-dimensional method described in Shepard (1968), interpolates the input data with the weighted mean .. math:: Q\left(\mathbf{x}\right) = \frac{{\sum_{{r = 1}}^mw_r\left(\mathbf{x}\right)q_r}}{{\sum_{{r = 1}}^mw_r\left(\mathbf{x}\right)}}\text{,} where :math:q_r = f_r, :math:w_r\left(\mathbf{x}\right) = \frac{1}{\left\lVert \mathbf{x}-\mathbf{x}_r\right\rVert_2^2}. The basic method is global in that the interpolated value at any point depends on all the data, but dimn_scat_shep uses a modification (see Franke and Nielson (1980) and Renka (1988a)), whereby the method becomes local by adjusting each :math:w_r\left(\mathbf{x}\right) to be zero outside a hypersphere with centre :math:\mathbf{x}_r and some radius :math:R_w. Also, to improve the performance of the basic method, each :math:q_r above is replaced by a function :math:q_r\left(\mathbf{x}\right), which is a quadratic fitted by weighted least squares to data local to :math:\mathbf{x}_r and forced to interpolate :math:\left(\mathbf{x}_r, f_r\right). 
In this context, a point :math:`\mathbf{x}` is defined to be local to another point if it lies within some distance :math:`R_q` of it. The efficiency of dimn_scat_shep is enhanced by using a cell method for nearest neighbour searching due to Bentley and Friedman (1979) with a cell density of :math:`3`.

The radii :math:`R_w` and :math:`R_q` are chosen to be just large enough to include :math:`N_w` and :math:`N_q` data points, respectively, for user-supplied constants :math:`N_w` and :math:`N_q`. Default values of these parameters are provided, and advice on alternatives is given in `Further Comments <https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01zmf.html#fcomments2>`__.

dimn_scat_shep is derived from the new implementation of QSHEP3 described by Renka (1988b). It uses the modification for high-dimensional interpolation described by Berry and Minser (1999).

Values of the interpolant :math:`Q\left(\mathbf{x}\right)` generated by dimn_scat_shep, and its first partial derivatives, can subsequently be evaluated for points in the domain of the data by a call to :meth:`dimn_scat_shep_eval`.

.. _e01zm-py2-py-references:

**References**

Bentley, J L and Friedman, J H, 1979, Data structures for range searching, ACM Comput. Surv. (11), 397--409

Berry, M W and Minser, K S, 1999, Algorithm 798: high-dimensional interpolation using the modified Shepard method, ACM Trans. Math. Software (25), 353--366

Franke, R and Nielson, G, 1980, Smooth interpolation of large sets of scattered data, Internat. J. Num. Methods Engrg. (15), 1691--1704

Renka, R J, 1988, Multivariate interpolation of large sets of scattered data, ACM Trans. Math. Software (14), 139--148

Renka, R J, 1988, Algorithm 661: QSHEP3D: Quadratic Shepard method for trivariate interpolation of scattered data, ACM Trans. Math. Software (14), 151--152

Shepard, D, 1968, A two-dimensional interpolation function for irregularly spaced data, Proc. 23rd Nat. Conf.
ACM, 517--523, Brandon/Systems Press Inc., Princeton
    """
    raise NotImplementedError

def dimn_scat_shep_eval(x, f, iq, rq, xe):
    r"""
dimn_scat_shep_eval evaluates the multidimensional interpolating function generated by :meth:`dimn_scat_shep` and its first partial derivatives.

.. _e01zn-py2-py-doc:

For full information please refer to the NAG Library document for e01zn

https://www.nag.com/numeric/nl/nagdoc_28.5/flhtml/e01/e01znf.html

.. _e01zn-py2-py-parameters:

**Parameters**

**x** : float, array-like, shape :math:`\left(d, m\right)`
    Note: the :math:`i`\ th ordinate of the point :math:`x_j` is stored in :math:`\mathrm{x}[i-1,j-1]`. **Must** be the same array supplied as argument :math:`\textit{x}` in the preceding call to :meth:`dimn_scat_shep`. It **must** remain unchanged between calls.

**f** : float, array-like, shape :math:`\left(m\right)`
    **Must** be the same array supplied as argument :math:`\textit{f}` in the preceding call to :meth:`dimn_scat_shep`. It **must** remain unchanged between calls.

**iq** : int, array-like, shape :math:`\left(2\times m+1\right)`
    **Must** be the same array returned as argument :math:`\textit{iq}` in the preceding call to :meth:`dimn_scat_shep`. It **must** remain unchanged between calls.

**rq** : float, array-like, shape :math:`\left(\textit{lrq}\right)`
    **Must** be the same array returned as argument :math:`\textit{rq}` in the preceding call to :meth:`dimn_scat_shep`. It **must** remain unchanged between calls.

**xe** : float, array-like, shape :math:`\left(d, n\right)`
    Note: the :math:`i`\ th ordinate of the point :math:`x_j` is stored in :math:`\mathrm{xe}[i-1,j-1]`. :math:`\mathrm{xe}[0,\textit{j}-1],\ldots,\mathrm{xe}[d-1,\textit{j}-1]` must be set to the evaluation point :math:`\mathbf{x}_{\textit{j}}`, for :math:`\textit{j} = 1,2,\ldots,n`.

**Returns**

**q** : float, ndarray, shape :math:`\left(n\right)`
    :math:`\mathrm{q}[\textit{i}-1]` contains the value of the interpolant, at :math:`\mathbf{x}_{\textit{i}}`, for :math:`\textit{i} = 1,2,\ldots,n`.
If any of these evaluation points lie outside the region of definition of the interpolant, the corresponding entries in :math:`\mathrm{q}` are set to an extrapolated approximation, and dimn_scat_shep_eval returns with :math:`\mathrm{errno} = 3`.

**qx** : float, ndarray, shape :math:`\left(d, n\right)`
    :math:`\mathrm{qx}[i-1,j-1]` contains the value of the partial derivatives with respect to the :math:`i`\ th independent variable (dimension) of the interpolant :math:`Q\left(\mathbf{x}\right)` at :math:`\mathbf{x}_{\textit{j}}`, for :math:`\textit{j} = 1,2,\ldots,n`, and for each of the partial derivatives :math:`i = 1,2,\ldots,d`. If any of these evaluation points lie outside the region of definition of the interpolant, the corresponding entries in :math:`\mathrm{qx}` are set to extrapolated approximations to the partial derivatives, and dimn_scat_shep_eval returns with :math:`\mathrm{errno} = 3`.

.. _e01zn-py2-py-errors:

**Raises**

**NagValueError**

(errno :math:`1`) On entry, :math:`n = \langle\mathit{\boldsymbol{value}}\rangle`. Constraint: :math:`n\geq 1`.

(errno :math:`1`) On entry, :math:`m = \langle\mathit{\boldsymbol{value}}\rangle` and :math:`d = \langle\mathit{\boldsymbol{value}}\rangle`. Constraint: :math:`m\geq \left(d+1\right)\times \left(d+2\right)/2+2`.

(errno :math:`1`) On entry, :math:`\left(\left(d+1\right)\times \left(d+2\right)/2\right)\times m+2\times d+1` exceeds the largest machine integer. :math:`d = \langle\mathit{\boldsymbol{value}}\rangle` and :math:`m = \langle\mathit{\boldsymbol{value}}\rangle`.

(errno :math:`1`) On entry, :math:`d = \langle\mathit{\boldsymbol{value}}\rangle`. Constraint: :math:`d\geq 2`.

(errno :math:`2`) On entry, values in :math:`\mathrm{rq}` appear to be invalid. Check that :math:`\mathrm{rq}` has not been corrupted between calls to :meth:`dimn_scat_shep` and dimn_scat_shep_eval.

(errno :math:`2`) On entry, values in :math:`\mathrm{iq}` appear to be invalid. Check that :math:`\mathrm{iq}` has not been corrupted between calls to :meth:`dimn_scat_shep` and dimn_scat_shep_eval.
**Warns**

**NagAlgorithmicWarning**

(errno :math:`3`) On entry, at least one evaluation point lies outside the region of definition of the interpolant. At such points the corresponding values in :math:`\mathrm{q}` and :math:`\mathrm{qx}` contain extrapolated approximations. Points should be evaluated one by one to identify extrapolated values.

.. _e01zn-py2-py-notes:

**Notes**

dimn_scat_shep_eval takes as input the interpolant :math:`Q\left(\mathbf{x}\right)`, :math:`\mathbf{x} \in \mathbb{R}^d`, of a set of scattered data points :math:`\left(\mathbf{x}_{\textit{r}}, f_{\textit{r}}\right)`, for :math:`\textit{r} = 1,2,\ldots,m`, as computed by :meth:`dimn_scat_shep`, and evaluates the interpolant and its first partial derivatives at the set of points :math:`\mathbf{x}_{\textit{i}}`, for :math:`\textit{i} = 1,2,\ldots,n`.

dimn_scat_shep_eval must only be called after a call to :meth:`dimn_scat_shep`.

dimn_scat_shep_eval is derived from the new implementation of QS3GRD described by Renka (1988). It uses the modification for high-dimensional interpolation described by Berry and Minser (1999).

.. _e01zn-py2-py-references:

**References**

Berry, M W and Minser, K S, 1999, Algorithm 798: high-dimensional interpolation using the modified Shepard method, ACM Trans. Math. Software (25), 353--366

Renka, R J, 1988, Algorithm 661: QSHEP3D: Quadratic Shepard method for trivariate interpolation of scattered data, ACM Trans. Math. Software (14), 151--152
    """
    raise NotImplementedError
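The basic Shepard weighted mean described in the Notes above is simple to sketch in plain Python. The toy below is illustrative only: it implements the global inverse-squared-distance mean, not the NAG routine's localized variant with quadratic nodal functions, and the function name is invented for this example.

```python
def shepard_interpolate(x_data, f_data, x_eval, eps=1e-12):
    """Basic (global) Shepard interpolant:
    Q(x) = sum_r w_r(x) f_r / sum_r w_r(x), with w_r(x) = 1 / ||x - x_r||_2^2.

    x_data : list of d-dimensional nodes (sequences of floats)
    f_data : list of function values f_r at the nodes
    x_eval : list of d-dimensional evaluation points
    """
    result = []
    for x in x_eval:
        # Squared Euclidean distances from the query point to every node.
        dist2 = [sum((a - b) ** 2 for a, b in zip(x, xr)) for xr in x_data]
        if min(dist2) < eps:
            # Query coincides with a node: Q(x_r) = f_r by construction.
            result.append(f_data[dist2.index(min(dist2))])
        else:
            w = [1.0 / d2 for d2 in dist2]
            result.append(sum(wr * fr for wr, fr in zip(w, f_data)) / sum(w))
    return result
```

At every node the interpolant reproduces :math:`f_r` exactly, and away from the nodes it is a convex combination of the data values, which is why the NAG routine replaces the constants :math:`q_r` by local quadratics to improve accuracy between nodes.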
https://web2.0calc.com/questions/please-help_98500
$$6\sqrt[4]{567x^4}$$ $$4\sqrt[3]{64n^8}$$ (Oct 23, 2018)

#1 (RainbowPanda): Before you say anything, yes I did try. I still can't do it. My answers never seem to look correct. I only tried the first one. (Oct 23, 2018)

#2 (RainbowPanda): What I got for the first one so far: $$6\sqrt[4]{3^4x^4}$$

#3 (Rom): $$6\sqrt[4]{567x^4} = \\ 6\sqrt[4]{3^4x^4\cdot 7} =\\ 6\cdot 3x\sqrt[4]{7} = \\ 18x\sqrt[4]{7}$$ $$4\sqrt[3]{64n^8} = \\ 4\sqrt[3]{2^6 n^6 \cdot n^2} = \\ 4\cdot 2^2 n^2 \sqrt[3]{n^2} = \\ 16n^2 \sqrt[3]{n^2}$$ (Oct 23, 2018)

#4 (RainbowPanda): Seems I would've done it the wrong way anyways; why is it that I have to multiply by 7?

#5 (Rom): $$567 = 81 \cdot 7 = 3^4 \cdot 7$$

#6 (RainbowPanda): Oh, I guess that makes sense.

#7 (Melody): It is very nice to see you interacting, Panda, but it would be nice to also see you say 'thanks' to Rom :) (Oct 24, 2018)
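Rom's two simplifications can be sanity-checked numerically for positive values; a quick sketch:

```python
# Check 6*(567 x^4)^(1/4) == 18 x * 7^(1/4) and
#       4*(64 n^8)^(1/3) == 16 n^2 * n^(2/3) for a few positive values.
for v in (0.5, 1.0, 2.0, 3.7):
    lhs1 = 6 * (567 * v ** 4) ** 0.25
    rhs1 = 18 * v * 7 ** 0.25
    assert abs(lhs1 - rhs1) < 1e-9 * rhs1

    lhs2 = 4 * (64 * v ** 8) ** (1 / 3)
    rhs2 = 16 * v ** 2 * v ** (2 / 3)
    assert abs(lhs2 - rhs2) < 1e-9 * rhs2
```

The check is restricted to positive values because the fourth root of x^4 is really |x|, and negative bases would need care with real versus principal roots.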
https://www.physicsforums.com/threads/when-is-inclusive-and-exclusive-class-interval-used.414272/
# When is inclusive and exclusive class interval used 1. Jul 5, 2010 ### sachin_naik04 i remember my teacher was saying something like whenever we have to tabulate students marks we use only inclusive class interval (or i doubt it could be exclusive) or something like when decimal numbers like 3.5 are there we use something (that is either inclusive or exclusive) but i dont remember all that now so i want to know when should i show certain data (of rainfall, students marks, population etc.) in a inclusive method and exclusive method like i guess when marks of a students are given she told us to use only inclusive class interval but not sure
http://www.physicsforums.com/showthread.php?t=585841
## Max of random Variables

1. $X_1,X_2\cdots X_n\:\text{are IID Random Variables with CDF}\,F(x)\:\text{and PDF}\,f(x)\\ \text{then What is the CDF of Random variable }\,Max(X_1,X_2\cdots X_n)$

2. Relevant equations

3. $\text{Since Y will be one among}\,X_1,X_2\cdots X_n,\text{why cannot its CDF be }\,F(x)\\\text{I need to know flaw in my answer}$

Reply (to the question quoted above): Intuitively, here's what's wrong with that. Take the simpler case of n IID random variables ##X_1,\ X_2,...X_n## uniformly distributed on [0,1]. If you take samples ##x_1,\ x_2,...x_n## from these distributions, and you always choose the largest value, wouldn't you expect your answer to be biased towards the larger numbers in the interval? Suppose you take 20 samples and consider the largest value. It would be very unlikely for the max to be less than 1/2, wouldn't it? ##(\frac 1 2)^{20}## to be exact, even though each sample had an a priori probability 1/2 of being less than 1/2.
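The ##(\frac 1 2)^{20}## figure is easy to confirm with a Monte Carlo sketch: the CDF of the max of n IID variables is F(x)^n, not F(x), and for Uniform(0,1) that is x^n.

```python
import random

random.seed(0)
n, trials = 20, 50_000

def frac_max_below(t):
    """Empirical P(max(X_1,...,X_n) <= t) for IID Uniform(0,1) samples."""
    hits = sum(
        max(random.random() for _ in range(n)) <= t for _ in range(trials)
    )
    return hits / trials

# The max is biased toward 1: its CDF is t**n, far below F(t) = t.
assert abs(frac_max_below(0.9) - 0.9 ** n) < 0.01   # 0.9**20 is about 0.12
assert frac_max_below(0.5) < 1e-3                   # 0.5**20 is about 9.5e-7
```

Even though each individual sample is below 1/2 half the time, essentially no trial has *all twenty* samples below 1/2, which is exactly the flaw in taking F(x) as the CDF of the max.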
http://mathhelpforum.com/algebra/51122-logarthim-finding-expression-urgent-plzz-print.html
# Logarthim - Finding expression - URGENT PLZZ

• Sep 29th 2008, 12:53 AM (GoneCrazy):

log[base2]8, log[base4]8, log[base8]8, log[base18, log[base32]8, log[base64]8

I am given this sequence and I need to find an expression for the nth term in the form p/q, where p and q belong to Z (the integers). I guess it has to do with the bases, like log[base2]8 = log8/log2 = log8 - log2 = ?????? Then how do I find an expression?

I tried the following:

1) Since the bases form a geometric progression, I found the common ratio, which is r = Un/Un-1, e.g. 8/4 = 2.

2) The nth term formula for a geometric progression is Un = ar^(n-1), where a = 1st term, which is Log(base2)8, r = common ratio, which is 2, and n = the rank. So the expression is Un = Log(base2)8 (2)^(n-1).

However it's not quite correct, because if we test it, when n = 7, to complete the sequence the value should be log(base128)8, NOT Log(base2)8 x 128. I hope you get where I am going. I really appreciate your help. THANK YOU!!!!!!

• Sep 29th 2008, 12:58 AM (Jhevon):

(Quoting GoneCrazy's post above.) You said the base was a geometric progression.
Why did you not then make the base the geometric progression? I have trouble seeing your logic. Would it not rather be $a_n = \log_{2^n}8$ for $n = 1,~2,~3, \cdots$?

• Sep 29th 2008, 02:44 AM (bkarpuz):

(Quoting Jhevon's post above.)

$a_{n}=\log_{2^{n}}(8) \Rightarrow 2^{na_{n}}=2^{3} \Rightarrow 2^{na_{n}-3}=1 \Rightarrow na_{n}-3=0 \Rightarrow a_{n}={\color{red}{\frac{3}{n}}}.$

$a_{n}={\color{red}{\frac{n-1}{n}}}\cdot{\color{red}{\frac{3}{n-1}}}={\color{red}{\frac{n-1}{n}a_{n-1}}}$

I don't think that this is a geometric sequence. (Thinking) Or if it is so, I have a big mistake. (Doh)

• Sep 29th 2008, 03:46 AM (mr fantastic):

Quote by bkarpuz: $\Rightarrow a_{n}=-\frac{3}{n}.$ * [snip]

Correction to line *: $\Rightarrow a_{n}=\frac{3}{n}.$

• Sep 29th 2008, 03:48 AM (mr fantastic):

(Quoting GoneCrazy's original post.) Regarding the red part of your sequence: you mean log[base16]8, right ....?
• Sep 29th 2008, 09:36 AM (GoneCrazy):

Yeh, it has nothing to do with a geometric progression - my mistake, I didn't take enough time to think it over before asking. (Itwasntme) Oooh yeh... I meant 16, not 18 - sorry. I really didn't get your way - both bkarpuz and Mr. Fantastic, if you can explain it more plzzzz.

Here is my way; I figured out 2 ways:

1) 2, 4, 8, 16, 32, 64 can be expressed as 2 raised to a certain power. Then, using these two basic rules, each value in the sequence can be simplified:

1) Log(baseB)A = Log A / Log B
2) Log a^2 = 2 Log a

Example: Log(base4)8 = Log 8 / Log 4 = Log 2^3 / Log 2^2 = 3 Log 2 / 2 Log 2 = 3/2

The same applies for the rest, making the following pattern: 3/1, 3/2, 3/3, 3/4, 3/5, 3/6. Therefore, the nth term is going to be 3/n.

2) The sequence is in the form log[base 2^n] 8. By using the base rule, we can further simplify it to be in the form p/q:

log[base 2^n] 8 = log[base 2] 8 / log[base 2] 2^n = log[base 2] 2^3 / log[base 2] 2^n

After crossing out the similar logs you are left with 3/n.

I have another part of the question if anyone is interested.

• Sep 29th 2008, 01:55 PM (mr fantastic):

(Quoting GoneCrazy's post above.) If you have more questions related to this one then you should certainly feel free to ask them.
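The closed form a_n = 3/n reached in the thread follows from the change-of-base rule, log[base 2^n] 8 = log 8 / log 2^n = 3/n, and is easy to verify with a quick sketch:

```python
import math

# a_n = log base 2**n of 8; change of base gives 3/n for every n >= 1.
for n in range(1, 8):
    a_n = math.log(8, 2 ** n)
    assert abs(a_n - 3 / n) < 1e-12
# n = 7 reproduces the check from the thread: log base 128 of 8 equals 3/7,
# not Log(base2)8 * 128 as the geometric-progression guess would give.
```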
https://www.groundai.com/project/degrees-of-freedom-of-interference-channels-with-hybrid-beam-forming/
# Degrees of Freedom of Interference Channels with Hybrid Beam-forming

Sung Ho Chae and Cheol Jeong. The authors are with the DMC R&D Center, Samsung Electronics, Suwon, Republic of Korea.

###### Abstract

We study the sum degrees of freedom (DoF) of interference channels with hybrid beam-forming in which each transmitter uses antennas and RF chains and each receiver uses antennas and RF chains, where and , , and hybrid beam-forming composed of analog and digital precodings is employed at each node. For the two-user case, we completely characterize the sum DoF for an arbitrary number of antennas and RF chains by developing an achievable scheme optimized for the hybrid beam-forming structure and deriving its matching upper bound. For a general -user case, we focus on a symmetric case where , , , and , , and obtain lower and upper bounds on the sum DoF, which are tight when is an integer. The results show that hybrid beam-forming can increase the sum DoF of the interference channel under certain conditions, while it cannot improve the sum DoFs of the point-to-point channel, multiple access channel, and broadcast channel. The key insight behind this gain is that hybrid beam-forming enables users to manage inter-user interference better, and thus each user can increase the dimension of interference-free signal space for its own desired signals.

Index Terms: Degrees of freedom, hybrid beam-forming, interference alignment, interference channel

## I Introduction

Mobile data traffic has been growing dramatically as the number of mobile smart devices is increasing rapidly in recent years [1]. To accommodate tremendous demand on mobile data traffic, the cell capacity can be largely increased by deploying a very large number of antennas at base stations (BSs), often referred to as a massive multiple-input multiple-output (MIMO) system [2, 3].
The massive MIMO system, however, has hardware constraints that come from using a few hundred antennas. For a conventional antenna array structure, each antenna needs to have a dedicated RF chain. This naturally leads to an increment in the circuit size, power consumption, and device cost proportional to the number of antennas, and hence it can be a serious problem from a practical point of view, especially for massive MIMO systems. Therefore, to resolve this problem, a hybrid beam-forming structure with a lower number of RF chains than the number of antenna elements has been recently introduced as a practical solution [4, 5]. As an alternative approach to increase the cell capacity, millimeter-wave (mmWave) communications have attracted great attention recently [6]. The mmWave band from 30 to 300 GHz provides abundant contiguous frequency resources, while frequency bands under 5 GHz used for legacy cellular communications are very crowded and fragmented. The main advantage in mmWave communications is that a very high data rate can be supported using a very large bandwidth at mmWave bands. However, one of the major drawbacks is the high induced path loss due to the propagation loss and absorption loss at mmWave bands [7]. Fortunately, this high path loss can be effectively compensated by a high beam-forming gain obtained from a large number of antenna elements that can be packed into a small form factor due to the small wavelength in mmWave bands. To support a single stream only, analog beam-forming, which is simply implemented by controlling attenuators and phase shifters of the antenna array to steer a directional beam, is enough to be considered. However, to transmit multiple streams, the hybrid beam-forming structure, where analog beam-forming is performed at RF domain and antenna arrays are connected to a relatively small number of digital paths, should be considered to get the multiplexing gain [8, 9].
As mentioned above, the hybrid beam-forming architecture can play a key role in next-generation communications (e.g., massive MIMO and/or mmWave communications) and hence has been widely studied recently [8, 9, 10, 11, 12]. In [8], precoders and combiners are designed using a sparse reconstruction approach. In [10], baseband and RF beams are designed for multiuser downlink spatial division multiple access (SDMA). In addition, a hybrid precoding algorithm based on a hierarchical codebook is proposed in [11]. Furthermore, a hybrid precoder is proposed for massive multiuser MIMO systems in [12]. While there are some works on hybrid beam-forming structures, to the best of our knowledge the degrees of freedom (DoF) gain from hybrid beam-forming has not been analyzed before.

### I-A Previous Works

The DoF, which is also known as a capacity pre-log, gives the capacity approximation in the high signal-to-noise ratio (SNR) regime. For example, for the point-to-point (PTP) channel with transmit antennas and receive antennas, it is well known that the capacity increases with the growth rate at high SNR [13, 14]. Since exact capacity characterization is generally still unknown even for simple networks (e.g., the two-user interference channel), instead of obtaining an exact capacity, approximate characterization by finding the optimal DoF has been studied in many networks recently [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. Specifically, for the two-user interference channel, the sum DoF has been completely characterized, where zero-forcing precoding has been shown to be enough to achieve the optimal DoF [15]. For a general -user interference channel, a novel interference management technique called interference alignment has been proposed in [16, 19], which achieves the optimal sum DoF of . Later this scheme has been extended to MIMO configurations both for rich scattering environments [23, 22] and poor scattering environments [29, 30, 31].
Furthermore, beyond the interference channels, the idea of interference alignment has been successfully adapted to various networks, e.g., see [17, 18, 19, 20, 21, 24, 25, 26, 27, 28] and references therein.

### I-B Contributions

In this paper, our primary goal is to answer whether hybrid beam-forming can increase the sum DoF of interference channels. To this end, motivated by the aforementioned previous works, we propose zero forcing and interference alignment schemes optimized for the hybrid beam-forming structure. We also derive a new upper bound on the sum DoF when hybrid beam-forming is employed at each node. For the two-user case, this upper bound coincides with the achievable sum DoF of the proposed scheme, thereby completely characterizing the sum DoF. For a general -user case, our proposed scheme can achieve the upper bound when is an integer, where and denote the number of antennas at each transmitter and receiver, respectively. As a consequence of the result, we show that hybrid beam-forming can indeed improve the sum DoF of the -user interference channel under certain conditions. This is in contrast to the PTP channel, multiple access channel (MAC), and broadcast channel (BC) cases, in which hybrid beam-forming cannot increase the sum DoF (see Section III). The key insight behind this gain is that hybrid beam-forming enables users to manage interference better, and thus each user can increase the dimension of interference-free signal space which can be used for its own desired signals.

### I-C Organization

The rest of this paper is organized as follows. In Section II, we describe the system model and sum DoF metric considered in this paper. In Section III, we give an intuition as to how hybrid beam-forming can increase the sum DoF through motivating examples. In Section IV, we present and discuss the main results of this paper.
In addition, we provide numerical results which show the performance improvement from hybrid beam-forming in Section V. In Sections VI and VII, we provide the proofs of the main theorems. Finally, we conclude the paper in Section VIII.

### I-D Notations

Throughout the paper, we will use , , and to denote a matrix, a vector, and a scalar, respectively. For a rational number , the notation denotes the integer part of . For matrix , let , , and denote the transpose, the complex conjugate transpose, and the norm of , respectively. In addition, let and denote the determinant and the rank of , respectively. The notations and denote the identity matrix and zero matrix, respectively. We write if .

## II System Model

Consider a -user interference channel with hybrid beam-forming, as shown in Fig. 1. Transmitter wishes to communicate with receiver only, while causing interference to all the other receivers. In addition, transmitter uses antennas and RF chains and receiver uses antennas and RF chains, where and , . Specifically, when and , , we call the corresponding channel a full digital channel.

### II-A Channel Model

Similar to previous works [8, 10], in this paper we assume that transmitter utilizes transmit hybrid beam-forming which consists of an analog precoder and a digital precoder as depicted in Fig. 1, where denotes the number of streams of user .

Footnote 1: As compared to the hybrid beam-forming structure introduced in [8, 10], in this paper, coefficients in can have different norms by relaxing the constraint that all entries are of equal norm. From a practical point of view, this is feasible since we can implement by using both attenuators and analog phase shifters rather than using analog phase shifters only.
In addition, based on this hybrid beam-forming, the input signal of transmitter $i$ at time slot $t$ is assumed to be given by

$$\mathbf{x}_i(t) = \mathbf{V}'_i(t)\mathbf{x}^{[b]}_i(t) = \mathbf{V}'_i(t)\mathbf{V}_i(t)\mathbf{s}_i(t),$$

where $\mathbf{x}^{[b]}_i(t)$ is the baseband-domain input vector and $\mathbf{s}_i(t)=[s_{i,1}(t)\cdots s_{i,d_i}(t)]^T$ is the symbol vector of transmitter $i$. Here, $s_{i,k}(t)$ denotes the $k$th symbol of user $i$ at time slot $t$. Then the input and output relationship at RF domain is given by

$$\mathbf{y}_j(t) = \sum_{i=1}^{K}\mathbf{H}_{ji}(t)\mathbf{x}_i(t)+\mathbf{z}_j(t) = \sum_{i=1}^{K}\mathbf{H}_{ji}(t)\mathbf{V}'_i(t)\mathbf{x}^{[b]}_i(t)+\mathbf{z}_j(t) = \sum_{i=1}^{K}\mathbf{H}_{ji}(t)\mathbf{V}'_i(t)\mathbf{V}_i(t)\mathbf{s}_i(t)+\mathbf{z}_j(t),$$

where $\mathbf{H}_{ji}(t)$ is the channel matrix from transmitter $i$ to receiver $j$, $\mathbf{y}_j(t)$ is the RF-domain received signal vector at receiver $j$, and $\mathbf{z}_j(t)$ is the Gaussian noise vector at receiver $j$. We assume that all channel coefficients are independent and identically distributed (i.i.d.) from a continuous distribution and known to all nodes. After receiving $\mathbf{y}_j(t)$, receiver $j$ applies receive hybrid beam-forming which consists of an analog precoder $\mathbf{U}'_j(t)$ and a digital precoder $\mathbf{U}_j(t)$ as shown in Fig. 1. Specifically, by applying the analog precoder to the received signal at RF domain, we can obtain the input and output relationship at baseband domain as

$$\mathbf{y}^{[b]}_j(t)=\sum_{i=1}^{K}\mathbf{H}^{[b]}_{ji}(t)\mathbf{x}^{[b]}_i(t)+\mathbf{z}^{[b]}_j(t),$$

where $\mathbf{y}^{[b]}_j(t)=\mathbf{U}'^*_j(t)\mathbf{y}_j(t)$, $\mathbf{H}^{[b]}_{ji}(t)=\mathbf{U}'^*_j(t)\mathbf{H}_{ji}(t)\mathbf{V}'_i(t)$, and $\mathbf{z}^{[b]}_j(t)=\mathbf{U}'^*_j(t)\mathbf{z}_j(t)$. If we further apply the digital precoder to the received signal at baseband domain, we finally get

$$\mathbf{y}^{[e]}_j(t)=\sum_{i=1}^{K}\mathbf{H}^{[e]}_{ji}(t)\mathbf{s}_i(t)+\mathbf{z}^{[e]}_j(t),$$

where $\mathbf{y}^{[e]}_j(t)=\mathbf{U}^*_j(t)\mathbf{y}^{[b]}_j(t)$, $\mathbf{H}^{[e]}_{ji}(t)=\mathbf{U}^*_j(t)\mathbf{H}^{[b]}_{ji}(t)\mathbf{V}_i(t)$, and $\mathbf{z}^{[e]}_j(t)=\mathbf{U}^*_j(t)\mathbf{z}^{[b]}_j(t)$. Note that $\mathbf{H}^{[e]}_{ji}(t)$ is the effective channel matrix which can be obtained after applying the transmit hybrid beam-forming of transmitter $i$ and the receive hybrid beam-forming of receiver $j$. Finally, by applying the aforementioned hybrid beam-forming strategy and assuming Gaussian signaling, the following average sum rate is achievable for a given transmit power $P$ [32]:

$$R_{\mathrm{sum}}(P)\leq E\left[\sum_{i=1}^{K}\log\frac{\left|\mathbf{A}_i(t)+\frac{P}{d_i}\sum_{j=1}^{K}\mathbf{H}^{[e]}_{ij}(t)\mathbf{H}^{[e]}_{ij}(t)^*\right|}{\left|\mathbf{A}_i(t)+\frac{P}{d_i}\sum_{k=1,k\neq i}^{K}\mathbf{H}^{[e]}_{ik}(t)\mathbf{H}^{[e]}_{ik}(t)^*\right|}\right], \qquad (1)$$

where $\mathbf{A}_i(t)$ is the covariance matrix of the effective noise $\mathbf{z}^{[e]}_i(t)$.
Specifically, when all the interferences are eliminated via hybrid beam-forming, i.e., , and , (1) becomes Rsum(P) ≤E⎡⎢ ⎢⎣K∑i=1log∣∣Ai(t)+Pdi∑Kj=1H[e]ij(t)H[e]ij(t)∗∣∣|Ai(t)|⎤⎥ ⎥⎦ (2) =K∑i=1dilog(P)+o(log(P)). (3) ### Ii-B Encoding, Decoding, and Sum DoF There are independent messages . For each transmitter , a message is mapped to an length codeword . To send the message , at time , transmitter sends . Here, we assume that each transmitter should satisfy the average power constraint , i.e., for . Then receiver decodes its desired message , based on its received signal. A rate tuple is said to be achievable for the channel if there exists a sequence of codes such that the average probability of decoding error tends to zero as the code length goes to infinity. The capacity region of this channel is the closure of the set of achievable rate tuples . The sum DoF , which is also known as a sum-capacity pre-log, provides the sum capacity approximation at high SNR as222In this paper, when we derive lower and upper bounds on the sum DoF, we restrict our attention on the cases in which the hybrid beam-forming structure introduced in Section II-A is used. Csum(P)=max(R1,R2,…,RK)∈CK∑i=1Ri(P)=Γlog(P)+o(log(P)). Equivalently, the sum DoF can be defined as . ## Iii Preliminary Discussion To gain insights into the DoF gain from hybrid beam-forming, we begin with examining PTP channel, MAC, and BC cases. Note that the PTP channel, the -user MAC, and the -user BC can be obtained from the -user interference channel by allowing full cooperation among all the transmitters and among all the receivers, full cooperation among all the receivers only, and full cooperation among all the transmitters only, respectively. Here, we assume that hybrid beam-forming strategy (including digital precoder and analog precoder) for each channel is employed in a similar manner as in Section II. 
### Iii-a Point-to-Point (PTP) Channel Consider the PTP channel in which the transmitter uses RF chains and antennas and the receiver uses RF chains and antennas. The DoF of this channel is stated in the following lemma. ###### Lemma 1 For the PTP channel with hybrid beam-forming, the DoF is given by . {proof} We first provide a converse proof. Following a similar way described in Section II, we can write the input and output relationship of the PTP channel at time slot as y(t) =H(t)x(t)+z(t) =H(t)V′(t)x[b](t)+z(t), where is the RF-domain output vector at the receiver, is the channel matrix from the transmitter to the receiver, and are the RF-domain input vector and the baseband-domain input vector at the transmitter, respectively, is the analog precoder of the transmitter, and is the Gaussian noise vector. Now focus on the input and output relationship at baseband domain. By applying receive analog precoding at the receiver, we can get U′∗(t)y(t)=y[b](t)=H[b](t)x[b](t)+z[b](t), where is the analog precoder of the receiver, , and . Since , we see that . For achievability, we only use transmit antennas out of antennas of the transmitter and receive antennas out of antennas of the receiver to equivalently create a conventional full digital PTP channel with transmit antennas and receive antennas. Therefore, is achievable [13, 14], which completes the proof. It is well known that the DoF of the full digital PTP channel with transmit antennas and receive antennas is equal to  [13, 14]. Therefore, from the result of Lemma 1, we see that adding more antennas only cannot increase the DoF of a PTP channel without increasing the number of RF chains, regardless of the values of and . ### Iii-B Multiple Access Channel (MAC) and Broadcast Channel (BC) Now we consider the -user MAC and BC with hybrid beam-forming. For the MAC case, each transmitter uses RF chains and antennas and the receiver uses RF chains and antennas. 
For the BC case, the transmitter uses RF chains and antennas and each receiver uses RF chains and antennas. The DoFs of these channels are stated in the following lemmas. ###### Lemma 2 For the -user multiple access channel (MAC) with hybrid beam-forming, the DoF is given by . {proof} For a converse proof, we allow full cooperation among all the transmitters to form PTP channel. Then, from the result of Lemma 1, the sum DoF of this network is equal to . Since allowing cooperation does not reduce the capacity region [17], this is an upper bound of the original network, and thus ΓMAC≤min{K∑i=1Mi,N}. For achievability, we use only antennas out of antennas of transmitter , , and antennas out of antennas of the receiver to form a conventional full digital MAC in a similar manner as in Lemma 1. Then, is achievable [15], which completes the proof. ###### Lemma 3 For the -user broadcast channel (BC) with hybrid beam-forming, the DoF is given by . {proof} We can easily prove Lemma 3 by following similar proof steps in Lemma 2 except the fact that we now allow full cooperation among all the receivers instead of transmitters for a converse proof. For brevity, we omit the rest of the proof steps. From the results of Lemmas 2 and 3, adding more antennas only without more RF chains cannot increase the sum DoFs of MAC and BC, as in the PTP case. Therefore, we can see that when full cooperation is already allowed at either transmitter side or receiver side of the -user interference channel, hybrid beam-forming cannot further improve the DoF. However, as we will show in the following example, for the case in which full cooperation is not allowed so that there exist inter-user interferences, the sum DoF of an interference channel can be improved via hybrid beam-forming for certain cases. ### Iii-C Interference Channel: Motivating Example Now we provide a simple example where hybrid beam-forming indeed improves the sum DoF. 
In the following example, we omit the time index for brevity. ###### Example 1 Consider the two-user interference channel where and , . We first set the analog precoder to satisfy for and . Since is the matrix and all channel coefficients are generic, we can easily find that satisfies these conditions. In addition, for the digital precoder of transmitter , we set , . Then, the received signal at each receiver is given by yi =HiiV′iVisi+HijV′jVjsj+zi =HiiV′isi+zi, where is the transmitted symbol vector of user and . Since , we can achieve for each user, thus achieving . Note that for the two-user full digital interference channel, which has the same number of RF chains as in the two-user interference channel, only the sum DoF of two can be achieved. This shows that for some cases, the sum DoF of an interference channel can actually be increased by adding more antennas only without increasing the number of RF chains. ###### Remark 1 As shown in Example 1, by using more antennas, we can have a better ability to null out interferences from/to other users at RF domain. This enables users to secure more interference-free dimensions, and as a result, a higher sum DoF is achievable without any additional RF chains for some cases. However, despite this improved capability dealing with interferences, hybrid beam-forming does not always increase the DoF of an interference channel. For instance, as will be demonstrated in the next example, if all the interferences can be eliminated without the need to add more antennas, hybrid beam-forming cannot increase the sum DoF. ###### Example 2 Consider the two-user interference channel where , , , and , . By allowing full cooperation among transmitters and among receivers, we can get the PTP channel. Since the DoF of this channel is given by two from Lemma 1 and allowing full cooperation does not reduce the capacity region, the sum DoF of the two-user interference channel cannot be more than two. 
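The construction of Example 1 — choosing each transmit analog precoder inside the null space of the cross channel — can be sketched with numpy. The antenna and RF-chain counts below (4 antennas and 2 RF chains per transmitter, 2 antennas and 2 RF chains per receiver) are assumptions for illustration, since the example's actual values were lost in extraction:

```python
import numpy as np

rng = np.random.default_rng(1)

def crandn(rows, cols):
    return rng.standard_normal((rows, cols)) + 1j * rng.standard_normal((rows, cols))

def null_space_basis(A, dim):
    """Last `dim` right-singular vectors of A, spanning its null space
    when A has full row rank."""
    Vh = np.linalg.svd(A)[2]
    return Vh.conj().T[:, A.shape[1] - dim:]

# Generic 2 x 4 channels H_ji from transmitter i to receiver j
H = {(j, i): crandn(2, 4) for j in (1, 2) for i in (1, 2)}

# Analog precoder of transmitter i (4 x 2) nulls the cross link: H_ji V'_i = 0
Vp = {1: null_space_basis(H[(2, 1)], 2), 2: null_space_basis(H[(1, 2)], 2)}

# Interference vanishes at RF domain while the direct effective channels
# H_ii V'_i keep full rank, so each user gets 2 interference-free streams
assert np.allclose(H[(2, 1)] @ Vp[1], 0) and np.allclose(H[(1, 2)] @ Vp[2], 0)
assert np.linalg.matrix_rank(H[(1, 1)] @ Vp[1]) == 2
assert np.linalg.matrix_rank(H[(2, 2)] @ Vp[2]) == 2
```

Under these assumed dimensions the cross links are zero-forced by the analog stage alone, so each user obtains two interference-free streams without any additional RF chains.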
Note that the two-user full digital interference channel can also achieve the sum DoF of two [15]. Therefore, unlike in Example 1, adding antennas only cannot increase the sum DoF in this case. In fact, in this case, to achieve a higher DoF, we need to use more RF chains as well as more antennas. For example, if we use one additional RF chain and two additional RF chains at each transmitter and receiver, respectively, the channel becomes the two-user full digital interference channel, and we can now achieve the improved DoF of 4. ## IV Main Results In this section, we state and discuss the main results of this paper. For the two-user case, the sum DoF is completely characterized for any antenna configuration. When , we focus on a symmetric case where , , , and , , and derive lower and upper bounds on the sum DoF. It is shown that the two bounds match under a certain condition. ### IV-A Two-user Case For the two-user interference channel, we completely characterize the sum DoF as stated in the following theorem. ###### Theorem 1 (Two-user case) For the two-user interference channel with hybrid beam-forming, the sum DoF is given by $$\Gamma=\min\left\{M_{1}+M_{2},\,N_{1}+N_{2},\,M_{1}+N_{2},\,M_{2}+N_{1},\,\max\{M'_{1},N'_{2}\},\,\max\{M'_{2},N'_{1}\}\right\},$$ where and for . {proof} See Section VI for the proof. ###### Remark 2 For the case where and , , the sum DoF becomes $$\Gamma=\min\left\{M_{1}+M_{2},\,N_{1}+N_{2},\,\max\{M_{1},N_{2}\},\,\max\{M_{2},N_{1}\}\right\},$$ which recovers the result for the two-user full digital interference channel in [15]. ###### Remark 3 Note that when the condition $$\min\left\{\max\{M'_{1},N'_{2}\},\,\max\{M'_{2},N'_{1}\}\right\}\geq\min\left\{M_{1}+M_{2},\,N_{1}+N_{2},\,M_{1}+N_{2},\,M_{2}+N_{1}\right\}$$ is satisfied, the sum DoF becomes $$\Gamma=\min\left\{M_{1}+M_{2},\,N_{1}+N_{2},\,M_{1}+N_{2},\,M_{2}+N_{1}\right\}=\min\{M_{1},N_{1}\}+\min\{M_{2},N_{2}\},$$ which is the sum DoF of the interference-free channel. Therefore, we can see that by adding a sufficient number of antennas at each node, all the users can utilize their full DoFs as if there is no interference. DoF gain due to hybrid beam-forming: Consider a symmetric case where and . We plot the sum DoF as a function of with fixed in Fig. 2.
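Theorem 1's closed-form expression is straightforward to evaluate programmatically. In the hedged sketch below the primed quantities are treated as plain inputs (their definitions were lost in extraction), and a brute-force loop checks that the general formula collapses to the full digital formula of Remark 2 when primed and unprimed values coincide:

```python
import random

def dof_two_user(M1, M2, N1, N2, M1p, M2p, N1p, N2p):
    """Theorem 1: sum DoF of the two-user interference channel with
    hybrid beam-forming. M1p, ..., N2p stand for the theorem's M'_i, N'_i."""
    return min(M1 + M2, N1 + N2, M1 + N2, M2 + N1,
               max(M1p, N2p), max(M2p, N1p))

def dof_two_user_full_digital(M1, M2, N1, N2):
    """Remark 2 / [15]: the full digital special case."""
    return min(M1 + M2, N1 + N2, max(M1, N2), max(M2, N1))

# Remark 2 check: with primed = unprimed the extra terms are redundant,
# since M_i + N_j >= max(M_i, N_j) for nonnegative integers
random.seed(0)
for _ in range(1000):
    M1, M2, N1, N2 = (random.randint(0, 10) for _ in range(4))
    assert dof_two_user(M1, M2, N1, N2, M1, M2, N1, N2) == \
           dof_two_user_full_digital(M1, M2, N1, N2)
```

For instance, with two RF chains per node the full digital formula gives a sum DoF of two, while large enough primed values push the general formula up to the interference-free value, matching the discussion above.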
For comparison, we also plot the sum DoF of the full digital case where the number of RF chains is the same as the hybrid beam-forming case. As can be seen in Fig. 2, although we add antennas only, we can achieve a higher DoF and it reaches up to the maximum value of , the sum DoF of the interference-free channel, when . The gain comes from the fact that hybrid beam-forming can null out more interferences without increasing the number of RF chains, as well as enhancing the capacity of PTP channel as reported in [8, 10]. ### Iv-B K-user Case When , we focus on a symmetric case where , , , and , . Under this configuration, we obtain lower and upper bounds on the sum DoF as stated in the following theorem, which are tight when is an integer. ###### Theorem 2 (K-user case) For the symmetric -user interference channel with hybrid beam-forming, the following sum DoF is achievable: Γ≥⎧⎨⎩Kmin{M,N}if K≤R,Kmin{M,N,RR+1min{M′,N′}}if % K>R, where . For converse, the sum DoF is upper bounded by Γ≤⎧⎪⎨⎪⎩Kmin{M,N}if K≤R,Kmin{M,N,max{M′,N′}R+1}if K>R. {proof} See Section VII for the proof. ###### Remark 4 Similar to the two-user case explained in Remark 2, for the case where and , Theorem 2 recovers the result for the -user full digital interference channel in [23]. ###### Remark 5 It is easy to see that is a non-decreasing function of and . Intuitively, this is clear since having more antennas does not reduce the capacity region. Moreover, when each user can achieve the maximum DoF of as if there is no interference. ###### Corollary 1 By employing hybrid beam-forming, we can get at most two-fold DoF gain as compared to the full digital case in which the number of RF chains is the same as the hybrid beam-forming case. {proof} Let and denote the sum DoFs with hybrid beam-forming and full digital structures, respectively. 
For the two-user case, we have ΓhΓf =min{M1+M2,N1+N2,M1+N2,M2+N1,max{M′1,N′2},max{M′2,N′1}}min{M1+M2,N1+N2,max{M1,N2},max{M2,N1}} ≤min{M1,N1}+min{M2,N2}min{M1+M2,N1+N2,max{M1,N2},max{M2,N1}} ≤min{M1,N1}+min{M2,N2}max{min{M1,N1},min{M2,N2}} ≤2max{min{M1,N1},min{M2,N2}}max{min{M1,N1},min{M2,N2}} =2 In addition, for the general -user case, we have ΓhΓf ≤Kmin{M,N}KLL+1min{M,N} =L+1L ≤2, where . This completes the proof. DoF gain due to hybrid beam-forming: Consider the three-user case where . First, we set and plot the sum DoF as a function of with fixed and in Fig. 3. In addition, we consider another scenario in which additional antennas are employed only at transmitters, i.e., , and again plot the sum DoF as a function of . As can be seen in the figure, by using hybrid beam-forming, we can achieve a higher DoF and interestingly, it can reach up to the maximum DoF of six even when hybrid beam-forming is applied at transmitters only. Furthermore, note that when achieving this DoF, interference alignment combined with hybrid beam-forming is employed. From this point, we can see that hybrid beam-forming can provide an improved capability not only nulling out interferences but also aligning interferences at RF domain. Now, we examine a tendency of the sum DoF with respect to with the fixed number of antennas and RF chains at each node. Specifically, we set and plot the sum DoFs when and in Fig. 4. For comparison, we also plot the sum DoF of the full digital case where the number of RF chains is the same as the hybrid beam-forming case. From Fig. 4, we see that hybrid beam-forming can improve the sum DoF for all values of , and moreover, the slope also increases as the number of additional antennas at each receiver increases. ## V Numerical Simulation In this section, we numerically evaluate the average sum rate performance of the proposed hybrid beam-forming schemes for and cases to show that the sum DoFs stated in Theorems 1 and 2 are indeed achievable. 
For comparison, the sum DoFs of the full digital and the interference-free cases are also plotted. Here, we assume Rayleigh fading environment where each channel coefficient is drawn i.i.d from . In addition, we assume that all the noise power is normalized to unity and thus . Furthermore, to clearly capture the sum DoFs from the sum-rate graphs, we plot the average sum rates as a function of . ### V-a Average Sum Rate for the Two-user Case In Fig. 5, the average sum rates are plotted as a function of , where and . Note that the sum DoFs can be observed from the slopes in the figure. We can see that the sum DoFs obtained by the simulation are well matched with the sum DoFs stated in Lemma 1 and Theorem 1. Here, when the simulation is performed, the number of streams of hybrid beam-forming for each user is set by and for and for by following Theorem 1. As shown in the figure, the full digital scheme can only achieve the sum DoF of two, while the sum DoF of the interference-free channel is four. When hybrid beam-forming is employed, we can see by simulation that the sum DoF can be improved and even reach up to the interference-free DoF, as shown in Theorem 1, and therefore the performance gap between hybrid beam-forming and full digital cases dramatically increases as the SNR increases. ### V-B Average Sum Rate for the Three-user Case As in the previous subsection, the average sum rate is plotted as a function of in Fig. 6, where . When hybrid beam-forming is used, we consider the two different scenarios in which additional antennas are employed only at transmitters, i.e., for and 6, and additional antennas are employed both at transmitters and receivers, i.e., for and 6. Here, we adopt the distributed interference alignment333Note that the achievable scheme proposed in Theorem 2 requires an arbitrary large number of symbol extension. 
Therefore, in this subsection, instead of adopting the achievable scheme in Theorem 2 directly, we employ the DIA algorithm to numerically show that the sum DoF stated in Theorem 2 is indeed feasible. Here, Theorem 2 provides theoretical guidance when selecting a suitable number of streams for each user. (DIA) algorithm proposed in [33] for numerical simulation and the number of streams of hybrid beam-forming used for the simulation is given by Theorem 2. The slopes in the figure show that the sum DoFs stated in Theorem 2 is indeed achievable. The full digital scheme can only achieve the sum DoF of three, while the sum DoF of the interference-free channel is six as shown in the figure. As in the two-user case, the sum DoF of the full digital scheme is only half of that of the interference-free channel. When and , the hybrid beam-forming can achieve the maximum sum DoF of six as if there is no interference between users. Interestingly, for the case in which additional antennas are employed only at transmitters , the sum DoF can also be increased as compared to the full digital case, and the performance gain over the full digital case increases as the number of additional antennas increases. ## Vi Proof of Theorem 1 ### Vi-a Achievability In our achievable scheme, we will use only transmit RF chains out of RF chains of transmitter and receive RF chains out of RF chains of receiver , for all . Hence, from now on, we can equivalently consider the interference channel instead of the original channel, the interference channel. In addition, since our achievable scheme operates in a single time slot, we omit the time index for brevity. We design the input signal of transmitter as xi=V′iVisi, where is the transmit analog precoder, is the transmit digital precoder, and is the vector of transmitted Gaussian symbols of user . To be specific, beam-forming vectors in can be decomposed into two parts: V′i=[V′iiV′i0] • denotes the th beam-forming vector in such that and , where . 
Note that since the size of is given by and channel matrices are drawn i.i.d from a continuous distribution, the maximum number of linearly independent beam-forming vectors satisfying this condition is . Let denote the number of such vectors. • denotes the th beam-forming vector in whose coefficients are randomly generated from a continuous distribution and , where has a finite value. Hence, and for with probability one. Let denote the number of such vectors. In addition, we further restrict and to satisfy . In summary, we choose , , , , , and to satisfy the following conditions. 0≤d1=d11+d10≤min(M1,N1) (4) 0≤d2=d22+d20≤min(M2,N2) (5) 0≤d11≤max(0,M′1−N′2) (6) 0≤d22≤max(0,M′2−N′1) (7) 0≤d1+d20≤N′1 (8) 0≤d2+d10≤N′2 (9) Then the received signal at receiver at RF domain is given by yi =Hiixi+Hijxj+zi =HiiV′iVisi+HijV′jVjsj+zi =HiiV′iVisi+Hij[0M′j×djjV′j0]Vjsj+zi, (10) where (10) is due to the properties of and . Now we explain the beam-forming matrix at receiver . Denote as the receive analog precoder and as the receive digital precoder. We set such that and . Since we have rank(HiiV′iVi)=di rank(Hij[0M′j×djjV′j0]Vj)=dj0 di+dj0≤N′i, we can find satisfying these conditions. Therefore, after applying receive analog precoding, we obtain U′∗iyi =U′∗iHiiV′iVisi+U′∗iHij[0M′j×djjV′j0]Vjsj+U′∗izi =U′∗iHiiV′iVisi+U′∗izi. Recall that . Now, we set and as the left and right singular matrices of the matrix , respectively. Then we get parallel AWGN channels for user after applying the receive digital precoding as follows: U∗iU′∗iyi=y[e]i =U∗iU′∗iHiiV′iVisi+U∗iU′∗izi =Λisi+z[e]i, where is the diagonal matrix with the singular values of on the diagonal and . Therefore, we can see that each user achieves DoF via the proposed scheme, and thus the achievable total DoF is given by . Finally, by evaluating the conditions (4)–(9) using the Fourier-Motzkin elimination, we get the desired bound: Γ≥min{M1+M2,N1+N2,M1+N2,M2+N1,max{M′1</
http://math.stackexchange.com/questions/604635/of-any-52-integers-two-can-be-found-whose-difference-of-squares-is-divisible-by/604655
Of any 52 integers, two can be found whose difference of squares is divisible by 100 Prove that of any 52 integers, two can always be found such that the difference of their squares is divisible by 100. I was thinking about using recurrence, but it seems like pigeonhole may also work. I don't know where to start. - You may think wise to precise that the integers all have to be distinct. –  Pierre Arlaud Dec 13 '13 at 14:29 –  lab bhattacharjee Dec 13 '13 at 15:04 Certainly the result is still true even if the integers aren't distinct! –  hunter Dec 13 '13 at 15:32 @Wolfman: The whole world considers 0 to be divisible by 100, surely? –  TonyK Dec 14 '13 at 9:30 @WolfmanJoe Relevant: 0 is an even number. (0 is divisible by 2.) –  JiminP Dec 18 '13 at 19:45 Look at your $52$ integers mod $100$. Look at the pairs of additive inverses $(0,0)$, $(1,99)$, $(2,98)$, etc. There are $51$ such pairs. Since we have $52$ integers, two of them must belong to a pair $(x,-x)$. Then $x^2 - (-x)^2 = 0 \pmod{100}$, so that the difference of their squares is divisible by $100$. - Instead of $(x,-x)$ they might also be $(x,x)$ –  Hagen von Eitzen Dec 12 '13 at 20:56 @HagenvonEitzen, however this occurs precisely when $x=-x$, or when $x=0,50$ –  Dylan Yott Dec 12 '13 at 21:01 I recognize this solution isn't optimal, but I figured I'd give it because it seemed like the intended method and also has the advantage of not needing to know the number of squares mod $100$. –  Dylan Yott Dec 13 '13 at 7:08 This seems to have the advantage that it does not use the factorization of $100$. If you ask the same question modulo $2k$ (instead of modulo $100$), you would get $k+1$ pairs, so $k+2$ integers would certainly be enough. For odd "base", i.e. modulo $(2k+1)$, we also get $k+1$ pairs, and $k+2$ integers suffice again. –  Jeppe Stig Nielsen Dec 13 '13 at 13:43 +1 This is a simpler proof than the optimal result. Also: what the OP asked for. I think both answers have their merits. 
–  Tim Seguine Dec 13 '13 at 19:03 Only 23 numbers are needed. There are only $22$ squares mod $100$, so if you have $23$ integers, two must yield the same square mod $100$. That is, you must have two different values, $a$ and $b$, such that $a^2 \equiv b^2 \pmod {100}$. Hence, $100$ divides $a^2-b^2$. Here are the $22$ squares: $$0,1,4,9,16,21,24,25,29,36,41,44,49,56,61,64,69,76,81,84,89,96.$$ Added: Note that since there are $22$ squares mod $100$, we can create sets of $22$ integers for which there is no pair with the property that the difference of their squares is divisible by $100$. Hence, the $23$ here is best possible. - It appears you beat me by 35 seconds … –  Harald Hanche-Olsen Dec 12 '13 at 20:57 +1... Answer by @Dylan Yott solves the question, but yours is that of a mathematician. Instead of providing the answer, you found a more intelligent fact than the one asked, and provided a solution when there are even fewer integers. –  Nico Dec 13 '13 at 1:26 Every square is congruent to either $0$ or $1$ modulo $4$. Also, there are $11$ distinct squares modulo $25$. By the Chinese remainder theorem, there are only $2\cdot11=22$ distinct squares modulo $100$. So the $52$ in the problem can be improved to $23$. - Nice generalization of the technique used above by @Matthew Conroy. –  Pieter Geerkens Dec 13 '13 at 2:26 I was just about to ask how did you know there are 22 squares mod 100 –  zinking Dec 13 '13 at 5:42 Yes, thanks for reminding me of this theorem... since I attempted the problem before reading the other answers, I enumerated them and then had to work backward... good thing it wasn't modulo 10000. –  laindir Dec 13 '13 at 22:20 Fascinating how much attention this is getting. For future readers, here is how to quickly count the 11 distinct squares modulo 25: $a^2\equiv b^2$ is the same as $(a-b)(a+b)\equiv0\pmod{25}$. The most trivial case is $a\equiv b$, then comes $a+b\equiv 0$, which shows that we need only consider $0,1,\ldots,12$.
If neither congruence holds, then both of $a\pm b$ must be divisible by $5$, hence so must both $a$ and $b$. This takes care of the cases 0, 5, and 10, all having the same square 0. The remaining ten will have non-congruent squares. –  Harald Hanche-Olsen Dec 14 '13 at 14:17 Among 26 integers, by the pigeonhole principle, you have at least two whose difference is divisible by 25. Among 52 integers you have at least 3 integers in the same residue class mod 25. Pick those three integers. Again, by pigeonhole, two of them will have the same parity. Let them be $a$ and $b$. Thus $2\mid(a+b)$ and $2\mid(a-b)$, and so $4\mid(a+b)(a-b)$. Since $25\mid(a+b)(a-b)$ also, and 4 and 25 are relatively prime, you have $100\mid(a^2-b^2)$. - Nice, but this must not be the intended solution, because your use of the pigeonhole principle requires only 51 integers. –  bof Dec 13 '13 at 7:26 Look at your $52$ integers $\bmod 100$. A pair whose difference of squares is divisible by $100$ satisfies $a^2\equiv b^2\pmod{100}$, i.e. the product of the sum and the difference of the two numbers is divisible by $100$. Since arbitrary sets of $52$ integers are considered, there is no single optimal pair to point to. For example: look at the pairs of additive inverses $(0,100)$, $(1,99)$, $(2,98)$, etc. There are $51$ such pairs. Since we have $52$ integers, two of them must belong to a pair $(a,-a)$. Then $a^2-(-a)^2\equiv 0\pmod{100}$, so that the difference of their squares is divisible by $100$. Likewise, any pair of integers whose sum and difference are both multiples of $10$ (for example $(0,10)$, $(10,20)$, $(20,30)$, etc.) also has a difference of squares divisible by $100$. Similarly, a sum that is a multiple of $20$ together with a difference that is a multiple of $5$ produces the same result. So the claim is proved. But the following analysis can be further improved by the Chinese remainder theorem.
There are $11$ distinct squares modulo $25$. By the Chinese remainder theorem, there are only $2\cdot 11=22$ distinct squares modulo $100$. So the $52$ in the problem can be improved to $23$. Indeed, a number ending in $0$ has a square ending in $0$; likewise $1$ ends in $1$, $2$ in $4$, $3$ in $9$, $4$ in $6$, $5$ in $5$, $6$ in $6$, $7$ in $9$, $8$ in $4$, and $9$ in $1$. Also, every square is congruent to either $0$ or $1$ modulo $4$. So $23$ integers always suffice for two of them to have a difference of squares divisible by $100$. - If the difference $n^2 - m^2$ between two integer squares is an even number, then the numbers $n$ and $m$ must either be both odd or both even. $n^2 - m^2$ must therefore always contain the factor 4, and we only have to consider squares $\pmod{25}$. These are the 11 numbers 0, 1, 4, 6, 9, 11, 14, 16, 19, 21 and 24. But 23 numbers must always contain 12 numbers either odd or even. -
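The counting claims in the answers above are easy to verify by brute force in Python:

```python
# Enumerate the distinct squares modulo 100 and modulo 25
squares_mod_100 = sorted({x * x % 100 for x in range(100)})
assert squares_mod_100 == [0, 1, 4, 9, 16, 21, 24, 25, 29, 36, 41, 44,
                           49, 56, 61, 64, 69, 76, 81, 84, 89, 96]
assert len(squares_mod_100) == 22

squares_mod_25 = {x * x % 25 for x in range(25)}
assert sorted(squares_mod_25) == [0, 1, 4, 6, 9, 11, 14, 16, 19, 21, 24]

# A 22-element witness set with no valid pair, showing 23 is optimal:
# one representative per square class, so all squares differ mod 100
witness = [min(x for x in range(100) if x * x % 100 == s) for s in squares_mod_100]
assert all((a * a - b * b) % 100 != 0 for a in witness for b in witness if a != b)
```

The first assertions reproduce the lists of 22 squares mod 100 and 11 squares mod 25 quoted above; the witness set confirms that 22 integers are not enough, so the bound of 23 is best possible.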
http://math.stackexchange.com/questions/213901/vector-fields-on-torus?answertab=votes
# Vector fields on torus Let $A=d/dx$ and $B=d/dy$ be vector fields on $\mathbb{R}^2$, prove that they induce vector fields $X$ and $Y$ on the torus $T$ by $X(f)=A(f(\pi)), Y(f)=B(f(\pi))$ where $\pi$ is quotient map from $\mathbb{R}^2$ to torus. - Welcome to math.SE. Here we answer questions for the people who might have them. Problem is, you have yet to propose one. What we like to see here is questions or very polite requests, together with perhaps some of your own thoughts on the topic. Once that is supplied, people will feel more inclined towards putting effort into solving someone else's problem. –  Arthur Oct 14 '12 at 22:10 On the definition(mathworld.wolfram.com/VectorField.html) it is written that vector field is a map from $R^n$ to $R^n$. By this definition we have that $X$ is vector field, nothing to prove? –  user44674 Oct 14 '12 at 22:25 It doesn't say what a vector field on the torus is. Until you have that piece of information, showing whether or not anything is a vector field on the torus is impossible. So that's where I'd go. See how your source (be it internet or a book) would define a vector field on a torus, and see if $X$ and $Y$ can fit that role. –  Arthur Oct 14 '12 at 22:30 Well, the torus can be defined as the quotient space $T=\Bbb R^2/\sim$, where $(u,v)\sim(u',v')$ iff $(u'-u),(v'-v)\in\Bbb Z$. Best is to draw on a squared paper.. Vectors of $A$ go right, $B$ goes up, with unit speed. In particular, it is represented by the unit square $[0,1]^2$. Practically, all we have to prove is that if points $U\sim V$ then $AU = AV$ and $BU=BV$. In this case, all smooth functions $f$ on $T$ can be 'lifted' to $\Bbb R^2$, i.e. can be viewed as a $\Bbb Z$-periodic function $f:\Bbb R^2\to\Bbb R$, that is, $fU\sim fV$ if $U\sim V$. And so..
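The well-definedness condition in the answer — that $A(f)$ agrees on $\sim$-equivalent points whenever $f$ is $\Bbb Z^2$-periodic — can be illustrated numerically (a small sketch, not a proof):

```python
import math

# f is Z^2-periodic, hence descends to a smooth function on the torus T
f = lambda x, y: math.sin(2 * math.pi * x) * math.cos(2 * math.pi * y)

def A(g, x, y, h=1e-6):
    """Central-difference approximation of A(g) = (d/dx) g."""
    return (g(x + h, y) - g(x - h, y)) / (2 * h)

# (0.3, 0.4) ~ (1.3, 2.4) on the torus, and A(f) takes the same value at
# both lifts, so the induced field X(f) is well defined on the quotient
assert abs(A(f, 0.3, 0.4) - A(f, 1.3, 2.4)) < 1e-6
```

The same check applies to $B=d/dy$ by symmetry; the point is that periodicity of the lifted $f$ makes the derivative independent of the chosen representative.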
https://py-pol.readthedocs.io/en/master/source/examples/advanced/Example_3_Analysis.html
# 5.2.1. Example 3: Analysis¶ In Optics, polarimetry is the discipline that studies the measurement and interpretation of the polarization of light waves, and the behavior of optical elements upon the polarization of light waves. In other words, it studies the measurement and analysis of Jones/Stokes vectors (light waves) and Jones/Mueller matrices (optical objects). This example is devoted to showing the different analysis possibilities offered by py_pol. [9]: import numpy as np from py_pol import degrees from py_pol.jones_vector import Jones_vector from py_pol.jones_matrix import Jones_matrix from py_pol.stokes import Stokes, create_Stokes from py_pol.mueller import Mueller ## 5.2.1.1. Analysis of light waves¶ ### 5.2.1.1.1. Theory of Jones vectors¶ A transversal light wave can be described as: $$\overrightarrow{E}=\left[\begin{array}{c} E_{x}\\ E_{y}\\ 0 \end{array}\right]e^{i\left(kz-\omega t\right)}e^{i\varphi}=\overrightarrow{E}_{pol}e^{i\left(kz-\omega t\right)}$$ where $$\varphi$$ is the global phase, $$z$$ the propagation direction of light, $$k=\left\Vert \overrightarrow{k}\right\Vert$$ the wavevector modulus, $$\omega$$ the angular frequency and $$\overrightarrow{E}_{pol}$$ the polarization Jones vector composed of two complex components $$E_x$$ and $$E_y$$. Any complex 2x1 vector can describe a physically realizable light wave polarization state. The Jones vector is completely described by four parameters. There are several sets of parameters that can be used. Some of the most used ones are the following: 1. The real and imaginary parts of $$E_x$$ and $$E_y$$. 2. Global phase (phase of $$E_x$$, $$\varphi$$), electric field amplitudes ($$\left|E_{x}\right|$$ and $$\left|E_{y}\right|$$) and the phase difference between components ($$\delta$$). 3.
Global phase, semimajor and semiminor ellipse axes ($$a$$ and $$b$$ respectively) and the semimajor axis azimuth ($$\phi$$). 4. Global phase, wave intensity ($$I=\left|E_{x}\right|^{2}+\left|E_{y}\right|^{2}$$) and the characteristic angles $$\alpha$$ (ratio angle: $$\tan\alpha=\frac{\left|E_{y}\right|}{\left|E_{x}\right|}$$) and $$\delta$$. 5. Global phase, wave intensity, azimuth ($$\phi$$) and ellipticity angle ($$\tan\chi=\frac{b}{a}$$). Most of these parameters can be easily visualized in the ellipse produced by the real part of $$\overrightarrow{E}$$. It is worth noting that not all values of the four angles are allowed. Specifically: 1. $$\;\alpha\;\in\;[0º, 90º]$$ 2. $$\;\delta\;\in\;[0º, 360º)$$ 3. $$\;\phi\;\in\;[0º, 180º)$$ 4. $$\;\chi\;\in\;[-45º, 45º]$$ Also, there are some cases where two or more different values correspond to the same polarization state. For example, right-handed circular polarization corresponds to $$\chi=45º$$ and any azimuth value. Analyzing a Jones vector means calculating one of these sets of four parameters. We chose to mainly use the last two sets of parameters, as their physical meaning is easily understood and highly compatible with polarimetry experiments. In some special cases, one of the parameters used to describe the light wave may be undetermined. For example, circular polarization states do not have a determined azimuth. ### 5.2.1.1.2. Example of analysis of Jones vectors¶ All the parameters described above can be easily calculated from a Jones_vector object through the parameters subclass. This subclass also provides some other parameters that can be useful.
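As a cross-check independent of py_pol, the last two parameter sets can be computed directly from the complex components with plain NumPy, using the standard ellipse relations $$\tan2\phi=\tan2\alpha\cos\delta$$ and $$\sin2\chi=\sin2\alpha\sin\delta$$. This is only a minimal sketch; the helper name `jones_parameters` is ours and not part of the library:

```python
import numpy as np

def jones_parameters(Ex, Ey):
    """Intensity, characteristic angles and ellipse angles from the
    two complex Jones-vector components."""
    ax, ay = np.abs(Ex), np.abs(Ey)
    I = ax**2 + ay**2                        # total intensity
    alpha = np.arctan2(ay, ax)               # ratio angle, in [0º, 90º]
    delta = np.angle(Ey) - np.angle(Ex)      # phase difference between components
    # Azimuth in [0º, 180º) and ellipticity angle in [-45º, 45º]
    phi = 0.5 * np.arctan2(2 * ax * ay * np.cos(delta), ax**2 - ay**2) % np.pi
    chi = 0.5 * np.arcsin(2 * ax * ay * np.sin(delta) / I)
    return I, alpha, delta, phi, chi

# Linear polarization at 45º: azimuth ~45º, ellipticity ~0º
I, alpha, delta, phi, chi = jones_parameters(1 / np.sqrt(2), 1 / np.sqrt(2))
print(np.degrees(phi), np.degrees(chi))  # approximately 45.0 and 0.0
```

A circularly polarized input, e.g. `(1, 1j)/sqrt(2)`, gives an ellipticity angle of 45º and an undetermined azimuth, matching the degeneracy discussed above.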
[2]: # Create a random Jones vector E = Jones_vector('Random polarization') E_real = np.random.rand(2, 10) E_imag = np.random.rand(2, 10) E.from_matrix(M=E_real+1j*E_imag) print(E) Random polarization = [+1.000+0.373j] [+0.745+0.982j] [+0.421+0.062j] [+0.328+0.216j] [+0.520+0.298j] [+0.350+0.941j] [+0.089+0.464j] [+0.845+0.579j] [+0.675+0.215j] [+0.468+0.175j] [+0.797+0.388j] [+0.578+0.631j] [+0.915+0.942j] [+0.177+0.308j] [+0.686+0.451j] [+0.179+0.659j] [+0.181+0.677j] [+0.443+0.927j] [+0.024+0.792j] [+0.952+0.084j] [3]: # Calculate a set of parameters gp = E.parameters.global_phase(verbose=True) I = E.parameters.intensity(verbose=True) azimuth = E.parameters.azimuth(verbose=True) ellipticity = E.parameters.ellipticity_angle(verbose=True) The global phase of Random polarization is (deg.): [20.47881277 52.82310549 8.37669939 33.4224342 29.80730163 69.62319614 79.10493956 34.41232349 17.71335467 20.46600872] The mean value is 36.62281760625346 +- 22.124347087961016 The intensity of Random polarization is (a.u.): [1.92452675 2.25289393 1.90468351 0.28056211 1.0330991 1.47356069 0.71500272 2.10269023 1.13019968 1.16393945] The mean value is 1.3981158181111537 +- 0.6117112533271021 The azimuth of Random polarization is (deg.): [39.67195812 34.73076642 75.06633859 41.80155297 53.92044418 34.17850188 56.03811077 45.103272 54.3856788 62.89374159] The mean value is 49.77903653215532 +- 12.435733679626598 The ellipticity angle of Random polarization is (deg.): [ 2.69254418 -2.492722 10.44696024 13.32323771 1.67171187 2.38605174 -1.88274892 15.02903114 34.78040804 -6.29793778] The mean value is 6.9656536217808736 +- 11.411191797005818 [4]: # Calculate some other parameters dlp = E.parameters.degree_linear_polarization(verbose=True) dcp = E.parameters.degree_circular_polarization(verbose=True) The degree of linear polarization of Random polarization is: [0.99558642 0.99621681 0.93424232 0.89379074 0.9982979 0.99653348 0.9978412 0.86551827 0.34921296 0.97593246] The mean value 
is 0.9003172572480562 +- 0.18933560250702827 The degree of circular polarization of Random polarization is: [ 0.09384921 -0.08690266 0.35663887 0.44848423 0.05832064 0.08319265 -0.06567304 0.50087735 0.93704339 -0.21807299] The mean value is 0.2107757665675825 +- 0.3303852934869926 The checks subclass has some methods that allow analyzing the Jones vectors to see if some conditions are met. [15]: cond = E.checks.is_linear(verbose=True) Random polarization is linearly polarized: [False False False False False False False False False False] The mean value is 0.0 +- 0.0 ### 5.2.1.1.3. Theory of Stokes vectors¶ Stokes vectors can also be used to represent the polarization state of light waves. They are composed of four real elements: $$S=\left[\begin{array}{c} I\\ Q\\ U\\ V \end{array}\right]=\left[\begin{array}{c} S_{0}\\ S_{1}\\ S_{2}\\ S_{3} \end{array}\right]=\left[\begin{array}{c} I_{total}\\ I_{0\text{º}}-I_{90\text{º}}\\ I_{45\text{º}}-I_{135\text{º}}\\ I_{right}-I_{left} \end{array}\right]$$ The four elements of the Stokes vector represent some intensities: $$I_{total}$$ is the total intensity, $$I_x$$ is the partial intensity of linear polarization of angle $$x$$, and $$I_{right}$$ and $$I_{left}$$ are the partial right-handed and left-handed circular polarizations. Not every 4x1 real vector is a physically realizable Stokes vector, as it must fulfill the conservation of energy: $$S_{0}^{2}\geq S_{1}^{2}+S_{2}^{2}+S_{3}^{2}$$. There are some differences between Stokes and Jones vectors. Stokes vectors are defined in terms of intensities, while Jones vectors are defined in terms of electric fields. As a consequence, Jones vectors carry the information of the wave global phase, while Stokes vectors can describe partially depolarized light. Totally polarized states (pure states) have their electric field totally described by the equation above. The electric field of totally depolarized light (or natural light) is random (with its modulus limited by the wave intensity).
Partially polarized light can be divided into the sum of a totally polarized state plus a totally unpolarized state: $$S=\wp S_{pol}+(1-\wp)S_{depol}$$, where $$\wp$$ is the degree of polarization and $$S_{pol}$$ the pure (totally polarized) Stokes vector. The totally polarized Stokes vector fulfills the equation $$S_{0}^{2} = S_{1}^{2}+S_{2}^{2}+S_{3}^{2}$$, while a totally unpolarized Stokes vector has $$S_1=S_2=S_3=0$$. NOTE: Even though Stokes vectors neither contain nor use the global phase of the light wave, Stokes objects do. So it is possible to calculate the result of interference experiments using Stokes objects. A Stokes vector can be analyzed by describing some of its parameters. As with Jones vectors, there are several sets of four parameters that completely describe a Stokes vector. The most usual are: 1. The four components of the vector. 2. Polarization degree, wave total intensity and the characteristic angles $$\alpha$$ and $$\delta$$. 3. Polarization degree, wave total intensity, azimuth ($$\phi$$) and ellipticity angle ($$\chi$$). It is important to note that the four angles are defined only for pure states. In the case of partially depolarized states, those angles refer to the totally polarized part of the Stokes vector. ### 5.2.1.1.4. Example of analysis of Stokes vectors¶ All the parameters described above can be easily calculated from a Stokes object through the parameters subclass. This subclass also provides some other parameters that can be useful. [7]: # Create a random Stokes vector S = Stokes('Random polarization') S_matrix = np.random.rand(4, 10) S_matrix[0,:] = np.sqrt(3) # This ensures that the Stokes vectors are physically realizable.
S.from_matrix(M=S_matrix) print(S) Random polarization = [+1.732] [+1.732] [+1.732] [+1.732] [+1.732] [+1.732] [+1.732] [+1.732] [+1.732] [+1.732] [+0.561] [+0.186] [+0.951] [+0.158] [+0.086] [+0.228] [+0.989] [+0.619] [+0.674] [+0.598] [+0.067] [+0.690] [+0.973] [+0.774] [+0.379] [+0.225] [+0.109] [+0.932] [+0.902] [+0.629] [+0.949] [+0.652] [+0.550] [+0.132] [+0.178] [+0.846] [+0.966] [+0.973] [+0.817] [+0.579] [11]: # Calculate a set of parameters P = S.parameters.degree_polarization(verbose=True) I = S.parameters.intensity(verbose=True) azimuth = S.parameters.azimuth(verbose=True) ellipticity = S.parameters.ellipticity_angle(verbose=True) The degree of polarization of Random polarization is: [0.63762812 0.55837176 0.84697073 0.4626651 0.24694762 0.5224137 0.80099099 0.85597778 0.80317173 0.60228119] The mean value is 0.6337418716831728 +- 0.18694668753460006 The intensity of Random polarization is (a.u.): [1.73205081 1.73205081 1.73205081 1.73205081 1.73205081 1.73205081 1.73205081 1.73205081 1.73205081 1.73205081] The mean value is 1.7320508075688772 +- 0.0 The azimuth of Random polarization is (deg): [ 3.40989654 37.46278842 22.82597776 39.23257653 38.59205754 22.29215954 3.14242587 28.19732688 26.62926903 23.23188297] The mean value is 24.501636108744115 +- 12.296732222924637 The ellipticity angle of Random polarization is (deg): [29.61105375 21.18829465 11.00851636 4.73496352 12.30511127 34.61733771 22.07461622 20.50166121 17.97886053 16.86008132] The mean value is 19.088049654937674 +- 8.310918395121357 [12]: # Calculate some other parameters dlp = S.parameters.degree_linear_polarization(verbose=True) dcp = S.parameters.degree_circular_polarization(verbose=True) The degree of linear polarization of Random polarization is: [0.32628158 0.41248641 0.78520323 0.45635994 0.22451534 0.18521715 0.57473351 0.64598207 0.65012776 0.50095269] The mean value is 0.47618596856749357 +- 0.1843437348289973 The degree of circular polarization
of Random polarization is: [0.54782292 0.3763429 0.31751426 0.07612225 0.10283961 0.48847793 0.55791393 0.56160941 0.47161289 0.33434867] The mean value is 0.3834604771096698 +- 0.16970620839618467 The checks subclass has some methods that allow analyzing the Stokes vectors to see if some conditions are met. [15]: cond = S.checks.is_linear(verbose=True) Random polarization is linearly polarized: [False False False False False False False False False False] The mean value is 0.0 +- 0.0 [17]: S.M[0,0] = 0 # This forces the first Stokes vector to not be physically realizable cond = S.checks.is_physical(verbose=True) Random polarization is physically realizable: [False True True True True True True True True True] The mean value is 0.9 +- 0.30000000000000004 d:\codigo_ucm\py_pol\py_pol\stokes.py:3062: RuntimeWarning: invalid value encountered in less_equal DOP = self.parent.parameters.degree_polarization(out_number=False) ## 5.2.1.2. Analysis of optical elements¶ ### 5.2.1.2.1. Theory of Jones matrix objects¶ Pure optical elements (elements that do not depolarize totally polarized light states) can be described by a Jones matrix, a 2x2 matrix of complex numbers: $$J=\left[\begin{array}{cc} J_{00} & J_{01}\\ J_{10} & J_{11} \end{array}\right]$$. The effect of an optical element upon an incident light wave is calculated by multiplying the Jones matrix of the optical object by the Jones vector of the light wave: $$E_{out}=J\,E_{in}$$. There are three things that an optical object can do to a light wave in Jones formalism: 1. Diattenuate (vary the electric field amplitude). 2. Retard (vary the delay between electric field components). 3. Increase the global phase. The first two phenomena define two types of optical elements: diattenuators and retarders. Both diattenuators and retarders may increase the global phase of light waves. Diattenuators are elements that vary the electric field amplitude of the incident wave.
Usually, this variation is different on both electric field components. These variations are described by the maximum and minimum field transmissions ($$p_1$$ and $$p_2$$ respectively). Alternatively, intensity transmissions can be used: $$T_{max}=p_1^2$$ and $$T_{min}=p_2^2$$. Passive diattenuators have both $$T_{i}\leq1$$, while active diattenuators have one or both $$T_{i}>1$$. The latter are usually called amplifiers. Diattenuators are often called polarizers (the reason will be explained in the Mueller formalism section) and usually have $$p_{1}\simeq1$$ and $$p_{2}\simeq0$$. Retarders introduce a phase delay between electric field components called retardance ($$\Delta$$). This allows changing from linear to elliptical polarization. Retardance lies between 0º and 180º, as any other value is equivalent to a retardance in that range combined with a rotation. Special cases of retarders are quarter and half-waveplates, which present a retardance of 90º and 180º respectively. Every Jones matrix, being a 2x2 matrix, has at least two eigenvectors, often referred to as eigenstates. If those eigenstates are orthogonal, the optical element is referred to as homogeneous. The eigenvalues of a diattenuator are $$p_1e^{i\varphi}$$ and $$p_2e^{i\varphi}$$. The eigenstate associated with $$p_1$$ is called the transmission axis and the eigenstate associated with $$p_2$$ the extinction axis (due to the usual values for polarizers). In the case of retarders, the two eigenvalues are $$e^{i\varphi}$$ and $$e^{i(\varphi + \Delta)}$$. The eigenstates associated with those eigenvalues are called the fast and slow axes respectively, due to the difference in phase. Independently of the optical element, its eigenstates are usually characterized using two angles: the characteristic angles, or the azimuth and ellipticity. The reason is that, if a Jones vector is an eigenvector of $$J$$, it remains an eigenvector even if its intensity or global phase is varied.
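This eigenvalue structure is easy to verify numerically without py_pol. As a minimal sketch, take a quarter-wave plate with horizontal fast axis ($$\Delta$$ = 90º), written directly as the diagonal Jones matrix diag(1, i):

```python
import numpy as np

# Quarter-wave plate, fast axis horizontal; the global phase is chosen so
# that J00 is real and positive (the reference used in the text).
J_qwp = np.diag([1, 1j])

# The eigenvalues are e^{i*phi} and e^{i*(phi + Delta)}; the retardance is
# the phase difference between them.
w, v = np.linalg.eig(J_qwp)
retardance = np.degrees(np.abs(np.angle(w[1] / w[0])))
print(retardance)  # ~90º, as expected for a quarter-wave plate

# The eigenstates are orthogonal, so this retarder is homogeneous.
print(np.abs(np.vdot(v[:, 0], v[:, 1])))  # ~0
```

The eigenvectors returned here are the horizontal and vertical linear states, i.e. the fast and slow axes of the plate.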
If the optical element is homogeneous, then only one eigenstate is characterized, as the other is orthogonal to the given one. In the case of diattenuators and retarders, the given eigenstate is usually the transmission axis and the fast axis respectively. The global phase introduced by the Jones matrix can be a difficult topic. First, this parameter is rarely measured, as most experiments only consider intensity and do not introduce interferences between light waves. Also, the global phase introduced by a Jones matrix may depend on the incident light wave. That makes it difficult to define a good reference for global phase. We choose that reference to be when $$J_{00}$$ is real and positive. ### 5.2.1.2.2. Examples of characterization of Jones matrices¶ Many parameters can be easily calculated from a Jones_matrix object through the parameters subclass. [22]: # Create a random Jones matrix J = Jones_matrix('Random element') M_real = np.random.rand(2, 2, 5) M_imag = np.random.rand(2, 2, 5) J.from_matrix(M=M_real+1j*M_imag) print(J) Random element = [+0.210+0.431j +0.205+0.113j] [+0.556+0.783j +0.282+0.635j] [+0.702+0.701j +0.101+0.490j] [+0.854+0.932j +0.235+0.163j] [+0.643+0.736j +0.098+0.389j] [+0.523+0.399j +0.995+0.051j] [+0.855+0.501j +0.146+0.668j] [+0.164+0.184j +0.214+0.386j] [+0.967+0.451j +0.548+0.450j] [+0.922+0.079j +0.328+0.732j] [24]: # Calculate the transmissions and retardance of the random element trans = J.parameters.transmissions(kind='all', verbose=True) ret = J.parameters.retardance(verbose=True) The intensity transmissions of Random element are: Maximum (int.) [1.64695644 2.79459313 1.41513167 3.1924615 2.50762893] Minimum (int.) [0.06341533 0.06179514 0.07474705 0.12867671 0.1077289 ] The mean value of param Maximum (int.) is 2.311354334237151 +- 0.6772049792862028 The mean value of param Minimum (int.) is 0.0872726272050373 +- 0.026487434381375564 The field transmissions of Random element are: Maximum (int.)
[1.28333801 1.67170366 1.18959307 1.78674606 1.58354947] Minimum (int.) [0.251824 0.24858629 0.27339907 0.35871537 0.32822082] The mean value of param Maximum (int.) is 1.502986054752848 +- 0.2288826193829923 The mean value of param Minimum (int.) is 0.2921491068198357 +- 0.04383522087784595 The retardance of Random element is (deg.): [48.97269025 98.29319062 12.22748499 30.01450328 42.63314305] The mean value is 46.42820243893986 +- 28.809555663155344 d:\codigo_ucm\py_pol\py_pol\jones_matrix.py:2446: ComplexWarning: Casting complex values to real discards the imaginary part R[cond1] = 2 * np.arccos(np.sqrt(num / den) * co) The checks subclass also includes some interesting methods to calculate if an optical element fulfills some conditions. For example, we can calculate if an element is a homogeneous diattenuator: [25]: cond = J.checks.is_diattenuator(verbose=True) Random element is an homogeneous diattenuator: [False False False False False] The mean value is 0.0 +- 0.0 Homogeneous diattenuators and retarders are well understood. They are characterized by the transmissions (diattenuators) and retardance (retarders), plus the transmission/fast eigenstate (as the other one is perpendicular). The analysis subclass has two methods to extract those parameters from a homogeneous diattenuator or retarder.
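As a library-independent sanity check (plain NumPy; not a py_pol API), the field transmissions of any Jones matrix are simply its singular values, since the SVD factors the matrix into unitary (retarder-like) and diagonal (diattenuator-like) parts:

```python
import numpy as np

# Linear diattenuator with p1 = 0.9, p2 = 0.2 and transmission axis at 30º:
# rotate the diagonal diattenuator diag(p1, p2) by the axis angle.
p1, p2 = 0.9, 0.2
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
J = R @ np.diag([p1, p2]) @ R.T

# The singular values (sorted in decreasing order) are p1 and p2.
s = np.linalg.svd(J, compute_uv=False)
print(s)  # ~ [0.9, 0.2]
```

The square of each singular value gives the corresponding intensity transmission, matching the `kind='all'` output above.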
For example, we can create the most general diattenuator and compare the calculations with the original values: [47]: # Create random values N = 1000 p1 = np.random.rand(N) * 0.5 + 0.5 p2 = np.random.rand(N) * 0.5 alpha = np.random.rand(N) * 90 * degrees delay = np.random.rand(N) * 360 * degrees # Create the py_pol object J = Jones_matrix('General random diattenuator') J.diattenuator_charac_angles(p1=p1, p2=p2, alpha=alpha, delay=delay) # Analyze it trans, angles = J.analysis.diattenuator(angles='charac', transmissions='field') # Compare the results print('Error in p1:') error = np.linalg.norm(p1 - trans[0]) print(error) print('Error in p2:') error = np.linalg.norm(p2 - trans[1]) print(error) print('Error in alpha:') error = np.linalg.norm(alpha - angles[0])/degrees print(error) print('Error in delay') error = np.linalg.norm(delay - angles[1])/degrees print(error) Error in p1: 3.575193960459936e-15 Error in p2: 1.2557344989130534e-14 Error in alpha: 4.377375969826691e-13 Error in delay 4.985750427120611e-13 Same for the most general homogeneous retarder with randomized values. 
[11]: # Create random values N = 1000 R = np.random.rand(N) * 180 * degrees alpha = np.random.rand(N) * 90 * degrees delay = np.random.rand(N) * 360 * degrees # Create the py_pol object J = Jones_matrix('General random retarder') J.retarder_charac_angles(R=R, alpha=alpha, delay=delay) # Analyze it R_calc, angles = J.analysis.retarder(angles='charac') # Compare the results print('Error in retardance:') error = np.linalg.norm(R - R_calc)/degrees print(error) print('Error in alpha:') error = np.linalg.norm(alpha - angles[0])/degrees print(error) print('Error in delay') error = np.linalg.norm(delay - angles[1])/degrees print(error) Error in retardance: 8.913732066593084e-12 Error in alpha: 1.739757186343351e-09 Error in delay 2.711462214533282e-13 d:\codigo_ucm\py_pol\py_pol\jones_matrix.py:2446: ComplexWarning: Casting complex values to real discards the imaginary part R[cond1] = 2 * np.arccos(np.sqrt(num / den) * co) The analysis of homogeneous diattenuators and retarders is very useful and easy to understand. However, not every optical element is a homogeneous diattenuator or retarder. Some of them may be inhomogeneous diattenuators or retarders, easy to understand but not so easy to identify. Even more common are optical elements that show both diattenuation and retardance. We provide an easy method to analyze these matrices taking advantage of the polar decomposition theorem. This theorem states that every Jones matrix can be decomposed into the product of a homogeneous diattenuator $$J_D$$ and a homogeneous retarder $$J_R$$. There are two possible combinations: 1. $$J=J_D*J_R$$ 2. $$J=J_R*J_D$$ Then, each element can be easily analyzed separately.
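The theorem can be sketched with plain NumPy (a minimal illustration, not the py_pol implementation): the Hermitian factor $$J_D=\sqrt{J^{\dagger}J}$$ plays the role of the homogeneous diattenuator and the remaining unitary factor is the retarder, giving the $$J=J_R\,J_D$$ ordering:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.random((2, 2)) + 1j * rng.random((2, 2))  # random Jones matrix

# Hermitian (diattenuator-like) factor J_D = sqrt(J^dagger J), computed
# through the eigendecomposition of the Hermitian matrix J^dagger J.
vals, vecs = np.linalg.eigh(J.conj().T @ J)
J_D = vecs @ np.diag(np.sqrt(vals)) @ vecs.conj().T

# Unitary (retarder-like) factor: J_R = J J_D^{-1}
J_R = J @ np.linalg.inv(J_D)

assert np.allclose(J_R @ J_D, J)                   # J = J_R * J_D
assert np.allclose(J_R.conj().T @ J_R, np.eye(2))  # J_R is unitary
assert np.allclose(J_D, J_D.conj().T)              # J_D is Hermitian
```

Swapping the order ($$J=J_D'\,J_R$$) corresponds to using $$\sqrt{JJ^{\dagger}}$$ instead, which yields a different but equally valid diattenuator.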
This means that the Jones matrix is decomposed into seven parameters: $$p_1$$ and $$p_2$$ from $$J_D$$, $$\phi_D$$ and $$\chi_D$$ (or $$\alpha_D$$ and $$\delta_D$$) of the transmission axis of the diattenuator, and $$\Delta$$, $$\phi_R$$ and $$\chi_R$$ (or $$\alpha_R$$ and $$\delta_R$$) of the fast axis of the retarder, plus a global phase factor. Here we present an example of the decomposition and analysis of random matrices. [13]: # Create random Jones matrices M_real = np.random.rand(2,2,5) M_imag = np.random.rand(2,2,5) # Create the py_pol object J = Jones_matrix('Random element') J.from_matrix(M=M_real + 1j*M_imag) # Decompose the matrix and measure all the relevant parameters Jr, Jd, parameters = J.analysis.decompose_pure(verbose=True, all_info=True) ------------------------------------------------------ Polar decomposition of Random element as M = RP. Analysis of Random element Diattenuator as polarizer: - Transmissions of Random element Diattenuator are: Max. transmission [2.31222555 2.17868416 2.93798912 3.40565529 2.94364259] Min. transmission [0.03689747 0.02998905 0.09149645 0.23282748 0.0017845 ] p1 [1.52060039 1.47603664 1.71405633 1.84544176 1.71570469] p2 [0.19208714 0.17317347 0.3024838 0.482522 0.04224337] The mean value of param Max. transmission is 2.755639340420053 +- 0.451798732850638 The mean value of param Min.
transmission is 0.07859899000810314 +- 0.08240774087740922 The mean value of param p1 is 1.654367963003518 +- 0.13676981906708907 The mean value of param p2 is 0.23850195639484006 +- 0.14736284064830235 - Angles of Random element Diattenuator are: Alpha [64.27653861 43.52943726 49.92039375 53.09474915 51.18469335] Delay [ 12.23299094 356.7133992 0.59222081 42.93019956 13.20098798] Azimuth [64.59802752 43.52701883 49.92065146 55.81474284 51.3471884 ] Ellipticity angle [ 4.76902817 -1.64113345 0.29175343 20.42562493 6.4446046 ] The mean value of param Alpha is 52.40116242268018 +- 6.7505131613387785 The mean value of param Delay is 85.13395969893591 +- 136.50843095527404 The mean value of param Azimuth is 53.041525811294676 +- 6.991136437843724 The mean value of param Ellipticity angle is 6.0579755366576915 +- 7.7558469944698105 Analysis of Random element Retarder as retarder: - Retardance of Random element Retarder is: [179.98294741 172.84269897 118.1761176 83.63765305 116.12427906] The mean value is 134.1527392185979 +- 36.68616282010433 - Angles of Random element Retarder are: Alpha [35.71619068 36.88159097 39.46679517 28.68533112 42.5317236 ] Delay [338.11561768 16.39980999 337.94641839 176.09702366 358.96207944] Azimuth [ 35.05001898 36.55651628 39.04203736 151.34489001 42.53132055] Ellipticity [-10.34567023 7.86418204 -10.81131376 1.6431273 -0.51703504] The mean value of param Alpha is 36.65632630836703 +- 4.623643411436292 The mean value of param Delay is 245.50418983253775 +- 132.13728200506344 The mean value of param Azimuth is 60.90495663588829 +- 45.29080180391635 The mean value of param Ellipticity is -2.433341937983976 +- 7.198946928846067 Random element decomposition mean square error: [1.07854610e-15 1.08424519e-15 8.33129733e-16 8.17730320e-16 9.30138993e-15] The mean value is 2.62300825445496e-15 +- 3.341156399458237e-15 ### 5.2.1.2.3. Theory of Mueller objects¶ All the theory of Jones matrices can be directly extrapolated to pure Mueller matrices. 
However, Mueller-Stokes formalism allows working with partially polarized light and optical elements which depolarize light. This allows defining a third basic optical element: the depolarizer. A depolarizer is an optical element that increases the depolarization degree of incoming light waves. A Mueller matrix is a 4x4 matrix of real elements. Its 16 components are usually divided into blocks that allow describing its properties easily: $$M=\left[\begin{array}{cccc} M_{00} & M_{01} & M_{02} & M_{03}\\ M_{10} & M_{11} & M_{12} & M_{13}\\ M_{20} & M_{21} & M_{22} & M_{23}\\ M_{30} & M_{31} & M_{32} & M_{33} \end{array}\right]=M_{00}\left[\begin{array}{cc} 1 & \overrightarrow{D}\\ \overrightarrow{P} & m \end{array}\right]=M_{00}\left[\begin{array}{cccc} 1 & D_{1} & D_{2} & D_{3}\\ P_{1} & m_{11} & m_{12} & m_{13}\\ P_{2} & m_{21} & m_{22} & m_{23}\\ P_{3} & m_{31} & m_{32} & m_{33} \end{array}\right]$$. This divides the Mueller matrix into four blocks: 1. $$M_{00}$$: Mean transmission coefficient. This number describes the mean transmission of the object. 2. $$\overrightarrow{D}$$: Diattenuation vector. This 1x3 vector describes the ability of the object to reduce the intensity of the light that gets through it. 3. $$\overrightarrow{P}$$: Polarizance vector. This 3x1 vector describes the ability of the object to transform depolarized light into polarized light. 4. $$m$$: Small matrix $$m$$. This 3x3 matrix describes the depolarization and retardance properties of the optical object. These four blocks allow us to start analyzing the behavior of a Mueller matrix, as the three basic optical elements fulfill some conditions: 1. Diattenuator: • $$M_{00}=\frac{p_{1}^{2}+p_{2}^{2}}{2}$$. • $$\overrightarrow{D}=\overrightarrow{P}^{T}$$. • $$m=m^T$$ with $$det(m)=0$$. 2. Retarder: • $$M_{00}=1$$. • $$P_{i}=D_{i}=0$$. • $$det(m)=\pm1$$. 3. Depolarizer: • $$m=m^T$$.
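These block conditions can be checked numerically. As a minimal NumPy sketch (the helper name `mueller_from_jones` is ours, and the sign convention for $$S_3$$ embedded in the matrix $$A$$ varies between textbooks), a pure Mueller matrix follows from a Jones matrix through the standard relation $$M=A\,(J\otimes J^{*})\,A^{-1}$$; a horizontal linear polarizer then satisfies the diattenuator condition $$\overrightarrow{D}=\overrightarrow{P}^{T}$$:

```python
import numpy as np

# Change-of-basis matrix between the E (x) E* products and the Stokes basis.
A = np.array([[1, 0, 0, 1],
              [1, 0, 0, -1],
              [0, 1, 1, 0],
              [0, 1j, -1j, 0]])
A_inv = A.conj().T / 2  # A has orthogonal rows of norm sqrt(2)

def mueller_from_jones(J):
    """Pure Mueller matrix equivalent to the Jones matrix J."""
    M = A @ np.kron(J, J.conj()) @ A_inv
    return np.real(M)  # the imaginary part is numerical noise

# Horizontal linear polarizer: p1 = 1, p2 = 0.
M = mueller_from_jones(np.diag([1, 0]))
D = M[0, 1:] / M[0, 0]   # diattenuation vector
P = M[1:, 0] / M[0, 0]   # polarizance vector
print(np.allclose(D, P))  # -> True: diattenuator condition D = P^T
```

For this polarizer, $$M_{00}$$ also comes out as $$(p_1^2+p_2^2)/2=0.5$$, in agreement with the first diattenuator condition above.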
This also explains the reason why diattenuators are commonly referred to as polarizers: due to the condition $$\overrightarrow{D}=\overrightarrow{P}^{T}$$. This means that a diattenuator reduces the depolarization degree of light waves, behaving also as a polarizer. ### 5.2.1.2.4. Examples of characterization of Mueller matrices¶ Again, the parameters class includes many methods to characterize Mueller matrices. [3]: # Create optical elements. We start from Jones matrices because it is easier to create physically realizable random matrices. M_real = np.random.rand(2,2,5) M_imag = np.random.rand(2,2,5) # Create the py_pol object J = Jones_matrix('Random element') J.from_matrix(M=M_real + 1j*M_imag) M = Mueller('Random element') M.from_Jones(J) print(M) Random element = [+1.041 +0.976 +0.309 +0.177] [+1.448 +0.362 +1.302 +0.391] [+1.478 -0.439 +1.074 +0.860] [+0.939 -0.242 +0.590 -0.136] [+1.508 +0.756 +0.654 +0.378] [-0.011 -0.017 -0.012 +0.050] [-0.197 +0.146 -0.160 -0.334] [+0.184 +0.104 +0.345 -0.061] [-0.010 +0.507 +0.091 -0.438] [+0.082 -0.242 -0.128 +1.031] [+0.947 +0.884 +0.303 +0.164] [+1.158 +0.182 +1.178 +0.200] [+1.228 -0.269 +0.890 +0.862] [+0.567 -0.110 +0.873 +0.067] [+1.052 +0.967 +1.053 +0.441] [-0.428 -0.418 -0.082 -0.068] [+0.773 +0.443 +0.633 +0.348] [-0.738 +0.457 -0.586 -0.303] [-0.322 +0.496 -0.187 +0.528] [-0.168 -0.842 +0.659 -0.129] [4]: # Calculate the transmissions of the random matrices trans = M.parameters.transmissions(verbose=True) The intensity transmissions of Random element are: Maximum (int.) [2.0802189 2.85510391 2.92226298 1.59204482 2.57686893] Minimum (int.) [0.00134622 0.04183691 0.03409886 0.28694948 0.43893157] The mean value of param Maximum (int.) is 2.4052999100062467 +- 0.5032041110108076 The mean value of param Minimum (int.) is 0.16063260711030697 +- 0.1725697670100231 The checks subclass also includes some interesting methods to calculate if an optical element fulfills some conditions.
For example, we can calculate if an element is pure (it must be, as it comes from a Jones matrix): [6]: cond = M.checks.is_pure(verbose=True) Random element is pure (non-depolarizing): [ True True True True True] The mean value is 1.0 +- 0.0 Homogeneous diattenuators and retarders are well understood. They are characterized by the transmissions (diattenuators) and retardance (retarders), plus the transmission/fast eigenstate (as the other one is perpendicular). The analysis subclass has two methods to extract those parameters from a homogeneous diattenuator or retarder. For example, we can create the most general diattenuator and compare the calculations with the original values: [7]: # Create random values N = 1000 p1 = np.random.rand(N) * 0.5 + 0.5 p2 = np.random.rand(N) * 0.5 alpha = np.random.rand(N) * 90 * degrees delay = np.random.rand(N) * 360 * degrees # Create the py_pol object M = Mueller('General random diattenuator') M.diattenuator_charac_angles(p1=p1, p2=p2, alpha=alpha, delay=delay) # Analyze it trans, angles = M.analysis.diattenuator(angles='charac', transmissions='field') # Compare the results print('Error in p1:') error = np.linalg.norm(p1 - trans[0]) print(error) print('Error in p2:') error = np.linalg.norm(p2 - trans[1]) print(error) print('Error in alpha:') error = np.linalg.norm(alpha - angles[0])/degrees print(error) print('Error in delay') error = np.linalg.norm(delay - angles[1])/degrees print(error) Error in p1: 1.2008898127460164e-15 Error in p2: 8.076479830048999e-14 Error in alpha: 1.0848826826569258e-12 Error in delay 1.53233195374529e-08 Depolarizers are slightly harder to characterize. The most general depolarizer is easily understood using its division into blocks. First, a depolarizer may present diattenuation ($$D=\left\Vert \overrightarrow{D}\right\Vert$$) or polarizance ($$P=\left\Vert \overrightarrow{P}\right\Vert$$).
That may seem strange, but in some cases a depolarizer can increase the polarization degree of light (also, in some cases a non-depolarizing element may decrease the polarization degree of light). Second, the small m matrix has three 3x1 orthonormal eigenvectors $$\overrightarrow{v_{i}}$$. Those eigenvectors can be transformed into six Stokes vectors in order to describe the principal states as: $$S_{i}=\left[\begin{array}{c} 1\\ \overrightarrow{v_{i}} \end{array}\right]$$. $$S_{i+3}=\left[\begin{array}{c} 1\\ -\overrightarrow{v_{i}} \end{array}\right]$$. Its associated eigenvalues (which are lower than 1) are called its depolarization factors. The principal states are not orthogonal in a polarization sense. Also, in general those states will not be eigenstates of the Mueller matrix of the depolarizer. As the last three states are easily calculated from the first three, only the first ones are usually calculated. We can check this using a random depolarizer. [136]: # Create the random variables N = 5 D = np.random.rand(N) alphaD = np.random.rand(N) * 90 * degrees delayD = np.random.rand(N) * 360 * degrees P = np.random.rand(N) alphaP = np.random.rand(N) * 90 * degrees delayP = np.random.rand(N) * 360 * degrees Pv = np.array([P*np.cos(2*alphaP), P*np.sin(2*alphaP)*np.cos(delayP), P*np.sin(2*alphaP)*np.sin(delayP)]) Dv = np.array([D*np.cos(2*alphaD), D*np.sin(2*alphaD)*np.cos(delayD), D*np.sin(2*alphaD)*np.sin(delayD)]) d1 = np.random.rand(N) d2 = np.random.rand(N) d3 = np.random.rand(N) alpha1 = np.random.rand(N) * 45 * degrees # Alpha can go up to 90º, but then we could have a problem calculating the orthogonal principal states alpha2 = 45*degrees # In order to compare the results, the S1, S2 and S3 3x1 vectors must be orthogonal alpha3 = 45*degrees - alpha1 delay1 = np.random.rand(N) * 360 * degrees delay2 = delay1 - 90*degrees delay3 = delay1 + 180*degrees # Create the py_pol objects S1, S2, S3 = create_Stokes(N=3) S1.general_charac_angles(alpha=alpha1, delay=delay1) S2.general_charac_angles(alpha=alpha2, delay=delay2) S3.general_charac_angles(alpha=alpha3, delay=delay3) M =
Mueller('Random general depolarizer') M.depolarizer_states(d=[d1, d2, d3], S=[S1, S2, S3], Pv=Pv, Dv=Dv) print(M) # Analyze the depolarizer trans_D, trans_P, ang_D, ang_P, depolar, principal_states = M.analysis.depolarizer(angles='Charac', transmissions='Intensity', depolarization='Factors') Random general depolarizer = [+1.000 +0.671 +0.005 -0.060] [+1.000 -0.000 +0.683 +0.174] [+1.000 -0.243 -0.042 +0.653] [+1.000 +0.305 -0.468 -0.575] [+1.000 -0.040 -0.150 +0.019] [+0.955 +0.579 -0.079 -0.063] [-0.050 +0.507 -0.154 +0.091] [-0.101 +0.713 +0.001 -0.019] [-0.038 +0.184 +0.112 -0.049] [-0.002 +0.522 -0.059 -0.034] [+0.157 -0.079 +0.674 +0.312] [+0.076 -0.154 +0.698 -0.229] [+0.051 +0.001 +0.687 -0.000] [-0.706 +0.112 +0.166 +0.007] [+0.019 -0.059 +0.205 +0.016] [+0.175 -0.063 +0.312 +0.529] [+0.026 +0.091 -0.229 +0.445] [+0.041 -0.019 -0.000 +0.689] [-0.617 -0.049 +0.007 +0.179] [+0.027 -0.034 +0.016 +0.187] [137]: print(d1, d2, d3, '\n', sep='\n') print(depolar[0], depolar[1], depolar[2], '\n', sep='\n') [0.94973259 0.91255888 0.67802625 0.29693391 0.20030754] [0.28090059 0.30965405 0.68669381 0.18172246 0.17803357] [0.55135005 0.4280079 0.7239018 0.0507721 0.53658826] [0.94973259 0.4280079 0.7239018 0.29693391 0.53658826] [0.55135005 0.91255888 0.67802625 0.0507721 0.20030754] [0.28090059 0.30965405 0.68669381 0.18172246 0.17803357] [138]: # Diattenuation vector errors Dcalc = (trans_D[0]-trans_D[1])/2 error = D-Dcalc print('The error in the diattenuation is:') print(np.linalg.norm(error)) error = alphaD-ang_D[0] print('The error in the diattenuation alpha is:') print(np.linalg.norm(error)) error = delayD-ang_D[1] print('The error in the diattenuation delay is:') print(np.linalg.norm(error), '\n') The error in the diattenuation is: 1.6653345369377348e-16 The error in the diattenuation alpha is: 2.420669712248692e-16 The error in the diattenuation delay is: 3.198140942662673e-14 [139]: # Polarizance vector errors Pcalc = (trans_P[0]-trans_P[1])/2 error = P-Pcalc print('The error in the
polarizance is:') print(np.linalg.norm(error)) error = alphaP-ang_P[0] print('The error in the polarizance alpha is:') print(np.linalg.norm(error)) error = delayP-ang_P[1] print('The error in the polarizance delay is:') print(np.linalg.norm(error), '\n') The error in the polarizance is: 7.850462293418876e-17 The error in the polarizance alpha is: 2.5476231888929306e-16 The error in the polarizance delay is: 4.1910000110727263e-16 [140]: # First depolarization factor / principal state errorserror1 = d1-depolar[0] error1 = d1-depolar[0] error2 = d1-depolar[1] error3 = d1-depolar[2] error = np.minimum.reduce([np.abs(error1), np.abs(error2), np.abs(error3)]) # Eig algorithms dont give the same order we used, so we have to compare to all print('The error in the first depolarization factor is:') print(np.linalg.norm(error)) error1 = np.linalg.norm(S1.M - principal_states[0].M, axis=0) error2 = np.linalg.norm(S1.M - principal_states[1].M, axis=0) error3 = np.linalg.norm(S1.M - principal_states[2].M, axis=0) S1.M[1:,:] = -S1.M[1:,:] error4 = np.linalg.norm(S1.M - principal_states[0].M, axis=0) error5 = np.linalg.norm(S1.M - principal_states[1].M, axis=0) error6 = np.linalg.norm(S1.M - principal_states[2].M, axis=0) error = np.minimum.reduce([np.abs(error1), np.abs(error2), np.abs(error3), np.abs(error4), np.abs(error5), np.abs(error6)]) print('The error in the first principal state is:') print(np.linalg.norm(error), '\n') The error in the first depolarization factor is: 3.433175098891678e-16 The error in the first principal state is: 6.847141445191268e-15 [141]: # Second depolarization factor / principal state errors error1 = d2-depolar[0] error2 = d2-depolar[1] error3 = d2-depolar[2] error = np.minimum.reduce([np.abs(error1), np.abs(error2), np.abs(error3)]) print('The error in the second depolarization factor is:') print(np.linalg.norm(error)) error1 = np.linalg.norm(S2.M - principal_states[0].M, axis=0) error2 = np.linalg.norm(S2.M - principal_states[1].M, axis=0) 
error3 = np.linalg.norm(S2.M - principal_states[2].M, axis=0)
S2.M[1:,:] = -S2.M[1:,:]
error4 = np.linalg.norm(S2.M - principal_states[0].M, axis=0)
error5 = np.linalg.norm(S2.M - principal_states[1].M, axis=0)
error6 = np.linalg.norm(S2.M - principal_states[2].M, axis=0)
error = np.minimum.reduce([np.abs(error1), np.abs(error2), np.abs(error3), np.abs(error4), np.abs(error5), np.abs(error6)])
print('The error in the second principal state is:')
print(np.linalg.norm(error), '\n')

The error in the second depolarization factor is:
2.3714374201337736e-16
The error in the second principal state is:
1.743906609776931e-14

[142]: # Third depolarization factor / principal state errors
error1 = d3-depolar[0]
error2 = d3-depolar[1]
error3 = d3-depolar[2]
error = np.minimum.reduce([np.abs(error1), np.abs(error2), np.abs(error3)])
print('The error in the third depolarization factor is:')
print(np.linalg.norm(error))
error1 = np.linalg.norm(S3.M - principal_states[0].M, axis=0)
error2 = np.linalg.norm(S3.M - principal_states[1].M, axis=0)
error3 = np.linalg.norm(S3.M - principal_states[2].M, axis=0)
S3.M[1:,:] = -S3.M[1:,:]
error4 = np.linalg.norm(S3.M - principal_states[0].M, axis=0)
error5 = np.linalg.norm(S3.M - principal_states[1].M, axis=0)
error6 = np.linalg.norm(S3.M - principal_states[2].M, axis=0)
error = np.minimum.reduce([np.abs(error1), np.abs(error2), np.abs(error3), np.abs(error4), np.abs(error5), np.abs(error6)])
print('The error in the third principal state is:')
print(np.linalg.norm(error))

The error in the third depolarization factor is:
3.3306690738754696e-16
The error in the third principal state is:
4.443083001703408e-15

Again, most optical elements do not belong to one of the three groups of basic elements, but are a mix of them.
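The eigen-relation behind the principal states can also be verified with plain NumPy, independently of py_pol: a depolarizer's small matrix m is symmetric, so its eigendecomposition returns the depolarization factors and, up to sign and ordering, the eigenvectors that form the principal Stokes states. The sketch below builds an m with known factors and recovers them; the variable names and the fixed factor values are illustrative only.

```python
import numpy as np

# Known depolarization factors (eigenvalues of the small matrix m, all below 1)
d = np.array([0.9, 0.5, 0.2])

# Random orthonormal basis: the columns of Q play the role of the eigenvectors v_i
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

# Symmetric small matrix m = Q diag(d) Q^T
m = Q @ np.diag(d) @ Q.T

# Eigendecomposition recovers the factors and eigenvectors (up to sign and order)
vals, vecs = np.linalg.eigh(m)          # eigh returns eigenvalues in ascending order
vals, vecs = vals[::-1], vecs[:, ::-1]  # reorder descending to match d

# Principal states as Stokes vectors: S_i = [1, +/- v_i]
S = np.vstack([np.ones(3), vecs])       # columns are the first three principal states

print(np.allclose(vals, d))                                   # True
print(np.allclose(np.abs(vecs.T @ Q), np.eye(3), atol=1e-8))  # True: same vectors up to sign
```

This is also why the error checks above take the minimum over all orderings and both signs: the eigensolver fixes neither.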
In the case of non-pure optical elements, the polar decomposition theorem states that every Mueller matrix can be decomposed into the product of a pure homogeneous diattenuator $$M_D$$, a pure homogeneous retarder $$M_R$$ and a depolarizer $$M_P$$: $$M = M_P M_R M_D$$. There are six possible decompositions, obtained by altering the order of the elements. This decomposition allows analyzing the three elements separately. The method decompose_polar of the analysis class calculates the polar decomposition of Mueller matrices. If only pure matrices are analyzed, use decompose_pure instead. For example, let us build the most general element:

[51]: # Create the diattenuator
N = 5
p1 = np.random.rand(N)
p2 = np.random.rand(N)
alpha = np.random.rand(N) * 90*degrees
delay = np.random.rand(N) * 360*degrees
Md = Mueller('Diattenuator')
Md.diattenuator_charac_angles(p1=p1, p2=p2, alpha=alpha, delay=delay)
# Create the retarder
R = np.random.rand(N) * 180*degrees
alpha = np.random.rand(N) * 90*degrees
delay = np.random.rand(N) * 360*degrees
Mr = Mueller('Retarder')
Mr.retarder_charac_angles(R=R, alpha=alpha, delay=delay)
# Create the depolarizer
P = np.random.rand(N)
alphaP = np.random.rand(N) * 90 * degrees
delayP = np.random.rand(N) * 360 * degrees
Pv = np.array([P*np.cos(2*alphaP), P*np.sin(2*alphaP)*np.cos(delayP), P*np.sin(2*alphaP)*np.sin(delayP)])
d1 = np.random.rand(N)
d2 = np.random.rand(N)
d3 = np.random.rand(N)
alpha1 = np.random.rand(N) * 90 * degrees
alpha2 = np.random.rand(N) * 90 * degrees
alpha3 = np.random.rand(N) * 90 * degrees
delay1 = np.random.rand(N) * 360 * degrees
delay2 = np.random.rand(N) * 90 * degrees
delay3 = np.random.rand(N) * 90 * degrees
S1, S2, S3 = create_Stokes(N=3)
S1.general_charac_angles(alpha=alpha1, delay=delay1)
S2.general_charac_angles(alpha=alpha2, delay=delay2)
S3.general_charac_angles(alpha=alpha3, delay=delay3)
Mp = Mueller('Depolarizer')
Mp.depolarizer_states(d=[d1, d2, d3], S=[S1, S2, S3], Pv=Pv)
# Calculate the product
M = Mp * Mr * Md
M.name = 'General matrix'
print(M)

General matrix =
[+0.698 +0.104 +0.013 +0.152]
[+0.263 -0.016 +0.026 -0.021]
[+0.119 +0.042 -0.005 +0.021]
[+0.060 -0.013 -0.027 -0.001]
[+0.146 +0.078 -0.068 -0.103]
[-0.650 -0.676 -0.875 +0.433]
[+0.113 +0.203 -0.055 -0.090]
[-0.029 -0.057 +0.045 -0.044]
[-0.008 +0.022 -0.017 -0.012]
[-0.073 -0.034 +0.026 +0.061]
[+0.021 -0.047 +0.406 -0.358]
[+0.055 +0.009 +0.097 -0.019]
[+0.031 +0.056 +0.047 +0.083]
[-0.009 -0.019 +0.015 +0.003]
[-0.063 -0.030 +0.028 +0.047]
[-0.161 +0.160 +0.321 -0.270]
[+0.034 -0.117 +0.153 +0.048]
[+0.037 +0.097 -0.017 +0.062]
[+0.019 -0.032 -0.011 -0.023]
[+0.009 +0.006 -0.005 -0.005]

[56]: Mr, Md, Mp = M.analysis.decompose_polar(verbose=True)

------------------------------------------------------
Polar decomposition of General matrix as M = .

Analysis of Diattenuator of General matrix as diattenuator:

- Transmissions of Diattenuator of General matrix are:
Max. transmission [0.88198152 0.30022504 0.16590048 0.08914879 0.29243019]
Min. transmission [5.13179191e-01 2.25205746e-01 7.11392932e-02 2.99717743e-02 4.59311831e-04]
p1 [0.93913871 0.54792795 0.40730882 0.29857794 0.54076814]
p2 [0.71636526 0.47455847 0.2667195  0.17312358 0.02143156]
The mean value of param Max. transmission is 0.34593720239581227 +- 0.27951730727596963
The mean value of param Min. transmission is 0.1679910631739822 +- 0.18916915204292745
The mean value of param p1 is 0.5467443149886063 +- 0.21681295261919178
The mean value of param p2 is 0.3304396766952467 +- 0.24248852187170236

- Angles of Diattenuator of General matrix are:
Alpha [27.84607161 58.04488939 13.84981514 57.53645919 28.88866877]
Delay [ 85.0078597  321.07323849 104.32877719 182.56334098 236.40447466]
Azimuth [  3.63378343  61.09400437 176.2984921  122.45252888 159.3600951 ]
Ellipticity angle [ 27.68745459 -17.17698052  13.383905    -1.16082806 -22.40157465]
The mean value of param Alpha is 37.23318082239673 +- 17.606139936486752
The mean value of param Delay is 185.87553820419188 +- 86.79484435277902
The mean value of param Azimuth is 104.56778077503436 +- 64.10147880568259
The mean value of param Ellipticity angle is 0.06639527144997573 +- 18.675806499471012

Analysis of Retarder of General matrix as retarder:

- Retardance of Retarder of General matrix is:
[170.92392209  49.99121033 104.18931098 154.67147499  76.74592979]
The mean value is 111.30436963607733 +- 45.69334785299364

- Angles of Retarder of General matrix are:
Alpha [57.84385169 88.76445186 32.42385232 19.59367553 40.47534568]
Delay [343.73653633 127.82768821  24.72208126 202.23867174 224.86760839]
Azimuth [ 58.3066556   90.75804107  31.33220352 161.4813861  141.3325167 ]
Ellipticity angle [ -7.30910264   0.97579464  11.12232958  -6.91779386 -22.08103099]
The mean value of param Alpha is 47.820235416254064 +- 23.935899584207476
The mean value of param Delay is 184.67851718720647 +- 105.86883741774207
The mean value of param Azimuth is 96.64216059779035 +- 48.930516839740804
The mean value of param Ellipticity angle is -4.841960654879352 +- 10.920496863130417

Analysis of Depolarizer of General matrix as depolarizer:

- Depolarization index of Depolarizer of General matrix is:
[       nan 0.63779108 0.38840727 0.77017126 0.63303606]
The mean value is nan +- nan

- First depolarization factor of Depolarizer of General matrix is:
[2.00466434 1.13485499 1.43749032 0.03901074 1.23742444]
The mean value is 1.1706889647488454 +- 0.6408012474870327
The alpha of First principal state is (deg):
[75.60609681 70.0309573  57.81949517 29.11226629  9.88341614]
The mean value is 48.49044634139649 +- 25.10896217603074
The delay of First principal state is (deg):
[ 42.82915408  79.4028255   49.97008913 344.53020389  20.72726024]
The mean value is 107.49190656698616 +- 119.99523804284395
The azimuth of First principal state is (deg):
[79.0258232  85.62332729 63.36544348 28.63519594  9.2889677 ]
The mean value is 53.18775152168685 +- 29.50657263363871
The ellipticity angle of First principal state is (deg):
[ 9.55480462 19.56235539 21.82755508 -6.55297175  3.43720101]
The mean value is 9.56578887119095 +- 10.4657193387497

- Second depolarization factor of Depolarizer of General matrix is:
[0.44871276 0.47482309 0.15969467 0.62848492 0.24025284]
The mean value is 0.39039365727810144 +- 0.16910422117889026
The alpha of Second principal state is (deg):
[34.13047619 60.35743778 23.17816768 24.71585255 35.16369238]
The mean value is 35.509125315776934 +- 13.326709141628355
The delay of Second principal state is (deg):
[359.35355611 214.60405719 112.72777955 106.5575901  194.90213788]
The mean value is 197.62902416633068 +- 91.59982459282895
The azimuth of Second principal state is (deg):
[ 34.12984885 117.08919412 168.97375487 170.7946478  145.15111827]
The mean value is 127.22771278268712 +- 50.46198599171677
The ellipticity angle of Second principal state is (deg):
[ -0.30023365 -14.61237074  20.9354951   23.36502129  -7.00698018]
The mean value is 4.476186365617659 +- 15.144287657272631

- Third depolarization factor of Depolarizer of General matrix is:
[0.01020537 0.03828233 0.66805179 0.84294551 0.00173165]
The mean value is 0.31224332974003594 +- 0.3663177663524229
The alpha of Third principal state is (deg):
[36.03739245 56.44250227 62.72633067 61.59735072 45.92607924]
The mean value is 52.54593107031307 +- 10.169624245433413
The delay of Third principal state is (deg):
[ 96.76436491 319.12759616 159.95373118  50.62251875 105.56455357]
The mean value is 146.40655291529325 +- 93.08392101591284
The azimuth of Third principal state is (deg):
[169.9960686   59.5855539  116.41952122  67.94063631 131.56402011]
The mean value is 109.1011600280975 +- 41.01783111508235
The ellipticity angle of Third principal state is (deg):
[ 35.44011083 -18.53782158   8.10700669  20.15202588  37.16416867]
The mean value is 16.465098099476872 +- 20.480030006449336

- Depolarizer of General matrix has no diattenuation.

- Transmissions of Depolarizer of General matrix from polarizance are:
Max. transmission [1.9983297  1.51463509 1.09615282 1.33680321 1.45700371]
Min. transmission [0.0016703  0.48536491 0.90384718 0.66319679 0.54299629]
p1 [1.4136229  1.23070512 1.04697317 1.15620206 1.20706409]
p2 [0.04086927 0.69668136 0.95070878 0.81436895 0.73688282]
The mean value of param Max. transmission is 1.4805849044509007 +- 0.2960528227690688
The mean value of param Min. transmission is 0.5194150955490994 +- 0.29605282276906886
The mean value of param p1 is 1.2109134669351374 +- 0.119471670474747
The mean value of param p2 is 0.6479022362064927 +- 0.31565453880425265

- Angles of Depolarizer of General matrix from diattenuation are:
Alpha [45. 45. 45. 45. 45.]
Delay [0. 0. 0. 0. 0.]
Azimuth [45. 45. 45. 45. 45.]
Ellipticity angle [0. 0. 0. 0. 0.]
The mean value of param Alpha is 45.0 +- 0.0
The mean value of param Delay is 0.0 +- 0.0
The mean value of param Azimuth is 45.0 +- 0.0
The mean value of param Ellipticity angle is 0.0 +- 0.0

General matrix decomposition mean square error:
[0.16008588 0.1076687  0.12040788 0.12280425 0.1216883 ]
The mean value is 0.12653099945189955 +- 0.017643974691655174
------------------------------------------------------

d:\codigo_ucm\py_pol\py_pol\mueller.py:3757: RuntimeWarning: invalid value encountered in sqrt
  DI = sqrt(1. - PP**2)
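The diattenuator factor of the decomposition is determined by the first row of M alone: the diattenuation vector is $$\overrightarrow{D} = (m_{01}, m_{02}, m_{03})/m_{00}$$, and the extreme intensity transmissions follow from its modulus. A minimal pure-NumPy sketch (not a py_pol call) illustrates this first step with the textbook Mueller matrix of an ideal horizontal linear polarizer:

```python
import numpy as np

# Mueller matrix of an ideal horizontal linear polarizer (textbook form)
M = 0.5 * np.array([[1, 1, 0, 0],
                    [1, 1, 0, 0],
                    [0, 0, 0, 0],
                    [0, 0, 0, 0]])

# Diattenuation vector and scalar diattenuation from the first row
Dv = M[0, 1:] / M[0, 0]
D = np.linalg.norm(Dv)
print(D)  # 1.0: an ideal polarizer is a perfect diattenuator

# Maximum and minimum intensity transmissions
Tmax = M[0, 0] * (1 + D)
Tmin = M[0, 0] * (1 - D)
print(Tmax, Tmin)  # 1.0 0.0
```

These are the same Max./Min. transmission and p1/p2 quantities reported in the verbose analysis above, here computed by hand for the simplest possible element.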